2310.02700
Insights of using Control Theory for minimizing Induced Seismicity in Underground Reservoirs
Deep Geothermal Energy, Carbon Capture and Storage, and Hydrogen Storage have significant potential to meet the large-scale needs of the energy sector and reduce CO$_2$ emissions. However, the injection of fluids into the earth's crust, upon which these activities rely, can lead to the formation of new seismogenic faults or the reactivation of existing ones, thereby causing earthquakes. In this study, we propose a novel approach based on control theory to address this issue. First, we obtain a simplified model of induced seismicity due to fluid injections in an underground reservoir using a diffusion equation in three dimensions. Then, we design a robust tracking control approach to force the seismicity rate to follow desired references. In this way, the induced seismicity is minimized while ensuring fluid circulation for the needs of renewable energy production and storage. The designed control guarantees the achievement of the control objectives even in the presence of system uncertainties and unknown dynamics. Finally, we present simulations of a simplified geothermal reservoir under different scenarios of energy demand to show the reliability and performance of the control approach, opening new perspectives for field experiments based on real-time regulators.
Diego Gutierrez-Oribio, Ioannis Stefanou
2023-10-04T10:13:44Z
http://arxiv.org/abs/2310.02700v3
# Robust Tracking for a 3D Diffusion Equation: Controlling Seismicity Rate in Geothermal Reservoirs

###### Abstract

Deep Geothermal Energy has significant potential to meet the large-scale needs of the energy sector. However, the injection of fluids into the earth's crust, upon which it relies, can lead to the formation of new seismogenic faults or the reactivation of existing ones, thereby causing earthquakes. To date, no effective method exists for mitigating these human-induced earthquakes. In this study, we propose a novel approach based on control theory to address this issue. First, we model induced seismicity resulting from fluid injections in a geothermal reservoir using a diffusion equation in three dimensions. Then, we design a robust tracking control approach to force the seismicity rate to follow desired references. In this way, the induced seismicity is minimized while ensuring fluid circulation for the needs of energy production. The designed control guarantees the stabilization of the error variable even in the presence of system uncertainties and unknown dynamics. Finally, we present simulations of a geothermal reservoir under different scenarios of intermittent energy demand to show the reliability and performance of the control approach, opening new perspectives for field experiments based on real-time regulators for the first time.

Power Generation, Sliding-mode Control, Partial Differential Equations, Deep Geothermal Energy, Earthquake Prevention.

## 1 Introduction

Deep Geothermal Energy presents a promising potential for meeting the energy sector's large-scale demands. However, its effectiveness relies on the injection of fluids into the Earth's crust, which can potentially trigger earthquakes [1, 2, 3]. The occurrence of induced seismicity poses a significant threat to the feasibility of projects utilizing these techniques. This issue is the reason for the closure of several geothermal plants in the world, _e.g._, in Alsace, France, in 2020 [4, 5], Pohang, South Korea, in 2019 [6, 7], and Basel, Switzerland, in 2009 [8, 9].

Earthquakes initiate when there is a sudden release of significant elastic energy stored within the Earth's crust due to abrupt sliding along fault lines [10, 11]. The injection of fluids can lead to the formation of new seismogenic faults or the reactivation of existing ones, which cause earthquakes [2, 3, 12]. More specifically, elevated fluid pressures at depth amplify the amount of elastic energy accumulated in the Earth's crust while reducing the friction along faults. As a result, the likelihood of earthquakes significantly increases, even in regions that are typically considered to have low seismic potential (see [2, 7], and [12], among others). Therefore, earthquake prevention strategies are necessary to mitigate induced seismicity in the energy sector.

Traffic Light Systems are the most widely used approaches for earthquake prevention [13]. They operate based on a set of empirical rules that determine whether fluid injections should continue (green light) or cease (red light). However, these rules and the subsequent adjustment of injection rates are subjective in nature, often influenced by experts' opinions and political considerations. Consequently, they do not guarantee earthquake prevention. Another approach is cyclic stimulation [14], where fluid injections into a reservoir occur with cyclically varying injection rates.
A third technique, known as fracture caging [15], involves the placement of a series of production wells around geothermal injection wells to create a kind of barrier around them. Nevertheless, all of these methods rely on trial and error rather than a systematic control approach. They lack a mathematical foundation and cannot guarantee the avoidance of induced seismicity. Worse, there is no proof that these methods cannot themselves trigger earthquakes of greater magnitude than the ones they are supposed to mitigate [16]. Numerous instances worldwide provide evidence of their shortcomings, as previously mentioned [2, 7, 12].

More recently, significant progress has been made in controlling the earthquake instability of specific, well-defined, mature seismic faults [17, 18, 19, 20, 21, 22]. These studies have employed various control algorithms to stabilize the complex and uncertain nature of the underlying underactuated physical system. The designed controllers effectively stabilize the system and modify its natural response time. As a result, the energy dissipation within the system occurs at a significantly slower rate, orders of magnitude lower than that of a real earthquake event. However, it is worth noting that these investigations did not consider factors such as the presence of multiple smaller faults, which are typically found in deep geothermal reservoirs, as well as fluid circulation constraints associated with energy production. Regarding the controllability, observability and parameter identification of geological reservoirs, we refer to [23, 24, 25].

This study accounts for water injections into a geological reservoir using a 3D diffusion equation. By utilizing this Partial Differential Equation (PDE), a robust tracking control strategy is developed to regulate the seismicity rate, ensuring tracking of a desired reference. The primary control objective is to prevent induced seismicity while maintaining energy production. The designed control scheme demonstrates resilience against system uncertainties and unmodelled dynamics. Simulations of the process are conducted to validate the effectiveness of the control approach and show the potential for optimizing underground fluid circulation and thus energy production. Various simulation scenarios, considering different energy demands and constraints, are presented to provide a comprehensive understanding of the system's behaviour and the reliability of the control strategy. For the first time, this research opens new perspectives for field applications at the kilometric scale based on real-time regulators and control theory.

The structure of this paper can be outlined as follows. In Section 2, a motivating example of an idealized geothermal reservoir is presented, showing how the seismicity rate increases when we inject fluids. Section 3 introduces the underlying 3D diffusion model and explains how the seismicity rate is considered as the desired output to be tracked. The design of a robust output feedback control for the error system is presented in Section 4. To demonstrate the effectiveness of the proposed control strategy, simulations are conducted in Section 5, considering different scenarios of intermittent energy demand and production constraints. Finally, concluding remarks are provided in Section 6, summarizing the key findings of the study.
## 2 Example of Induced Seismicity in a Reservoir Due to Fluid Injections

Consider an underground reservoir at approximately \(4\) [km] below the earth's surface, as depicted in Fig. 1. The reservoir is made of a porous rock which allows the circulation of fluids through its pores and cracks. In our example, the reservoir has a thickness of \(\sim 100\) [m] and covers horizontally a square surface of dimensions \(\sim 5\) [km] \(\times 5\) [km]. Wells are injecting and/or extracting fluids (_e.g._ water) at different injection points in the reservoir, as shown in Fig. 1. For the sake of simplicity, injection of fluids will refer to both injection and fluid extraction from the reservoir.

Pumping fluids in-depth causes the circulation of fluids in the reservoir, which, in turn, causes the host porous rock to deform. The hydro-mechanical behaviour of the reservoir due to the injection of fluids at depth can be described by Biot's theory [26]. According to this theory, the diffusion of the fluid and the deformation of the host porous rock are coupled dynamic processes. However, if the injection rates are slow enough, with respect to the characteristic times of the system due to inertia, and if the volumetric strain rate of the host porous rock is negligible, then the diffusion of the fluid in the host rock due to fluid injections can be described by the following diffusion equation [27] \[u_{t}=-\frac{1}{\beta}\nabla\cdot q+r, \tag{1}\] where \(u=u(x,t)\) is the change of the fluid pressure in the reservoir due to fluid injections, \(x\) is the spatial coordinate, \(t\geq 0\) is the time, \(u_{t}\) denotes the partial derivative of \(u\) with respect to time, \(q=-\frac{k}{\eta}\nabla u\) is the change of the hydraulic flux and \(r\) is a source/sink term representing the fluid injections. Furthermore, \(k\) is the permeability of the host rock, \(\eta\) is the dynamic viscosity of the fluid, and \(\beta\) is the compressibility of the rock-fluid mixture. All these parameters are assumed constant in the following examples and, thus, they define a simple expression for the hydraulic diffusivity of the system, \(c_{hy}=k/(\eta\beta)\), which will be used in the following sections. Finally, the reservoir has volume \(V\). We consider drained boundary conditions at the boundary of the reservoir, _i.e._, \(u=0\) at \(\partial V\). Furthermore, we assume point source terms, as the diameter of the wells is negligible compared to the size of the reservoir. In particular, we set \(r=\frac{1}{\beta}\mathcal{B}(x)Q(t)\), where \(Q(t)\in\Re^{m}\), \(Q(t)=[Q_{1}(t),...,Q_{m}(t)]^{T}\), are injection fluxes applied at the injection points, \((x^{1},...,x^{m})\), through the coefficient \(\mathcal{B}(x)\in\Re^{1\times m}\), \(\mathcal{B}(x)=[\delta(x-x^{1}),...,\delta(x-x^{m})]\). The terms \(\delta(x-x^{i})\) are Dirac's distributions and \(m\) is the number of the wells in the reservoir. For the rigorous statement of the mathematical problem and its control we refer to the next section and to Appendix A.

It is nowadays well established that the injection of fluids in the earth's crust causes the creation of new and the reactivation of existing seismic faults, which are responsible for notable earthquakes (see for instance [2], [12] and [7]). The physical mechanism behind those human-induced seismic events is connected with the change of stresses in the host rock due to the injections, which intensify the loading and/or reduce the friction over existing or new discontinuities (faults).
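To make the orders of magnitude concrete, the following minimal sketch integrates the depth-averaged form of (1) with an explicit finite-difference scheme, a single point source and drained (\(u=0\)) boundaries. The diffusivity, compressibility, grid and injection values are illustrative assumptions, not the values of Table 1; the spectral scheme actually used in this paper is described in Appendix C.

```python
import numpy as np

# Explicit finite-difference sketch of the depth-averaged diffusion equation
# u_t = c_hy*(u_xx + u_yy) + (1/beta)*delta(x - x_inj)*Q/H, with u = 0 on the
# boundary (drained). All parameter values below are assumed, for illustration.
c_hy = 1.0e-2                      # hydraulic diffusivity k/(eta*beta) [m^2/s]
beta = 1.0e-9                      # rock-fluid compressibility [1/Pa]
L, n = 5000.0, 101                 # 5 km x 5 km reservoir, grid points per side
dx = L / (n - 1)
dt = 0.2 * dx**2 / c_hy            # respects the explicit stability limit
Q = 0.32 / 3600.0                  # injection rate [m^3/s]
H = 100.0                          # reservoir thickness [m]

u = np.zeros((n, n))               # fluid pressure change [Pa]
src = np.zeros((n, n))
src[n // 2, n // 2] = Q / (H * dx**2)   # point source smeared over one cell

for _ in range(2000):              # roughly three years of injection
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    u += dt * (c_hy * lap + src / beta)
    u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0   # drained boundary, u = 0

print("peak pressure change [Pa]:", u.max())
```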
In other words, fluid injections increase the seismicity rate in a region, _i.e._, the number of earthquakes in a given time window. The seismicity rate, \(\bar{R}\), of a given region depends on the average stress change rate, \(\dot{\bar{\tau}}\), over the same region according to the following expression \[\dot{\bar{R}}=\frac{\bar{R}}{t_{a}}\left(\frac{\dot{\bar{\tau}}}{\dot{\bar{\tau}}_{0}}-\bar{R}\right), \tag{2}\] where \(\dot{(\;)}\) denotes the time derivative, \(t_{a}\) is a characteristic decay time and \(\dot{\bar{\tau}}_{0}\) is the background stress change rate in the region, _i.e._ the stress change rate due to various natural tectonic processes, and is considered constant. The above equation coincides with the one of Segall and Lu [28] (see also [29]), with the difference that here the seismicity rate is defined region-wise rather than point-wise. This choice results in a more general and convenient formulation, as we mainly focus on averages over large volumes rather than point-wise measurements of the seismicity rate, which can also be singular due to point sources. Following Segall and Lu [28], we also assume that the stress change rate is a linear function of the pore fluid pressure change rate, _i.e._, \(\dot{\bar{\tau}}=\dot{\bar{\tau}}_{0}+f\dot{\bar{u}}\), where \(\dot{\bar{u}}\) is the average fluid pressure change rate over a given region of the reservoir and \(f\) a (mobilized) friction coefficient. The latter linear hypothesis is justified on the basis of Coulomb friction over the fault planes and Terzaghi's effective stress principle [30]. In the absence of fluid injections, \(\dot{\bar{\tau}}=\dot{\bar{\tau}}_{0}\) and, therefore, \(\bar{R}=1\). In this case, the seismicity rate of the region reduces to the natural one. If, on the contrary, fluids are injected into the reservoir, then \(\dot{\bar{u}}>0\) and, consequently, \(\dot{\bar{\tau}}>\dot{\bar{\tau}}_{0}\), leading to an increase of the seismicity rate (\(\bar{R}>1\)) over the region.

To illustrate this mechanism, let us consider an injection of \(Q=Q_{s_{1}}=0.32\) [m\({}^{3}\)/hr] through a single injection well. In this numerical example, we consider the parameters of Table 1, we depth average Equation (1) as shown in Appendix B, and we integrate the resulting partial differential equation in time and space using a spectral decomposition method as explained in Appendix C. We then calculate the seismicity rate over two distinct regions, one close to the injection point and one in the surroundings. Fig. 2 shows the location of the regions and of the injection point. In Fig. 3 (left) we plot the seismicity rate in both regions as a function of time. We observe that the maximum seismicity rate over \(V_{1}\) is equal to \(R_{1}=45.91\), which means that \(45.91\) times more earthquakes of a given magnitude in a given time window are expected over region \(V_{1}\) than without injections. The seismicity is even higher near the injection well (see region \(V_{2}\), Fig. 3 right).

In the case of an Enhanced Geothermal System [31], we would like to increase the permeability between two wells by creating a small network of cracks that would facilitate the circulation of fluids between them. The creation of those cracks would be associated with a localized microseismicity in the region encircling the wells. This microseismicity is welcome, provided that the overall seismicity rate over the larger region of the reservoir remains close to one.
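The rate law (2), together with the linear stressing assumption \(\dot{\bar{\tau}}=\dot{\bar{\tau}}_{0}+f\dot{\bar{u}}\), can be integrated directly once an average pressure-rate history over a region is prescribed. The following sketch does so for an assumed decaying injection transient; \(t_{a}\), \(\dot{\bar{\tau}}_{0}\), \(f\) and the pressure-rate signal are illustrative, not the Table 1 values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Seismicity-rate law (2) driven by a prescribed average pressure rate
# u_dot(t). R = 1 corresponds to the natural (background) seismicity.
t_a = 30 * 24 * 3600.0            # characteristic decay time [s] (assumed)
tau0_dot = 1.0e-4                 # background stressing rate [Pa/s] (assumed)
f = 0.5                           # mobilized friction coefficient (assumed)
u_dot = lambda t: 5.0e-4 * np.exp(-t / (10 * 24 * 3600.0))   # [Pa/s]

def rhs(t, R):
    tau_dot = tau0_dot + f * u_dot(t)        # linear stressing assumption
    return R / t_a * (tau_dot / tau0_dot - R)

sol = solve_ivp(rhs, [0.0, 365 * 24 * 3600.0], [1.0], method="RK23")
print("peak seismicity rate:", sol.y[0].max())
```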
Therefore, in the control problem addressed in this work, we will set as control objective the controlled increase of the seismicity rate in a small region surrounding some wells (_e.g._, in region \(V_{2}\), see Fig. 2), while keeping constant and equal to one the seismicity rate over the larger area of the reservoir (_e.g._, in region \(V_{1}\), see Fig. 2). For this purpose, additional wells will be added in the reservoir, whose fluxes will be controlled by a specially designed controller. This controller will be robust to uncertainties and will achieve the aforementioned control objective under intermittent production rates.

## 3 Problem Statement

### Notation

If \(y_{e}\in\Re\), the function \(\lceil y_{e}\rfloor^{\gamma}=|y_{e}|^{\gamma}\,\mathrm{sign}(y_{e})\) denotes the signed power \(\gamma\geq 0\) of \(y_{e}\); for a vector \(y_{e}\in\Re^{m_{c}}\), it is applied component-wise. \(||\cdot||\) denotes the \(L_{2}\)-norm, _i.e._, \(||u(x,t)||^{2}=\int_{V}u(x,t)^{2}\,dV\) for \(u(x,t)\in H^{1}(V)\). Dirac's distribution, \(\delta(x-x^{*})\), is defined through the sifting property \(\int_{V^{*}}\phi(x)\delta(x-x^{*})\,dV=\phi(x^{*})\), \(\forall\ x^{*}\in V\), \(V^{*}\subset V\), on an arbitrary test function \(\phi(x)\in H^{1}(V)\). For later use, Poincaré's inequality is recalled: for \(u(x)\in H^{1}(V)\) on a bounded subset \(V\) of the space \(\Re^{3}\) of zero-trace (_i.e._, \(u(x,t)=0\) for all \(x\in\partial V\)), the inequality \(\left|\left|u(x)\right|\right|^{2}\leq\gamma\left|\left|\nabla u(x)\right|\right|^{2}\), with \(\gamma>0\), is fulfilled.

### Statement of the control problem

The coupled system (1)-(2) is written as follows \[\begin{split} u_{t}(x,t)&=c_{hy}\nabla^{2}u(x,t)+\frac{1}{\beta}\left[\mathcal{B}_{s}(x)Q_{s}(t)+\mathcal{B}_{c}(x)Q_{c}(t)\right],\\ u(x,t)&=0\quad\forall\quad x\in S,\end{split} \tag{3}\] where \(u(x,t)\) is the fluid pressure change evolving in the space \(H^{1}(V)\), \(x\in\Re^{3}\), \(x=[x_{1},x_{2},x_{3}]^{T}\), is the space variable belonging to a bounded subset \(V\) of the space \(\Re^{3}\) with boundary \(S=\partial V\), and \(t\geq 0\) is the time variable. As mentioned above, \(c_{hy}\) is the hydraulic diffusivity and \(\beta\) is the compressibility of the rock-fluid mixture.
\(Q_{s}(t)\in\Re^{m_{s}}\), \(Q_{s}(t)=[Q_{s_{1}}(t),...,Q_{s_{m_{s}}}(t)]^{T}\), are fixed (not controlled) fluxes applied at the injection points, \((x_{s}^{1},...,x_{s}^{m_{s}})\), through the coefficient \(\mathcal{B}_{s}(x)\in\Re^{1\times m_{s}}\), \(\mathcal{B}_{s}(x)=[\delta(x-x_{s}^{1}),...,\delta(x-x_{s}^{m_{s}})]\), and \(Q_{c}(t)\in\Re^{m_{c}}\), \(Q_{c}(t)=[Q_{c_{1}}(t),...,Q_{c_{m_{c}}}(t)]^{T}\), are the controlled fluxes applied at the injection points, \((x_{c}^{1},...,x_{c}^{m_{c}})\), through the coefficient \(\mathcal{B}_{c}(x)\in\Re^{1\times m_{c}}\), \(\mathcal{B}_{c}(x)=[\delta(x-x_{c}^{1}),...,\delta(x-x_{c}^{m_{c}})]\). Note that the number of original inputs, \(m\), in system (1) is equal to the sum of the uncontrolled and controlled inputs of system (3), _i.e._, \(m=m_{s}+m_{c}\) (\(\mathcal{B}(x)Q(t)=\mathcal{B}_{s}(x)Q_{s}(t)+\mathcal{B}_{c}(x)Q_{c}(t)\)). Since the right-hand side of (1) contains Dirac's distributions, the above boundary value problem (BVP) is interpreted in the weak sense (see Appendix A for the definition of the weak solution).

The seismicity rate (SR) in equation (2) is defined over \(m_{c}\) regions, \(V_{i}\subset V\), \(i\in[1,m_{c}]\), as follows \[\begin{split}\dot{h}_{i}&=\frac{f}{t_{a}\dot{\bar{\tau}}_{0}V_{i}}\int_{V_{i}}u_{t}(x,t)\,dV-\frac{1}{t_{a}}(e^{h_{i}}-1),\\ R_{i}&=e^{h_{i}},\quad i\in[1,m_{c}],\quad V=\sum_{i=1}^{m_{c}}V_{i}.\end{split} \tag{4}\]

The objective of this work is to design the control input \(Q_{c}\) driving the output \(y\in\Re^{m_{c}}\), defined as \[y=[h_{1},...,h_{m_{c}}]^{T}, \tag{5}\] of the underlying BVP (3)-(4) to desired references \(r(t)\in\Re^{m_{c}}\), \(r(t)=[r_{1}(t),...,r_{m_{c}}(t)]^{T}\).

**Remark 1**: _Note that solving the last tracking problem results in solving the tracking for the SR system (2), i.e., \(R_{i}(t)\) will be forced to follow the desired reference \(\bar{r}_{i}(t)=e^{r_{i}(t)}\) for \(i\in[1,m_{c}]\)._

The control design will be performed under the following assumptions for system (3)-(4):

**Assumption 1**: _The diffusion system (3) fulfils_ \[\sum_{i=1}^{m_{c}}\frac{1}{V_{i}^{2}}\int_{V_{i}}\left[\nabla^{2}u_{t}(x,t)\right]^{2}\,dV\leq L_{u}, \tag{6}\] _with known constant \(L_{u}\geq 0\)._

**Assumption 2**: _The fixed, not controlled flux input, \(Q_{s}(t)\), in system (3) fulfils_ \[\left|\left|\dot{Q}_{s}(t)\right|\right|\leq L_{Q}, \tag{7}\] _with known constant \(L_{Q}\geq 0\)._

**Assumption 3**: _The references to be followed, \(r_{i}(t)\), are designed to fulfil_ \[\left|\left|\dot{r}_{i}(t)\right|\right|\leq L_{\dot{r}_{i}},\quad\left|\left|\ddot{r}_{i}(t)\right|\right|\leq L_{\ddot{r}_{i}}, \tag{8}\] _with known constants \(L_{\dot{r}_{i}}\geq 0\), \(L_{\ddot{r}_{i}}\geq 0\)._

**Assumption 4**: _All the parameters of the system (3)-(4) are uncertain and only nominal values are known (e.g., a nominal value, \(f_{0}\), is known for the parameter \(f\))._

**Remark 2**: _Assumption 1 is feasible due to energy conservation in the realistic system (3). Furthermore, Assumptions 2-4 are easily met in control applications._

## 4 Output Feedback Tracking Control Design

We define the error variable, \(y_{e}\in\Re^{m_{c}}\), as follows \[y_{e}=y-r. \tag{9}\] Substituting (4) yields \[\dot{y}_{e_{i}}=\frac{f}{t_{a}\dot{\bar{\tau}}_{0}V_{i}}\int_{V_{i}}u_{t}(x,t)\,dV-\frac{1}{t_{a}}(e^{y_{e_{i}}+r_{i}}-1)-\dot{r}_{i},\] for \(i\in[1,m_{c}]\).
Using the 3D diffusion equation (3) and the divergence theorem, the error dynamics becomes \[\begin{split}\dot{y}_{e_{i}}&=\frac{c_{hy}f}{t_{a}\dot{\bar{\tau}}_{0}V_{i}}\int_{V_{i}}\nabla^{2}u(x,t)\,dV+\frac{f}{t_{a}\dot{\bar{\tau}}_{0}\beta V_{i}}\int_{V_{i}}\left[\mathcal{B}_{s}(x)Q_{s}(t)+\mathcal{B}_{c}(x)Q_{c}(t)\right]\,dV\\ &\quad-\frac{1}{t_{a}}(e^{y_{e_{i}}+r_{i}}-1)-\dot{r}_{i}\\ &=\frac{c_{hy}f}{t_{a}\dot{\bar{\tau}}_{0}V_{i}}\int_{V_{i}}\nabla^{2}u(x,t)\,dV+\frac{f}{t_{a}\dot{\bar{\tau}}_{0}\beta V_{i}}\sum_{j=1}^{m_{s}}\int_{V_{i}}\delta(x-x_{s}^{j})Q_{s_{j}}(t)\,dV\\ &\quad+\frac{f}{t_{a}\dot{\bar{\tau}}_{0}\beta V_{i}}\sum_{j=1}^{m_{c}}\int_{V_{i}}\delta(x-x_{c}^{j})Q_{c_{j}}(t)\,dV-\frac{1}{t_{a}}(e^{y_{e_{i}}+r_{i}}-1)-\dot{r}_{i},\end{split}\] for \(i\in[1,m_{c}]\). The error dynamics can be represented in matrix form as follows \[\dot{y}_{e}=\Psi(t)+B_{s}Q_{s}(t)+B_{c}Q_{c}(t)-\Phi(t)-\dot{r}(t), \tag{10}\] where \(B_{s}=[b_{ij}^{s}]\in\Re^{m_{c}\times m_{s}}\), \(B_{c}=[b_{ij}^{c}]\in\Re^{m_{c}\times m_{c}}\), \(\Psi(t)\in\Re^{m_{c}}\) and \(\Phi(t)\in\Re^{m_{c}}\) are defined as \[\begin{split}b_{ij}^{s}&=\left\{\begin{array}{cc}\frac{f}{t_{a}\dot{\bar{\tau}}_{0}\beta V_{i}}&\mbox{if}\ x_{s}^{j}\in V_{i}\\ 0&\mbox{if}\ x_{s}^{j}\notin V_{i}\end{array}\right.,\quad j\in[1,m_{s}],\\ b_{ij}^{c}&=\left\{\begin{array}{cc}\frac{f}{t_{a}\dot{\bar{\tau}}_{0}\beta V_{i}}&\mbox{if}\ x_{c}^{j}\in V_{i}\\ 0&\mbox{if}\ x_{c}^{j}\notin V_{i}\end{array}\right.,\quad j\in[1,m_{c}],\\ \Psi(t)&=\left[\begin{array}{c}\frac{c_{hy}f}{t_{a}\dot{\bar{\tau}}_{0}V_{1}}\int_{V_{1}}\nabla^{2}u(x,t)\,dV\\ \vdots\\ \frac{c_{hy}f}{t_{a}\dot{\bar{\tau}}_{0}V_{m_{c}}}\int_{V_{m_{c}}}\nabla^{2}u(x,t)\,dV\end{array}\right],\quad\Phi(t)=\left[\begin{array}{c}\frac{1}{t_{a}}(e^{y_{e_{1}}+r_{1}}-1)\\ \vdots\\ \frac{1}{t_{a}}(e^{y_{e_{m_{c}}}+r_{m_{c}}}-1)\end{array}\right],\end{split} \tag{11}\] where the definition of Dirac's distribution has been used. The matrices \(B_{c}\), \(\Psi(t)\) and \(\Phi(t)\) are assumed to fulfil \[B_{c}=\Gamma B_{0},\quad\Big{|}\Big{|}\dot{\Psi}(t)\Big{|}\Big{|}\leq L_{1},\quad\Big{|}\Big{|}\dot{\Phi}(t)\Big{|}\Big{|}\leq L_{2}, \tag{12}\] where \(B_{0}\in\Re^{m_{c}\times m_{c}}\) is a known regular matrix (consequently, \(B_{c}\) is assumed to be a regular matrix as well), \(\Gamma\in\Re^{m_{c}\times m_{c}}\) is an uncertain matrix with positive diagonal entries, and \(L_{1}\geq 0\), \(L_{2}\geq 0\) are known constants. The assumption over the term \(\Phi(t)\) in (12) requires the boundedness of the error vector derivative, \(\dot{y}_{e}\), as \(||\dot{y}_{e}(t)||\leq L_{y_{e}}\), \(L_{y_{e}}>0\). Therefore, only local results on system (10) are considered in this paper. Furthermore, the condition over the term \(\Psi(t)\) requires further analysis, which is performed in the following lemma.

**Lemma 1**: _The term \(\Psi(t)\) in system (10),(11) fulfils the condition (12) if Assumption 1 is fulfilled._

The proof is given in Appendix D.

At this stage, let us define the control \(Q_{c}(t)\) given by \[\begin{split}Q_{c}(t)&=B_{0}^{-1}\left(-K_{1}\lceil y_{e}\rfloor^{\frac{1}{1-l}}+\nu+\dot{r}\right),\\ \dot{\nu}&=-K_{2}\lceil y_{e}\rfloor^{\frac{1+l}{1-l}},\end{split} \tag{13}\] where \(K_{1}\in\Re^{m_{c}\times m_{c}}\), \(K_{2}\in\Re^{m_{c}\times m_{c}}\) are matrices to be designed, and \(l\in[-1,0]\) is a freely chosen parameter [34, 35]. Note that such a control has a discontinuous integral term when \(l=-1\), and it is then known as the Super-Twisting algorithm [36].
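A minimal discrete-time sketch of (13) for the two-region case (\(m_{c}=2\)) is given below. The nominal matrix \(B_{0}\) is a placeholder (identity) rather than (25), the gains are those later used in (26), and the explicit-Euler update of \(\nu\) is an assumption about the implementation.

```python
import numpy as np

# Sketch of the controller (13). `sig` is the signed power of the Notation
# section: |y|^a * sign(y), applied element-wise.
def sig(y, a):
    return np.sign(y) * np.abs(y) ** a

def controller_step(y_e, nu, r_dot, B0, K1, K2, l, dt):
    """One explicit-Euler update of (13); returns (Q_c, updated nu)."""
    Q_c = np.linalg.solve(B0, -K1 @ sig(y_e, 1.0 / (1.0 - l)) + nu + r_dot)
    nu = nu - dt * K2 @ sig(y_e, (1.0 + l) / (1.0 - l))
    return Q_c, nu

# l = -1 gives the Super-Twisting algorithm: the exponents become 1/2 and 0,
# so the integral state nu integrates the discontinuous term -K2*sign(y_e).
B0 = np.eye(2)                                  # placeholder nominal matrix
K1 = np.diag([3.35e-2, 10.61e-2])               # gains of (26)
K2 = np.diag([5.5e-4, 55e-4])
Q_c, nu = controller_step(np.array([0.1, -0.2]), np.zeros(2),
                          np.zeros(2), B0, K1, K2, l=-1.0, dt=60.0)
print("control fluxes:", Q_c)
```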
When \(l\in(-1,0]\), the control function is continuous and degenerates to a linear integral control for \(l=0\). Note how the generated control signal is continuous even in the case of the discontinuous control algorithm (\(l=-1\)). Furthermore, the controller is designed with minimum information about the system (10), _i.e._ with only the measurement of \(y(t)\) and the knowledge of the nominal matrix \(B_{0}\), which is consistent with Assumption 4.

The closed-loop system of (10) with control (13) reads as \[\begin{split}\dot{y}_{e}&=\Gamma\left(-K_{1}\lceil y_{e}\rfloor^{\frac{1}{1-l}}+x_{I}\right),\\ \dot{x}_{I}&=-K_{2}\lceil y_{e}\rfloor^{\frac{1+l}{1-l}}+\dot{\Delta}(t),\end{split} \tag{14}\] where \[\begin{split}x_{I}&=\nu+\Delta(t),\\ \Delta(t)&=\Gamma^{-1}\Psi(t)+\Gamma^{-1}B_{s}Q_{s}(t)-\Gamma^{-1}\Phi(t)+\left(I-\Gamma^{-1}\right)\dot{r}(t),\\ \dot{\Delta}(t)&=\Gamma^{-1}\dot{\Psi}(t)+\Gamma^{-1}B_{s}\dot{Q}_{s}(t)-\Gamma^{-1}\dot{\Phi}(t)+\left(I-\Gamma^{-1}\right)\ddot{r}(t).\end{split} \tag{15}\] As mentioned above, the system of equations (14)-(15) has a discontinuous right-hand side when \(l=-1\). In this special case, the solutions are understood in the sense of Filippov [37]. The term \(\Delta(t)\) is assumed to fulfil \[\Big{|}\Big{|}\dot{\Delta}(t)\Big{|}\Big{|}\leq L_{s}, \tag{16}\] with _a priori_ known constant \(L_{s}\geq 0\). This is always the case due to Assumptions 1-3 and (12). The tracking result for the output (4)-(5) is then in force.

**Theorem 1**: _Let system (10), assumed to fulfil (8), (12), and (16), be driven by the control (13) with \(K_{1}>0\), \(K_{2}>0\). Then, the origin of the error closed-loop system (14)-(15) is locally:_

1. _Finite-time stable for any_ \(L_{s}\geq 0\)_, if_ \(l=-1\)_._
2. _Finite-time stable for_ \(L_{s}=0\)_, if_ \(l\in(-1,0)\)_._
3. _Exponentially stable for_ \(L_{s}=0\)_, if_ \(l=0\)_._

_Consequently, system (3) is globally ultimately bounded as_ \[||u(x,t)||\leq\frac{\gamma}{c_{hy}\beta}\Bigg{\|}(I-B_{0}^{-1}\Gamma^{-1}B_{s})Q_{s}(t)+B_{0}^{-1}\Gamma^{-1}\left[\dot{r}(t)+\Phi(t)-\Psi(t)\right]\Bigg{\|}, \tag{17}\] _for some \(\gamma>0\)._

**Remark 4**: _As a consequence of the stability of the closed-loop system trajectories, \((y_{e},x_{I})\), and due to the definition of the perturbation term \(\Delta(t)\) in (15), the integral term, \(\nu\), of the control (13) is able to provide an estimate of such a term, i.e., \(\nu(t)\to-\Delta(t)\) as \(t\to\infty\)._

_Proof of Theorem 1:_ Following [34, 35], the trajectories \((y_{e},x_{I})\) of system (14)-(15) are ensured to reach the origin if the control gains are designed as \(K_{1}>0\), \(K_{2}>0\). Then, the stability of the diffusion system (3) has to be analysed. Consider the positive definite and radially unbounded Lyapunov functional candidate \[\mathcal{V}=\frac{1}{2}\left|\left|u(x,t)\right|\right|^{2}. \tag{18}\] Its derivative along the system (3) reads as \[\begin{split}\dot{\mathcal{V}}&=\int_{V}u(x,t)u_{t}(x,t)\,dV\\ &=c_{hy}\int_{V}u(x,t)\nabla^{2}u(x,t)\,dV+\frac{1}{\beta}\int_{V}u(x,t)\left[\mathcal{B}_{s}(x)Q_{s}(t)+\mathcal{B}_{c}(x)Q_{c}(t)\right]\,dV.\end{split}\] Applying integration by parts, it follows that \[\begin{split}\dot{\mathcal{V}}&=c_{hy}\int_{V}\nabla\cdot[u(x,t)\nabla u(x,t)]\,dV-c_{hy}\int_{V}[\nabla u(x,t)]^{2}\,dV\\ &\quad+\frac{1}{\beta}[u(x_{s}^{1},t),...,u(x_{s}^{m_{s}},t)]Q_{s}(t)+\frac{1}{\beta}[u(x_{c}^{1},t),...,u(x_{c}^{m_{c}},t)]Q_{c}(t)\\ &=c_{hy}\int_{S}u(x,t)\nabla u(x,t)\cdot\hat{e}\,dS-c_{hy}\int_{V}[\nabla u(x,t)]^{2}\,dV\\ &\quad+\frac{1}{\beta}[u(x_{s}^{1},t),...,u(x_{s}^{m_{s}},t)]Q_{s}(t)+\frac{1}{\beta}[u(x_{c}^{1},t),...,u(x_{c}^{m_{c}},t)]Q_{c}(t),\end{split}\] where the divergence theorem and the definition of Dirac's distribution have been used. The vector \(\hat{e}\) is the outward pointing unit normal at each point on the boundary \(S\). Employing the BC of system (3), the first term in the last expression is equal to zero. Then, by using Poincaré's inequality, the derivative reads as \[\begin{split}\dot{\mathcal{V}}&\leq-c_{hy}\left|\left|\nabla u(x,t)\right|\right|^{2}+\frac{1}{\beta}\left|\left|u(x,t)\right|\right|\left|\left|Q_{s}(t)+Q_{c}(t)\right|\right|\\ &\leq-\frac{c_{hy}}{\gamma}\left|\left|u(x,t)\right|\right|^{2}+\frac{1}{\beta}\left|\left|u(x,t)\right|\right|\left|\left|Q_{s}(t)+Q_{c}(t)\right|\right|.\end{split}\] Using the definition of the Lyapunov functional (18), the derivative can be upper-estimated as \[\dot{\mathcal{V}}\leq-\frac{2c_{hy}}{\gamma}\mathcal{V}+\frac{1}{\beta}\sqrt{2\mathcal{V}}\left|\left|Q_{s}(t)+Q_{c}(t)\right|\right|.\] Its solution can be upper bounded as \[\sqrt{\mathcal{V}(t)}\leq e^{-\frac{c_{hy}}{\gamma}t}\sqrt{\mathcal{V}(0)}+\frac{\gamma}{\sqrt{2}c_{hy}\beta}\left|\left|Q_{s}(t)+Q_{c}(t)\right|\right|.\] Using again the definition of the Lyapunov functional (18), the stability result for system (3) is obtained as \[||u(x,t)||\leq e^{-\frac{c_{hy}}{\gamma}t}||u(x,0)||+\frac{\gamma}{c_{hy}\beta}\left|\left|Q_{s}(t)+Q_{c}(t)\right|\right|,\] which guarantees the global input-to-state stability (ISS) of (3) w.r.t. the sum of the inputs \(Q_{s}(t)\) and \(Q_{c}(t)\). Finally, the ultimate bound (17) can be obtained by the definition of \(Q_{c}(t)\) in (13) and \(\Delta(t)\) in (15) when \((y_{e},x_{I})\) tend to zero, _i.e._, \[\begin{split}Q_{c}(t)&\to B_{0}^{-1}\left[-\Delta(t)+\dot{r}\right]\\ &\to B_{0}^{-1}\left[-\Gamma^{-1}\Psi(t)-\Gamma^{-1}B_{s}Q_{s}(t)+\Gamma^{-1}\Phi(t)-\left(I-\Gamma^{-1}\right)\dot{r}(t)+\dot{r}\right]\\ &\to B_{0}^{-1}\Gamma^{-1}\left[-\Psi(t)-B_{s}Q_{s}(t)+\Phi(t)+\dot{r}(t)\right].\end{split}\] \(\blacksquare\)

### Energy demand and production constraints

We will consider a new scenario where an additional number of restrictions, \(m_{r}\), on the fluid injection of the controlled injection points is considered. In other words, we will impose the weighted sum of the injection rates of some of the wells to be equal to a time-variant, possibly intermittent production rate.
For this purpose, we will augment the vector of controlled injection points of system (10) as follows \[\dot{y}_{e}=\Psi(t)+B_{s}Q_{s}(t)+\bar{B}_{c}\bar{Q}_{c}(t)-\Phi(t)-\dot{r}(t), \tag{19}\] where \(\bar{Q}_{c}(t)\in\Re^{m_{c}+m_{r}}\), \(\bar{Q}_{c}(t)=[\bar{Q}_{c_{1}}(t),...,\bar{Q}_{c_{m_{c}+m_{r}}}(t)]^{T}\), are the new controlled fluxes and \(\bar{B}_{c}=[\bar{b}_{ij}^{c}]\in\Re^{m_{c}\times(m_{c}+m_{r})}\) is the new input matrix defined as \[\bar{b}_{ij}^{c}=\left\{\begin{array}{cc}\frac{f}{t_{a}\dot{\bar{\tau}}_{0}\beta V_{i}}&\text{if}\ x_{c}^{j}\in V_{i}\\ 0&\text{if}\ x_{c}^{j}\notin V_{i}\end{array}\right.,\quad i\in[1,m_{c}],\quad j\in[1,m_{c}+m_{r}], \tag{20}\] where \(x_{c}^{j}\), \(j\in[1,m_{c}+m_{r}]\), are the injection points over the total region \(V\). Notice that the number of controlled injection points, \(m_{c}\), has to be increased to \(m_{c}+m_{r}\). This does not change the previous theoretical results, as shown below. The condition imposed over the control input, \(\bar{Q}_{c}(t)\), is \[W\bar{Q}_{c}(t)=D(t), \tag{21}\] where \(W\in\Re^{m_{r}\times(m_{c}+m_{r})}\) is a constant matrix with orthonormal rows (\(WW^{T}=I\)) whose elements represent the weighted participation of the wells' fluxes in ensuring the demand \(D(t)\in\Re^{m_{r}}\). In order to fulfil this condition, the new control input will be designed as \[\bar{Q}_{c}(t)=\overline{W}Q_{c}(t)+W^{T}D(t), \tag{22}\] where \(Q_{c}(t)\) is the original control input designed as (13) and \(\overline{W}\in\Re^{(m_{c}+m_{r})\times m_{c}}\) is a matrix whose columns span the null space of \(W\) (\(W\overline{W}=0\)). Note that if we replace (22) in (21), the demand over the controlled injection points will be strictly fulfilled at any time \(t\). Furthermore, if we replace (22) in (19), the link between the new input matrix, \(\bar{B}_{c}\), and the original input matrix, \(B_{c}\), defined in (11), is stated as \(\bar{B}_{c}\overline{W}=B_{c}\).

Control (22) will ensure the linear combination of the new controlled fluxes \(\bar{Q}_{c}(t)\) to be equal to a predetermined flux \(D(t)\), which we call the demand, according to (21), while keeping the original output tracking result of the previous section. This can be of interest in geothermal applications to cope with intermittent energy demand and production constraints.

## 5 Simulations

In order to demonstrate our control approach, numerical simulations of (3) and (4) have been performed in Python (scripts available upon request). Without loss of generality and for a simpler presentation of the results, we chose to depth average Equation (3) as shown in Appendix B and to integrate the resulting two-dimensional partial differential equation in time and space using the spectral decomposition method explained in Appendix C and the Runge-Kutta method RK23 [38]. The same parameters as in the numerical simulations of Section 2 were used (see Table 1).

Simulations were performed for three scenarios, _i.e._, one without any predetermined demand, one with constant demand and one with intermittent demand. In all scenarios we consider a fixed injection well with flux \(Q_{s}(t)=Q_{s_{1}}(t)=0.32\) [m\({}^{3}\)/hr] situated at the same location as the fixed injection well of the example presented in Section 2. Moreover, again following the same example, we consider two different regions, \(V_{1}\), \(V_{2}\), over which we calculate the SR, \(R_{1}\), \(R_{2}\). Consequently, the number of outputs to be tracked is equal to two and, thus, at least two control inputs have to be designed (\(Q_{c}(t)=[Q_{c_{1}}(t),Q_{c_{2}}(t)]^{T}\), \(m_{c}=2\)).
In Fig. 4 (left) we show the chosen location of the control wells. The initial conditions of the systems (3) and (4) were set as \(u(x,0)=0\) and \(h_{1}(0)=h_{2}(0)=0\) (_i.e._, \(R_{1}=R_{2}=1\)). The matrices \(B_{s},B_{c}\) of system (10) are then \[B_{s}=\frac{f}{t_{a}\dot{\bar{\tau}}_{0}\beta}\left[\begin{array}{c}0\\ \frac{1}{V_{2}}\end{array}\right],\quad B_{c}=\frac{f}{t_{a}\dot{\bar{\tau}}_{0}\beta}\left[\begin{array}{cc}\frac{1}{V_{1}}&0\\ 0&\frac{1}{V_{2}}\end{array}\right]. \tag{23}\] For the scenarios with predetermined demand, we will apply a single restriction, as explained in Section 4, _i.e._, \(m_{r}=1\), with \(W=[0.2673,0.5345,0.8018]\). As a result, an additional control input will be needed (_i.e._, \(m_{c}+m_{r}=3\)), whose location is depicted in Fig. 4 (right). Therefore, the matrices \(\bar{B}_{c},B_{c}\) of system (19) become \[\begin{split}\bar{B}_{c}&=\frac{f}{t_{a}\dot{\bar{\tau}}_{0}\beta}\left[\begin{array}{ccc}\frac{1}{V_{1}}&\frac{1}{V_{1}}&0\\ 0&0&\frac{1}{V_{2}}\end{array}\right],\\ B_{c}&=\bar{B}_{c}\overline{W}=\frac{f}{t_{a}\dot{\bar{\tau}}_{0}\beta}\left[\begin{array}{cc}\frac{0.24}{V_{1}}&\frac{-1.14}{V_{1}}\\ -\frac{0.3382}{V_{2}}&\frac{0.4927}{V_{2}}\end{array}\right],\end{split} \tag{24}\] and the matrix \(B_{s}\) is given in (23). Finally, in all scenarios, the reference \(r(t)\) was selected as \(r(t)=[0,\ln(5)]^{T}\), so that the SR in the regions \(V_{1}\) and \(V_{2}\) converges to the values \(1\) and \(5\), respectively. This choice aims at forcing the SR in the extended region \(V_{1}\) to be the same as the natural one. Regarding region \(V_{2}\), we opt for an increase of the SR in order to facilitate the circulation of fluids and improve the production of energy. This selection of \(r(t)\) is coherent with assumption (8), resulting in \(L_{\dot{r}_{i}}=L_{\ddot{r}_{i}}=0\).

### Scenario 1: SR tracking without demand

In this scenario, the control (13) was implemented with a nominal matrix \(B_{0}\) as \[B_{0}=\frac{f_{0}}{t_{a_{0}}\dot{\bar{\tau}}_{0_{0}}\beta_{0}}\left[\begin{array}{cc}\frac{1}{V_{1_{0}}}&0\\ 0&\frac{1}{V_{2_{0}}}\end{array}\right], \tag{25}\] where the subscript '0' corresponds to the nominal values of the system's parameters. In our case, we have chosen all the nominal values 10\(\%\) higher than the real ones, _e.g._, \(f_{0}=1.1f\). Note that assumption (12) is fulfilled with this selection of \(B_{0}\) and the definition of \(B_{c}\) in (23). The gain parameters of the control (13) were selected as \[\begin{split}K_{1}&=\left[\begin{array}{cc}3.35\times 10^{-2}&0\\ 0&10.61\times 10^{-2}\end{array}\right],\\ K_{2}&=\left[\begin{array}{cc}5.5\times 10^{-4}&0\\ 0&55\times 10^{-4}\end{array}\right],\\ l&=-1.\end{split} \tag{26}\]

The results are shown in Figs. 5-7. The seismicity rates in both regions follow the desired constant references after \(\sim 2\) months, reaching a steady state faster than the system without control (see Fig. 3), which reached a steady state in approximately one year. This is achieved robustly, despite the presence of uncertainties in the system, _i.e._, the control only uses the nominal matrix \(B_{0}\) and compensates for the rest of the error dynamics. The control signals, shown in Fig. 6, present some oscillations due to the discontinuous nature of the used controller (\(l=-1\)). Nevertheless, these oscillations are not of high frequency (see zoom in Fig. 6).
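For the scenarios with demand, the allocation (22) can be checked numerically. In the sketch below, \(W\) is written as the unit vector \([1,2,3]/\sqrt{14}\), which reproduces the row \([0.2673,0.5345,0.8018]\) above; the null-space basis returned by `null_space` is one valid choice of \(\overline{W}\) and may differ from the one implicitly used in (24). The values of \(Q_{c}\) and \(D\) are examples only.

```python
import numpy as np
from scipy.linalg import null_space

# Demand allocation (21)-(22): spread m_c = 2 designed fluxes over
# m_c + m_r = 3 wells while the weighted sum tracks the demand D(t).
W = np.array([[1.0, 2.0, 3.0]]) / np.sqrt(14.0)   # m_r x (m_c + m_r), WW^T = 1
W_bar = null_space(W)                             # columns span ker(W)

Q_c = np.array([0.10, -0.05])    # output of the controller (13) (example)
D = np.array([-0.32])            # demanded weighted flux, e.g. -Q_s1 [m^3/hr]
Qc_bar = W_bar @ Q_c + W.T @ D   # eq. (22)

# Constraint (21) holds exactly since W @ W.T = I and W @ W_bar = 0.
assert np.allclose(W @ Qc_bar, D)
print("well fluxes:", Qc_bar)
```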
Finally, the norms of the error, \(y_{e}(t)\), and the pressure, \(u(x,t)\), show how the errors are driven to the origin in finite time while the solution of the 3D diffusion equation remains bounded, as proven in Theorem 1 (see Fig. 7).

### Scenario 2: SR tracking with constant demand

In this scenario we consider the demand to be equal to \(D(t)=-Q_{s_{1}}=-0.32\) [m\({}^{3}\)/hr] (see Fig. 9). This is interesting in applications where the extracted fluid is re-injected into the reservoir. The control \(\bar{Q}_{c}(t)\) was designed as (13),(22) with the nominal matrix \(B_{0}\) \[B_{0}=\frac{f_{0}}{t_{a_{0}}\dot{\bar{\tau}}_{0_{0}}\beta_{0}}\left[\begin{array}{cc}\frac{0.24}{V_{1_{0}}}&\frac{-1.14}{V_{1_{0}}}\\ -\frac{0.3382}{V_{2_{0}}}&\frac{0.4927}{V_{2_{0}}}\end{array}\right], \tag{27}\] where the subscript '0' corresponds to the nominal values of every system parameter. Again, we have chosen all the nominal values 10\(\%\) larger than the real ones to test robustness. The control gains were selected as in (26) and the results are shown in Figs. 8-10. Similarly to the previous results, the control succeeds in tracking the SR of the two regions, but, in this case, under the restriction (21) applied over the control wells (see Fig. 9). Finally, Fig. 10 shows the precision of the error norm due to the discontinuous algorithm and the boundedness of the norm of the diffusion equation solution.

### Scenario 3: SR tracking with intermittent demand

Although the intermittent demand provokes many transients in the SR, the control objectives are ensured quite fast, as shown in Fig. 11. It is worth noting that the control signals generated by the discontinuous controller (\(l=-1\)) are able to drive the error norm to zero in finite time (see Fig. 13) while respecting the intermittent demand \(D(t)\) strictly (see Fig. 12). Finally, in order to test the versatility of the control algorithm (13), the previous simulation was repeated, but with \(l=0\) instead of \(l=-1\). This implies a linear controller and exponential convergence, according to Theorem 1. Indeed, in Fig. 12 we observe the linear control to generate a smoother control signal than the discontinuous one, but its convergence is slower and less precise (see residual error in Fig. 14). Choosing \(l\in(-1,0)\) offers even more flexibility depending on the requirements of specific practical applications.

Figure 4: Regions \(\mathbf{V_{1}}\) and \(\mathbf{V_{2}}\) and location of the injection wells in the cases without demand (left) and with demand (right).

## 6 Conclusions

A new control strategy for minimizing induced seismicity while assuring fluid circulation for energy production in underground reservoirs is designed in this paper. Contrary to existing approaches for induced seismicity mitigation due to fluid injections in the earth's crust, the proposed control strategy ensures the robust tracking of desired seismicity rates over different regions in geological reservoirs. For this purpose, a MIMO Super-Twisting controller was developed, which is able to generate continuous control signals for stabilizing the error dynamics, despite the presence of system uncertainties and unknown dynamics. A series of numerical simulations confirm the effectiveness of the presented theory under different scenarios. This is achieved for the first time and provides a new direction for using robust control theory for this challenging application, which involves an uncertain, underactuated, non-linear system of infinite dimensionality.
The consideration of poroelastodynamic phenomena, more realistic and complex dynamics, errors in measurements, discrete-time dynamics, optimization and non-linear constraints for the fluxes of the control wells exceeds the scope of this study and is left for future work. Nonetheless, the numerical examples provided illustrate the potential of the proposed approach for mitigating induced seismicity while maximizing renewable energy production and storage. Therefore, for the first time, this research opens new perspectives for field experiments based on real-time regulators and control theory.

## Acknowledgement

The authors would like to acknowledge the European Research Council's (ERC) support under the European Union's Horizon 2020 research and innovation program (Grant agreement no. 757848 CoQuake and Grant agreement no. 101087771 INJECT). The second author would like to thank Prof. Jean-Philippe Avouac for the fruitful discussions about human-induced seismicity and mitigation.

## Conflict of interest

None declared.

Figure 8: Seismicity rate in regions \(\mathbf{V_{1},V_{2}}\) and pressure distribution, \(\mathbf{u(x,t)}\), in the reservoir after one year, under the constraint of constant demand.

Figure 9: Injection flux of fixed well \(\mathbf{Q_{s_{1}}}\) and injection fluxes of the controlled wells \(\bar{\mathbf{Q}}_{\mathbf{c_{1}}}\), \(\bar{\mathbf{Q}}_{\mathbf{c_{2}}}\), \(\bar{\mathbf{Q}}_{\mathbf{c_{3}}}\), under the constraint \(\mathbf{D(t)=W\bar{Q}_{c}(t)=-Q_{s_{1}}}\). The demand (right plot) is always ensured, as designed.

## Appendix A Weak solution of the 3D diffusion equation

**Definition 1**: _[39], [40] A continuous function \(u(x,t)\in H^{1}(V)\) is said to be a weak solution of the BVP (3) on \(t\geq 0\) if for every \(\phi(x)\in H^{1}(V)\), the function \(\int_{V}u(x,t)\phi(x)\,dV\) is absolutely continuous on \(t\geq 0\) and the relation \[\begin{split}\frac{d}{dt}\int_{V}u(x,t)\phi(x)\,dV&=c_{hy}\int_{S}\left[\nabla u(x,t)\right]\cdot\hat{e}\,\phi(x)\,dS-c_{hy}\int_{V}\nabla u(x,t)\left[\nabla\phi(x)\right]^{T}\,dV\\ &\quad+\frac{1}{\beta}[\phi(x_{s}^{1}),...,\phi(x_{s}^{m_{s}})]Q_{s}(t)+\frac{1}{\beta}[\phi(x_{c}^{1}),...,\phi(x_{c}^{m_{c}})]Q_{c}(t) \end{split} \tag{28}\] holds for almost all \(t\geq 0\). The vector \(\hat{e}\) is the outward pointing unit normal at each point on the boundary \(S\)._

The weak form (28) is retrieved by integration by parts, using the divergence theorem and the definition of Dirac's distribution.

## Appendix B Depth average of the 3D diffusion equation

The system described by (3) is a three-dimensional system, whose solution would be difficult to plot in a simplified manner. For this purpose, and without loss of generality of the theoretical results presented in this study, we chose to limit our numerical simulations to a two-dimensional boundary value problem, which was derived by depth averaging the full, three-dimensional problem given in (3).
The depth averaging was performed as follows \[\begin{split}\frac{1}{D_{z}}\int_{0}^{D_{z}}u_{t}(x,t)\,dx_{3}&=\frac{c_{hy}}{D_{z}}\int_{0}^{D_{z}}\nabla^{2}u(x,t)\,dx_{3}+\frac{1}{\beta D_{z}}\int_{0}^{D_{z}}\left[\mathcal{B}_{s}(x)Q_{s}(t)+\mathcal{B}_{c}(x)Q_{c}(t)\right]\,dx_{3}\\ &=\frac{c_{hy}}{D_{z}}\int_{0}^{D_{z}}\frac{\partial^{2}u(x,t)}{\partial x_{1}^{2}}\,dx_{3}+\frac{c_{hy}}{D_{z}}\int_{0}^{D_{z}}\frac{\partial^{2}u(x,t)}{\partial x_{2}^{2}}\,dx_{3}+\frac{c_{hy}}{D_{z}}\int_{0}^{D_{z}}\frac{\partial^{2}u(x,t)}{\partial x_{3}^{2}}\,dx_{3}\\ &\quad+\frac{1}{\beta D_{z}}\left[\bar{\mathcal{B}}_{s}(\bar{x})Q_{s}(t)+\bar{\mathcal{B}}_{c}(\bar{x})Q_{c}(t)\right],\end{split}\] where \(D_{z}\) is the height of the reservoir, the new space variable is \(\bar{x}\in\Re^{2}\), \(\bar{x}=[x_{1},x_{2}]^{T}\), and \(\bar{\mathcal{B}}_{s}(\bar{x})=[\delta(\bar{x}-\bar{x}_{s}^{1}),...,\delta(\bar{x}-\bar{x}_{s}^{m_{s}})]\), \(\bar{\mathcal{B}}_{c}(\bar{x})=[\delta(\bar{x}-\bar{x}_{c}^{1}),...,\delta(\bar{x}-\bar{x}_{c}^{m_{c}})]\). We note that \(\int_{0}^{D_{z}}\frac{\partial^{2}u(x,t)}{\partial x_{3}^{2}}\,dx_{3}=0\) due to the BC, \(u(x,t)=0\quad\forall\quad x\in S\). Defining the depth-averaged pressure as \(\bar{u}(\bar{x},t)=\frac{1}{D_{z}}\int_{0}^{D_{z}}u(x,t)\,dx_{3}\), the last expression becomes \[\begin{split}\bar{u}_{t}(\bar{x},t)&=c_{hy}\nabla^{2}\bar{u}(\bar{x},t)+\frac{1}{\beta D_{z}}\left[\bar{\mathcal{B}}_{s}(\bar{x})Q_{s}(t)+\bar{\mathcal{B}}_{c}(\bar{x})Q_{c}(t)\right],\\ \bar{u}(\bar{x},t)&=0\quad\forall\quad\bar{x}\in\bar{S},\end{split} \tag{29}\] where \(\bar{S}\) is the boundary of the two-dimensional (depth-averaged) domain \(\bar{V}\). Note how the systems (3) and (29) finally obtain the same form, allowing the theoretical developments of Section 4 to be applied without any change. This 2D diffusion equation is numerically solved in Sections 2 and 5 using the spectral decomposition presented in the following appendix.

## Appendix C Spectral Decomposition of the 2D diffusion equation

We decompose the solution \(\bar{u}(\bar{x},t)\) of the BVP (29) according to \[\bar{u}(\bar{x},t)=\sum_{n,m=1}^{\infty}z_{nm}(t)\phi_{nm}(\bar{x}), \tag{30}\] where \(z_{nm}(t)=\langle\bar{u}(\bar{x},t),\phi_{nm}(\bar{x})\rangle\) is the \(nm\)-th Fourier coefficient of \(\bar{u}(\bar{x},t)\) and \(\phi_{nm}(\bar{x})\) is the \(nm\)-th orthonormal eigenfunction satisfying the BC. The expression \(\langle\cdot,\cdot\rangle\) denotes the inner product, _i.e._, \(\langle f(\cdot),g(\cdot)\rangle=\int_{\bar{V}}f(\cdot)g(\cdot)\,d\bar{V}\). For the case of the BVP (29) on a square domain of side \(D\), the eigenfunctions, \(\phi_{nm}(\bar{x})\), and the corresponding eigenvalues, \(\lambda_{nm}\), are \[\begin{split}\phi_{nm}(\bar{x})&=\frac{2}{D}\sin\left(\frac{n\pi x_{1}}{D}\right)\sin\left(\frac{m\pi x_{2}}{D}\right),\\ \lambda_{nm}&=\frac{\pi^{2}}{D^{2}}\left(n^{2}+m^{2}\right).\end{split} \tag{31}\] In order to simplify the notation, we adopt the mapping \(k=h(n,m)\), which leads to the more compact form \[\bar{u}(\bar{x},t)=\sum_{k=1}^{\infty}z_{k}(t)\phi_{k}(\bar{x}). \tag{32}\]

Fig. 10: Norm of the tracking error, \(\mathbf{y_{e}}\), and norm of the solution, \(\mathbf{u(x,t)}\), with constant demand. The distribution of the fluid pressure in the reservoir reaches a steady state in approximately one year, while the control objectives are reached much faster, _i.e._, in approximately 0.3 months.
Substituting expression (32) in (29) results in \[\begin{split}\dot{z}_{k}(t)&=-c_{hy}\lambda_{k}z_{k}(t)+\frac{1}{\beta D_{z}}[\phi_{k}(\bar{x}_{s}^{1}),...,\phi_{k}(\bar{x}_{s}^{m_{s}})]Q_{s}(t)+\frac{1}{\beta D_{z}}[\phi_{k}(\bar{x}_{c}^{1}),...,\phi_{k}(\bar{x}_{c}^{m_{c}})]Q_{c}(t),\\ z_{k}(0)&=\langle\bar{u}(\bar{x},0),\phi_{k}(\bar{x})\rangle,\quad\forall\quad k\in[1,\infty). \end{split} \tag{33}\] Systems (29) and (32)-(33) are equivalent when \(k\rightarrow\infty\), but the significant difference is that system (33) is an ODE that can easily be implemented numerically with \(k\) finite. In our numerical simulations, we used 160 eigenmodes, which was more than enough according to convergence analyses. These convergence analyses are standard and were omitted from the manuscript.

Fig. 11: Seismicity rate in regions \(\mathbf{V_{1}}\), \(\mathbf{V_{2}}\), and fluid pressure distribution, \(\mathbf{u(x,t)}\), in the reservoir after one year, under the constraint of intermittent demand.

Fig. 12: Injection flux \(\mathbf{Q_{s_{1}}}\) of the fixed well and injection fluxes \(\bar{\mathbf{Q}}_{\mathbf{c_{1}}},\bar{\mathbf{Q}}_{\mathbf{c_{2}}},\bar{\mathbf{Q}}_{\mathbf{c_{3}}}\) of the control wells under the constraint of intermittent demand using discontinuous (top left) and continuous control (top right). The demand (bottom) is strictly ensured.

## Appendix D Proof of Lemma 1

Calculating the norm of the term \(\Psi(t)\) defined in (11) results in \[\begin{split}||\Psi(t)||&=\frac{c_{hy}f}{t_{a}\dot{\bar{\tau}}_{0}}\sqrt{\sum_{i=1}^{m_{c}}\frac{1}{V_{i}^{2}}\left[\int_{V_{i}}\nabla^{2}u(x,t)\,dV\right]^{2}}\\ &\leq\frac{c_{hy}f}{t_{a}\dot{\bar{\tau}}_{0}}\sqrt{\sum_{i=1}^{m_{c}}\frac{1}{V_{i}^{2}}\int_{V_{i}}\left[\nabla^{2}u(x,t)\right]^{2}\,dV}.\end{split}\] Taking the time derivative and proceeding in the same way yields \[\Big{|}\Big{|}\dot{\Psi}(t)\Big{|}\Big{|}\leq\frac{c_{hy}f}{t_{a}\dot{\bar{\tau}}_{0}}\sqrt{\sum_{i=1}^{m_{c}}\frac{1}{V_{i}^{2}}\int_{V_{i}}\left[\nabla^{2}u_{t}(x,t)\right]^{2}\,dV},\] which is bounded as in (12) if the assumption (6) is fulfilled. \(\blacksquare\)
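Returning to the spectral scheme of Appendix C, a self-contained sketch of (30)-(33) for a single fixed well is given below: the modal ODEs are decoupled and can be integrated with RK23, as in the paper. The domain size, physical parameters, well location and the (small) number of modes are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Spectral solution of the depth-averaged BVP (29) on a D x D square with
# u = 0 on the boundary, via the modal ODEs (33). One fixed well, no control.
D, Dz = 5000.0, 100.0          # lateral size and thickness [m] (assumed)
c_hy, beta = 1.0e-2, 1.0e-9    # diffusivity [m^2/s], compressibility [1/Pa]
x_s = (2500.0, 2500.0)         # injection point (assumed)
Q_s = 0.32 / 3600.0            # fixed flux [m^3/s]
N = 12                         # modes per direction (the paper uses 160 total)

nm = [(n, m) for n in range(1, N + 1) for m in range(1, N + 1)]
lam = np.array([(np.pi / D) ** 2 * (n**2 + m**2) for n, m in nm])  # eq. (31)
phi_s = np.array([2.0 / D * np.sin(n * np.pi * x_s[0] / D)
                  * np.sin(m * np.pi * x_s[1] / D) for n, m in nm])

def rhs(t, z):                 # eq. (33): decoupled, forced linear ODEs
    return -c_hy * lam * z + phi_s * Q_s / (beta * Dz)

sol = solve_ivp(rhs, [0.0, 30 * 24 * 3600.0], np.zeros(len(nm)), method="RK23")

def u_bar(x1, x2, z):          # reconstruct the pressure field, eq. (32)
    phi = np.array([2.0 / D * np.sin(n * np.pi * x1 / D)
                    * np.sin(m * np.pi * x2 / D) for n, m in nm])
    return phi @ z

print("u_bar at the well after 30 days [Pa]:",
      u_bar(x_s[0], x_s[1], sol.y[:, -1]))
```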
2305.08925
Discrete Time Crystals with Absolute Stability
We show that interacting bosons on a ring which are driven periodically by a rotating potential can support discrete time crystals whose absolute stability can be proven. The absolute stability is demonstrated by an exact mapping of discrete time crystal states to low-lying eigenstates of a time-independent model that reveals spontaneous breaking of space translation symmetry. The mapping ensures that there are no residual time-dependent terms that could lead to heating of the system and destruction of discrete time crystals. We also analyze periodically kicked bosons where the mapping is approximate only and cannot guarantee the absolute stability of discrete time crystals. Besides illustrating potential sources of instability, the kicked bosons model demonstrates a rich field for investigating the interplay between different time and space symmetry breaking, as well as the stability of time crystal behavior in contact with a thermal reservoir.
Krzysztof Giergiel, Jia Wang, Bryan J. Dalton, Peter Hannaford, Krzysztof Sacha
2023-05-15T18:02:06Z
http://arxiv.org/abs/2305.08925v2
# Absolutely Stable Discrete Time Crystals

###### Abstract

We show that interacting bosons on a ring which are driven periodically by a rotating lattice potential can support absolutely stable discrete time crystals. The absolute stability is demonstrated by an exact mapping of discrete time crystal states to low-lying eigenstates of a time-independent model that reveals spontaneous breaking of space translation symmetry. The mapping ensures that there are no residual time-dependent terms that could lead to heating of the system and destruction of discrete time crystals. With the help of the Bethe ansatz solutions we also analyze periodically kicked bosons where the mapping is approximate only and cannot guarantee the absolute stability of discrete time crystals. However, the kicked boson model shows a richer interplay between time and space symmetry breaking.

Ordinary space crystals correspond to periodic distributions of particles in space which form despite the fact that the Hamiltonian is invariant under a translation of all the particles by an arbitrary vector. This is the phenomenon of spontaneous breaking of the continuous space translation symmetry into a discrete space translation symmetry in a time-independent many-body system. In 2012, research on time crystals was initiated where spontaneous breaking of time translation symmetry is responsible for the formation of a crystalline structure [1]. Periodically driven many-body closed systems can form discrete time crystals as a non-equilibrium quantum state in which the long-time stable system response occurs at integer multiples of the drive period [2; 3; 4; 5]. The discrete time translation symmetry of a time-periodic Hamiltonian is spontaneously broken and new periodic motion emerges -- a novel crystalline structure appears in time [2].

The stability of discrete time crystals is much less obvious than for space crystals because the quasi-energy spectrum of the Floquet Hamiltonian has no lower bound [6]. The key question arises whether discrete time crystals really break ergodicity and are absolutely stable, or whether they gradually absorb energy from the drive and eventually heat up to a structureless infinite-temperature state, as expected for a generic periodically driven many-body system [7; 8; 9]. Systems which do not thermalize are typically integrable systems, but integrability is fragile and usually requires fine-tuning of system parameters [10; 11; 12; 13; 14; 15]. Time-independent many-body systems in the presence of disorder can reveal many-body localization (MBL) and emergent integrability [16; 17; 18; 19]. Therefore, periodically driven MBL systems are potential candidates for the realization of absolutely stable discrete time crystals [3; 4; 20]. While in the time-independent case the MBL can be proven [21], the absolute stability of discrete time crystals in driven MBL systems [22; 23; 24] is far from being obvious. Also, the absolute stability of the first proposed discrete time crystal in ultra-cold bosonic atoms bouncing resonantly on an oscillating atom mirror [2] has not been proven. While previous studies using the truncated Wigner approximation, the Bogoliubov approach and two-mode theory have demonstrated the absence of heating and ensured the stability of discrete time crystals for evolution times longer than those of realistic experiments [25; 26; 27], their validity in the context of an infinite number of bosons and infinite evolution time remains unproven.
It should be stressed, though, that even if theoretical analyses of the discrete time crystals in both ultra-cold atoms bouncing on a mirror and MBL systems can only guarantee the lack of heating and the stability of the discrete time crystals for a finite evolution time, this time is much longer than the duration of the experiments [28; 29; 30; 31; 32; 33; 34; 35].

Here, we show that bosons on a ring in the Lieb-Liniger (LL) model, resonantly driven by a rotating lattice, can reveal absolutely stable discrete time crystals. Eigenstates of the Floquet Hamiltonian that break the discrete time translation symmetry can be mapped to low-energy eigenstates of a time-independent Hamiltonian in the rotating frame. The latter eigenstates are stable and reveal spontaneous breaking of space translation symmetry in the rotating frame, which corresponds to spontaneous breaking of the discrete time translation symmetry of the original model. This model concretely provides an example of the absence of quantum thermalization and the existence of stable time crystals in a closed quantum many-body system without the need for either disorder or integrability. Furthermore, an experimental realisation of this model should be possible, since ultra-cold atoms on a ring have been created and applying a periodic drive potential should be straightforward [36].

Let us consider \(N\) bosons with contact interactions on a ring with circumference \(2\pi\) which are periodically driven by a rotating lattice potential \(\cos(sx-\omega t)\). In this Letter we focus on \(s=2\), but it can be any integer, allowing the realization of big discrete time crystals; see [37; 38]. The system reduces to the LL Hamiltonian with an additional time-periodic drive \[H=H_{\rm LL}+\lambda\sum_{i=1}^{N}\cos(2x_{i}-\omega t), \tag{1}\] \[H_{\rm LL}=\sum_{i=1}^{N}\frac{p_{i}^{2}}{2}+g_{0}\sum_{i<j}^{N}\delta(x_{i}-x_{j}), \tag{2}\] where we use \(R\) and \(\hbar^{2}/(mR^{2})\) for the length and energy units, respectively, where \(R\) is the ring radius and \(m\) the mass of the bosons, \(\lambda\) is the amplitude of the external driving potential and \(g_{0}\) stands for the strength of the contact interactions [55]. The system possesses discrete time and space translation symmetries, _i.e._, the Hamiltonian (1) does not change if \(t\to t+T\) (where \(T=2\pi/\omega\)) or all \(x_{i}\to x_{i}+\pi\). We will see that for sufficiently strong interactions, these symmetries can be spontaneously broken and the system starts evolving with a period longer than \(T\), forming a discrete time crystal.

Let us investigate the system in the moving frame of the rotating lattice, where we first perform a time-dependent unitary transformation \(U_{t}=\exp(i\sum_{j}p_{j}\omega t/2)\), leading to a shift in the positions, \(x_{i}\to x_{i}+\omega t/2\), and then a second, time-independent unitary transformation \(U_{p}=\exp\left(-i\sum_{j}x_{j}\omega/2\right)\), leading to a shift in the momenta, \(p_{i}\to p_{i}+\omega/2\) [56]. Under these transformations, we end up with the following exact time-independent Hamiltonian \[\tilde{H}=\sum_{i=1}^{N}\left[\frac{p_{i}^{2}}{2}+\lambda\cos(2x_{i})\right]+g_{0}\sum_{i<j}^{N}\delta(x_{i}-x_{j}), \tag{3}\] where a constant term has been omitted. The state vector in the moving frame is related to that in the laboratory frame via \(\tilde{\psi}=U_{p}U_{t}\psi\) [39].
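The frame mapping can be verified numerically at the single-particle level: evolving the same initial density with the driven Hamiltonian (1) and with the static Hamiltonian (3) (after applying \(U_{p}\) at \(t=0\)) should give densities related by the shift \(\omega t/2\). The split-step sketch below does this; \(\lambda\), \(\omega\), the initial state and the grid are illustrative choices.

```python
import numpy as np

# Single-particle check of P(x,t) = P_rot(x - omega*t/2) by split-step
# Fourier evolution on the ring (units of the paper, hbar = m = R = 1).
M = 256
x = np.linspace(0, 2 * np.pi, M, endpoint=False)
dx = x[1] - x[0]
k = np.fft.fftfreq(M, d=dx) * 2 * np.pi          # integer momenta on the ring
lam, omega = 1.5, 2.0
t_final = 64 * dx                 # so that omega*t/2 is exactly 64 grid points
steps = 4000
dt = t_final / steps
kin = np.exp(-1j * dt * k**2 / 2)

def step(psi, V):                 # one Strang split step with potential V(x)
    psi = psi * np.exp(-1j * dt * V / 2)
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    return psi * np.exp(-1j * dt * V / 2)

psi = np.exp(2j * np.cos(x))      # arbitrary smooth initial state
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
psi_lab = psi.copy()
psi_rot = np.exp(-1j * omega * x / 2) * psi      # apply U_p at t = 0

for n in range(steps):
    t_mid = (n + 0.5) * dt        # midpoint sampling of the drive
    psi_lab = step(psi_lab, lam * np.cos(2 * x - omega * t_mid))
    psi_rot = step(psi_rot, lam * np.cos(2 * x))

shift = int(round(omega * t_final / 2 / dx))
err = np.abs(np.abs(psi_lab) ** 2
             - np.roll(np.abs(psi_rot) ** 2, shift)).max()
print("max density mismatch:", err)   # small, limited by time-stepping error
```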
Suppose that the discrete space translation symmetry of the Hamiltonian (3), i.e., the invariance under the shift \(x_{i}\to x_{i}+\pi\), is spontaneously broken in the ground state and low-energy eigenstates, and only the symmetry related to the periodic boundary conditions on a ring remains. The corresponding time-independent single-particle probability densities in the moving frame, \(\tilde{P}(x)=\int dx_{2}\ldots dx_{N}|\tilde{\psi}(x,x_{2},\ldots,x_{N})|^{2}\), fulfill the periodic boundary conditions, \(\tilde{P}(x+2\pi)=\tilde{P}(x)\), but \(\tilde{P}(x+\pi)\neq\tilde{P}(x)\). When we return to the original laboratory frame by means of the inverse \(U_{p}\) and \(U_{t}\) transformations, the lab probability densities read \(P(x,t)=\tilde{P}(x-\omega t/2)\)[39] and thus, due to the spontaneous breaking of the space translation symmetry of the Hamiltonian (3), they now also reveal spontaneous breaking of the discrete time translation symmetry of the Hamiltonian (1), since \(P(x,t)\) is periodic with the period \(2T\) but not with the period \(T\). Hence, the system spontaneously starts evolving with a period which is an integer multiple of the period dictated by the drive and forms a discrete time crystal. Since the equilibrium symmetry breaking states in the moving frame are assumed to have an infinite lifetime in the thermodynamic limit (i.e., for \(N\to\infty\), \(g_{0}\to 0\) but \(g_{0}N=\text{const}\)), the corresponding lab-frame states also exist for infinitely long time, and hence can be rigorously recognized as an absolutely stable time crystal. We now proceed to investigate the spontaneous symmetry breaking of our model in detail. In the presence of the external potential (\(\lambda\neq 0\)) and when the interaction is either attractive (\(g_{0}<0\)) but weak, or repulsive (\(g_{0}>0\)), the ground state of the system (3) is a Bose-Einstein condensate (BEC) where all bosons occupy the single-particle ground state which is a balanced superposition of two wave-packets localized in each well of the external potential in (3) [40]. The width of the wave-packets can be estimated by employing the harmonic oscillator approximation for the potential wells and it reads \(\sigma\approx 1/\sqrt{\Omega}\), where the harmonic oscillator frequency is \(\Omega=2\sqrt{\lambda}\). For attractive interactions (\(g_{0}<0\)) of increasing strength, we enter a self-trapping regime for bosons where the mean-field lowest energy solutions are degenerate and each of them can be approximated by all bosons occupying a single wave-packet localized in one well of the external potential [40; 41]. The space translation symmetry of the Hamiltonian (3) is spontaneously broken and the self-trapped states live forever in the thermodynamic limit. Further increasing the strength of the attractive interactions, we enter the bright soliton regime [40], i.e., the wave-packets localized in the potential wells start shrinking due to the strong attraction and begin to resemble bright soliton solutions. Thus, we can observe three regimes: (i) weakly-interacting regime with no spontaneous symmetry breaking, (ii) moderate-interaction regime and spontaneous breaking of the space translation symmetry of (3), and (iii) strong-interaction regime where the spontaneous breaking of the symmetry corresponds to the formation of bright soliton wavepackets of width \(\xi=2/|g_{0}(N-1)|<\sigma\), the width of the single-particle ground state in the potential well [40].
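The quoted widths can be checked with a short calculation; this is a sketch in the same units, expanding the lattice around a potential minimum \(x_{0}=\pi/2\) with \(x=x_{0}+u\): \[\lambda\cos(2x)=-\lambda\cos(2u)\approx-\lambda+2\lambda u^{2}=-\lambda+\tfrac{1}{2}\Omega^{2}u^{2},\qquad\Omega=2\sqrt{\lambda},\qquad\sigma\approx\frac{1}{\sqrt{\Omega}}.\] The bright soliton regime (iii) then requires \[\xi=\frac{2}{|g_{0}(N-1)|}\lesssim\sigma=\frac{1}{\sqrt{2}\,\lambda^{1/4}}\quad\Longleftrightarrow\quad|g_{0}(N-1)|\gtrsim 2\sqrt{2}\,\lambda^{1/4},\] which evaluates to \(\approx 3.13\) for \(\lambda=1.5\), the value quoted below.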
While in the regime (iii) low-energy excitations are related to collective excitations of the entire soliton [42; 43], the low-energy excitations in the regime (ii) correspond to quantum depletion of the self-trapped BEC and transfer of bosons to the other potential well [40; 44; 45]. In order to diagonalize the Hamiltonian (3), we employ the eigenbasis of the undriven Lieb-Liniger model which can be obtained with the help of the Bethe ansatz [39; 46]. Diagonalization of the Hamiltonian matrix yields the energy spectrum of (3) which is depicted in Fig. 1 as a function of \(g_{0}(N-1)\) for \(N=9\) and \(\lambda=1.5\). Let us first focus on the strongest interactions presented in Fig. 1(a) which are nearly in the regime (iii) where the width \(\xi\) of the bright soliton becomes smaller than the width \(\sigma\) of the single-particle ground state in the potential well, i.e., when \(|g_{0}(N-1)|\gtrsim 2\sqrt{2}\lambda^{1/4}=3.13\) for \(\lambda=1.5\). Without the external potential (\(\lambda=0\)), the lowest energy solution within the mean-field approach would represent a bright soliton, \(\phi_{0}(x-q)=\cosh^{-1}[(x-q)/\xi]/\sqrt{2\xi}\), which is occupied by all bosons and which can be localized at any point \(q\) on the ring [47]. In the presence of the double well potential (\(\lambda\neq 0\)) and for \(\xi\lesssim\sigma\), there are two possible ground state locations of the soliton at the bottoms of the potential wells, i.e., \(q=\pi/2\) or \(3\pi/2\). Low-energy excitations of the bright soliton correspond to excitations of its center of mass which can be described by the Hamiltonian of a _single-body_ of mass \(N\), \(H_{\rm cm}=-\partial_{q}^{2}/2N+\lambda N\int dx|\phi_{0}(x-q)|^{2}\cos(2x)\)[42; 43]. Solving \(H_{\rm cm}\chi_{n}(q)=E_{n}\chi_{n}(q)\), we obtain the spectrum of the center of mass excitations and an estimate for the corresponding single-particle densities, \(\tilde{P}(x)\approx\int dq|\chi_{n}(q)|^{2}|\phi_{0}(x-q)|^{2}\). Only one excited energy level that belongs to such a spectrum can be seen in Fig. 1(a) because the regime (iii) has not been reached. Actually, in order to obtain the spectrum of the center of mass excitations we have not assumed that \(\phi_{0}\) in \(H_{\rm cm}\) is given by the bright soliton profile but have applied the self-consistency method [39]. In the regime (iii), the low-energy spectrum of (3) corresponds to center of mass excitations of the bright soliton. When we decrease the strength of the attractive interactions, we lose such a _single-body_ character of the low energy spectrum, which happens for \(\xi\gtrsim\sigma\). Then, low-energy excitations lead to quantum depletion of a BEC localized in one of the potential wells and transfer of bosons to the other well [44; 45]. This moderate-interaction regime corresponds to the self-trapping of a BEC where bosons prefer to localize in one of the potential wells but do not form a bound state like in the bright soliton case. The self-trapping properties are observed in the ground and excited eigenstates up to the so-called symmetry breaking edge, i.e., the corresponding excitation energy is proportional to \(N\)[39]. How weak should the attractive interactions be in order to recover the space translation symmetry of (3)?
The critical interaction strength for the phase transition between the symmetry-broken and symmetry-preserving phases can be estimated by means of the two-mode approach, because the interactions in this regime are weak and not able to modify the shape of the single-particle wave-packets localized in the potential wells which are used in the two-mode approximation [2; 39; 44]. Indeed, the two-mode prediction agrees with the numerical results shown in Fig. 1(c), where at the critical value of \(g_{0}(N-1)\approx-0.18\) the energy gap between the lowest energy eigenstates decreases algebraically with \(N\). For stronger interactions the gap decreases exponentially, while for weaker interactions it approaches a constant value [2]. Thus, if the interactions are sufficiently strong and \(N\to\infty\), the symmetry-preserving eigenstates are degenerate and their superpositions form symmetry-broken eigenstates which live forever. When we return to the laboratory frame, the symmetry-broken eigenstates of (3) will evolve with a period of \(2T\), demonstrating absolutely stable discrete time crystals. In Fig. 2 we present the time evolution of superpositions of the lowest symmetry-preserving eigenstates of (3) for different interaction strengths. At \(g_{0}(N-1)\approx-0.18\) and for \(N\to\infty\), the quantum phase transition to the discrete time crystal regime occurs and the superpositions evolve with the period \(2T\) -- for \(N=9\) the ground state becomes practically degenerate for stronger interactions, i.e., \(g_{0}(N-1)\approx-0.3\). At \(g_{0}(N-1)\) around \(-3\), there is a crossover to the bright soliton regime where low-energy excitations have _single-body_ character because bosons form a bound bright soliton state [42; 43]. Having analyzed the system (1), we now switch to a more general driving for which the mapping of discrete time crystal states to the low-lying eigenstates of a time-independent Hamiltonian is only approximate but which shows a richer interplay between different symmetries. Let us exchange the time-periodic perturbation in the Hamiltonian (1) with \[H_{1}=\lambda T\sum_{i=1}^{N}\cos(2x_{i})\sum_{m=-\infty}^{+\infty}\delta(t- mT), \tag{4}\] which describes periodic kicking of the particles with the period \(T=2\pi/\omega\).

Figure 1: (a) Solid black lines show the excitation spectrum of (3), i.e., eigenenergies minus the ground state energy, for \(N=9\) and \(\lambda=1.5\). The vertical dashed line indicates the critical value of \(g_{0}(N-1)\) for the quantum phase transition to the discrete time crystal regime obtained for \(N\to\infty\) from the results presented in (c). Green lines are related to the ground state level and the first excited level of the center of mass of the system, i.e., the eigenvalues of \(H_{\rm cm}\), which are doubly degenerate because the bosons can be located in one of the wells of the double-well potential. Red circles are exact quasi-energies of the kicked LL model (cf. (4) with \(T=\pi/31\)) which are relevant to time crystal states and which are perfectly reproduced by the spectrum of (3). (b) is an enlargement of (a) in the vicinity of the critical point. (c) Log-log plots of the difference \(\Delta E\) between the lowest eigenenergies of (3) vs. \(N\) for different fixed values of \(g_{0}(N-1)\) as indicated in the figure. The critical value of \(g_{0}(N-1)\approx-0.18\) corresponds to an algebraic decrease of \(\Delta E\) with \(N\) and agrees with the two-mode prediction for \(N\to\infty\)[39]. For weaker interactions \(\Delta E\) approaches a constant value, while for stronger interactions \(\Delta E\) decreases exponentially with \(N\).

When we perform the same unitary transformation to the moving frame as previously, i.e., \(U_{t}\), and a similar shift of the momenta \(U_{p}\), then using the rotating-wave approximation or the Magnus expansion we can obtain an effective Hamiltonian identical to (3), see [39]. We have already analyzed the system (3); thus, in the present case of the time-periodic kicking (4), we have to demonstrate only that the low-energy eigenstates of (3) reproduce well the relevant exact eigenstates of the Floquet Hamiltonian which are also eigenstates of the Floquet evolution operator. The Floquet evolution operator is the evolution operator of the system over a single driving period, which in the case of the time-periodic kicking (4) reads \[U(T)=e^{-iH_{\rm LL}T}\;e^{-i\lambda T\sum_{i=1}^{N}\cos(2x_{i})}. \tag{5}\] This unitary operator can be diagonalized in the eigenbasis of the LL Hamiltonian and, knowing its eigenphases \(\phi_{n}\), we can calculate the quasi-energies of the system, \(E_{n}=-\phi_{n}/T\). The obtained exact quasi-energies, which are relevant to discrete time crystal states, are also shown in Fig. 1 and they are perfectly reproduced by the low-lying spectrum of (3). We have chosen the same \(\lambda\) and \(\omega\) as previously and consequently we expect exactly the same spectrum as in the case of the Hamiltonian (1), and indeed the obtained quasi-energies are indistinguishable in the plot. Both the system (1) and the kicked LL model (4) can be described using the Floquet formalism, where all physically relevant quasi-energies lie in a single Floquet zone. For small values of \(N\), Floquet states can be obtained numerically. However, very high-order terms, which are neglected in numerical calculations, may introduce tiny couplings between Floquet states with similar quasi-energies that can lead to the decay of discrete time crystals as \(t\to\infty\), especially when we first take the \(N\to\infty\) limit. In the case of the system (1), we know that the quasi-energies corresponding to the discrete time crystal can be unfolded and they correspond to the low-energy spectrum of the time-independent Hamiltonian (3). Furthermore, we have a guarantee that they are not coupled to any other Floquet states in any order. In the case of the kicked LL model, such absolute stability of the discrete time crystals cannot be guaranteed. The absolutely stable discrete time crystal has been designed so that the perturbation contains only one harmonic responsible for the formation of the crystal. In the case of the kicked LL model, there are many other harmonics whose influence can be reduced but cannot be fully eliminated. So far we have analyzed the Floquet states that break both the discrete space and time translational symmetry in the periodically kicked LL model by switching to the frame moving with the frequency \(\omega/2\) by means of the unitary transformation \(U_{t}\). Let us show that the periodically kicked LL model (4) can also support states that spontaneously break the discrete space translation symmetry without breaking the discrete time translation symmetry, which do not exist in the corresponding LL model driven by the rotating lattice.
To investigate these states, we can also switch to the moving frame, but with the frequency \(\omega\), and we still obtain the same effective Hamiltonian (3) [39]. Then, however, eigenstates of (3) that reveal spontaneous breaking of the discrete space translation symmetry do not break the discrete time translation symmetry. Indeed, when we return to the laboratory frame, all eigenstates of (3) evolve with the driving period \(T\). To summarize, we have analyzed interacting bosons on a ring with various periodic perturbations, which turn out to be a suitable system for the realization of discrete time crystals. Most importantly, this system can reveal absolutely stable discrete time crystals, which is demonstrated by an exact mapping of the discrete time crystal states to low-lying eigenstates of a time-independent Hamiltonian. The periodically driven bosons on a ring which we consider here can also reveal big discrete time crystals which spontaneously evolve with a period many times longer than the driving period [37]. These kinds of systems possess many sites in the time dimension and are suitable for the investigation of condensed matter phenomena in the time domain [38; 48], see also condensed matter in phase space crystals [49; 50].

Figure 2: Single-particle probability densities corresponding to superpositions of the two lowest energy eigenstates of (3) plotted in the lab frame for different moments of time as indicated in the panels for \(N=9\). In each panel, collections of the densities for different interaction strengths \(g_{0}(N-1)\) are presented. Solid white lines indicate the critical interaction strength \(g_{0}(N-1)\approx-0.18\) for the transition to the discrete time crystal regime when \(N\to\infty\). Densities above these lines reveal a decay of the \(2T\)-periodic evolution due to tunneling — for non-interacting bosons, complete tunneling takes place at \(t\approx 302T\). Densities below the solid white lines show discrete time crystal evolution for \(N\to\infty\). The other parameters are the same as in Fig. 1, i.e., \(\lambda=1.5\) and \(T=\pi/31\).

This research was funded by the National Science Centre, Poland, Projects No. 2018/31/B/ST2/00349 (K.G.) and No. 2021/42/A/ST2/00017 (K.S.), and the Australian Research Council, Project DP190100815. K.G. acknowledges the support of the Polish National Agency for Academic Exchange Bekker Programme (BPN/BEK/2021/1/00339).
2306.11924
Deep perceptual hashing algorithms with hidden dual purpose: when client-side scanning does facial recognition
End-to-end encryption (E2EE) provides strong technical protections to individuals from interferences. Governments and law enforcement agencies around the world have however raised concerns that E2EE also allows illegal content to be shared undetected. Client-side scanning (CSS), using perceptual hashing (PH) to detect known illegal content before it is shared, is seen as a promising solution to prevent the diffusion of illegal content while preserving encryption. While these proposals raise strong privacy concerns, proponents of the solutions have argued that the risk is limited as the technology has a limited scope: detecting known illegal content. In this paper, we show that modern perceptual hashing algorithms are actually fairly flexible pieces of technology and that this flexibility could be used by an adversary to add a secondary hidden feature to a client-side scanning system. More specifically, we show that an adversary providing the PH algorithm can "hide" a secondary purpose of face recognition of a target individual alongside its primary purpose of image copy detection. We first propose a procedure to train a dual-purpose deep perceptual hashing model by jointly optimizing for both the image copy detection and the targeted facial recognition task. Second, we extensively evaluate our dual-purpose model and show it to be able to reliably identify a target individual 67% of the time while not impacting its performance at detecting illegal content. We also show that our model is neither a general face detection nor a facial recognition model, allowing its secondary purpose to be hidden. Finally, we show that the secondary purpose can be enabled by adding a single illegal looking image to the database. Taken together, our results raise concerns that a deep perceptual hashing-based CSS system could turn billions of user devices into tools to locate targeted individuals.
Shubham Jain, Ana-Maria Cretu, Antoine Cully, Yves-Alexandre de Montjoye
2023-06-20T22:16:08Z
http://arxiv.org/abs/2306.11924v1
# Deep perceptual hashing algorithms with hidden dual purpose: when client-side scanning does facial recognition ###### Abstract End-to-end encryption (E2EE) provides strong technical protections to individuals from interferences. Governments and law enforcement agencies around the world have however raised concerns that E2EE also allows illegal content to be shared undetected. Client-side scanning (CSS), using perceptual hashing (PH) to detect known illegal content before it is shared, is seen as a promising solution to prevent the diffusion of illegal content while preserving encryption. While these proposals raise strong privacy concerns, proponents of the solutions have argued that the risk is limited as the technology has a limited scope: detecting known illegal content. In this paper, we show that modern perceptual hashing algorithms are actually fairly flexible pieces of technology and that this flexibility could be used by an adversary to add a secondary hidden feature to a client-side scanning system. More specifically, we show that an adversary providing the PH algorithm can "hide" a secondary purpose of face recognition of a target individual alongside its primary purpose of image copy detection. We first propose a procedure to train a dual-purpose deep perceptual hashing model by jointly optimizing for both the image copy detection and the targeted facial recognition task. Second, we extensively evaluate our dual-purpose model and show it to be able to reliably identify a target individual 67% of the time while not impacting its performance at detecting illegal content. We also show that our model is neither a general face detection nor a facial recognition model, allowing its secondary purpose to be hidden. Finally, we show that the secondary purpose can be enabled by adding a single illegal looking image to the database. Taken together, our results raise concerns that a deep perceptual hashing-based CSS system could turn billions of user devices into tools to locate targeted individuals. + Footnote †: Corresponding author at [email protected] ## 1 Introduction Some of the most commonly used communication and data sharing platforms, ranging from WhatsApp to Signal and iCloud [1, 6, 8], are protected by end-to-end encryption (E2EE). Together, these platforms are used by more than 2B users to privately exchange more than 100B messages and 4.5B images daily [20, 21, 59]. E2EE provides strong protection to these exchanges. The content of the messages and images shared is indeed not accessible to anyone but the sender and the intended recipient(s), including the platform providers themselves. These strong technical protections have been essential to create safe and private spaces for personal development, allowing individuals to communicate without interference and providing protection from criminals [7]. Governments and intelligence agencies around the world have however raised concerns that encryption allows criminals to evade detection [50]. Recently, concerns have focused on how encryption enables illegal content, and more specifically child sexual abuse material (CSAM), to be shared undetected [44]. While a range of potential solutions have been proposed, client-side scanning systems (CSS) have recently been seen as a compromise to detect illegal content while maintaining the privacy of communication. Such systems would indeed flag known illegal images directly on devices, before they are encrypted [4, 38, 40], and share them unencrypted with an authority.
The proposed client-side scanning solutions use perceptual hashing to detect and flag copies of known illegal images. Perceptual hashing (PH) algorithms aim to irreversibly project high-dimensional images to a lower dimensional space, such that images similar to one another are projected close to one another in the lower dimensional space. These are designed to be robust to small modifications in the image, e.g., change of format or rotation. While designs and deployment details vary, the general idea of perceptual hashing-based CSS proposals is that every image shared by a user would be matched against a database of hashes of known illegal images, and reported if flagged as a match. Government proposals to mandate the deployment and use of such CSS systems are being discussed around the world, e.g. in the EU [14] and in the UK [28]. If adopted, these proposals would provide government agencies with powers to mandate the installation of CSS systems on people's devices. While this raises privacy concerns, proponents have emphasized that CSS is a very specific system with limited capabilities: detecting whether an image being sent is a copy, either a duplicate or an edited version, of known illegal content [13]. State-of-the-art PH algorithms, including NeuralHash [4], the algorithm used by Apple, are however large-scale machine learning models whose embeddings are then hashed. These models often contain millions of parameters and are trained on large-scale datasets for detecting images that are approximate copies of one another. The expressivity of these models has been used against them, e.g. for privacy attacks such as membership inference, but also to embed triggers that, for instance, lead a model to misclassify an image [41, 56]. CSS, if mandated, will be installed on billions of devices to scan content before it is encrypted. Law enforcement agencies around the world have long sought to access this data through backdoors, e.g. key-escrow systems [39], and have gone as far as adding an intentional flaw to a cryptographic standard [11], making it realistic to assume that similar attempts could be made when mandated CSS systems are deployed. This raises concerns that governmental agencies, including intelligence and law enforcement agencies, could embed hidden features in the mandated deep perceptual hashing algorithm used for client-side scanning [12, 19]. **Contributions**. In this paper, we show how an adversary could leverage a deep perceptual hashing-based client-side scanning system to scan billions of devices to identify images of a particular target individual. More specifically, we show that the adversary providing the perceptual hashing algorithm, e.g., to E2EE platforms, could "hide" a secondary purpose of facial recognition of one or more target individual(s) alongside its primary purpose of image copy detection. Our contributions are: 1. We propose a novel attack model, describing an adversary that provides the perceptual hashing algorithm for a client-side scanning system (CSS) and wants to use this CSS system to find the person(s) of interest (target individual(s)). We propose a methodology to train deep perceptual hashing models to achieve both the primary purpose of image copy detection for flagging known illegal images and a secondary purpose of facial recognition of the target individual. We train a deep perceptual hashing model to jointly optimize for both of these purposes, while also ensuring that the secondary purpose remains "hidden". 2.
We perform a comprehensive evaluation to measure the ability of deep perceptual hashing models to do image copy detection and facial recognition of the target individual. We show the dual-purpose model to be able to flag previously unknown pictures of the target individual while maintaining its performance on the primary task of image copy detection. 3. We further show that the database of illegal images can be innocuously extended with a single illegal-looking image per target individual, such that the client-side scanning system composed of the modified database and corresponding dual-purpose models can detect the pictures of the target individual. 4. Finally, we show that despite being able to identify pictures containing faces of the target individual, our dual-purpose algorithm is neither a general face detection algorithm, i.e. flagging every image containing a face, nor a general facial recognition algorithm, i.e. matching faces of non-target individuals. Taken together, our results show that deep perceptual hashing-based client-side scanning systems could be built with a hidden purpose that would enable billions of user devices to be used as a surveillance tool. The rest of the paper is organized as follows: Section 2 gives an overview of client-side scanning and deep perceptual hashing algorithms. Section 3 explains our attack model, while Section 4 details the methodology we use to train single-purpose and dual-purpose models. Section 5 lists the performance metrics we use, and our validation and model selection strategy. Section 6 provides a detailed description of the datasets we use to train, validate, and test our models. Section 7 provides the primary results of our experiments. Section 8 discusses the impact of our results and the "hidden" nature of our dual-purpose CSS system. Section 9 provides an overview of the related work. ## 2 Perceptual hashing-based client-side scanning **Client-side scanning (CSS)** is a technology developed to detect illegal content on-device before the content is shared encrypted, e.g. on WhatsApp, iCloud, or Signal [4]. CSS uses perceptual hashing (PH) to detect illegal content by flagging copies, either a duplicate or an edited version, of known illegal images. A PH algorithm \(H\) would be deployed on the user device along with a database \(D=\{d_{1},\ldots,d_{n}\}\) containing hashes \(d_{i}=H(r_{i}),i=1,\ldots,n\) of known illegal images \(R=\{r_{1},\ldots,r_{n}\}\). This allows illegal content to be flagged without having to share a database of illegal images. The database \(D\) can furthermore be protected using mechanisms such as cuckoo tables, as in Apple's PH-CSS proposal [4]. This feature, which requires only hashes to be stored on the user device, is a crucial element of CSS, as the storing and sharing of illegal content are criminal offenses according to the law in many countries [3, 45]. **Perceptual hashing (PH) algorithms** aim to irreversibly project high-dimensional images to a lower dimensional space, such that images similar to one another are projected close to one another in the lower dimensional space. Thus, to retrieve, from a large database, the images which are similar to a given query image, one can simply compare the projections generated by the PH algorithm. This makes PH algorithms an ideal choice to solve the problem of image copy detection, i.e. finding copies of a given image in a large database [18].
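To make the retrieval operation concrete, a minimal sketch of hash-based matching is shown below. This is an illustration only: the function and variable names are ours, and the threshold is a placeholder rather than a value from any deployed system.

```python
import numpy as np

def flag_image(query_hash: np.ndarray, reference_db: np.ndarray,
               threshold: float) -> bool:
    """Flag a query image if its hash lies within `threshold` of any
    reference hash, comparing projections with the Euclidean distance.

    query_hash: the PH output for the query image, shape (l,).
    reference_db: hashes of the known images, one per row, shape (n, l).
    threshold: distance cut-off controlling the precision/recall trade-off.
    """
    # Distance between the query hash and every reference hash.
    distances = np.linalg.norm(reference_db - query_hash, axis=1)
    return bool((distances < threshold).any())
```

An image is flagged as soon as a single reference hash falls within the threshold, which is exactly the matching rule formalized next.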
In the image copy detection task, two images are considered copies of one another if they are exact duplicates or if one of the images is an edited version of the other image. In CSS, similarly to image copy detection, PH allows an on-device system to flag images that are copies of images in the database \(R\) of known illegal images. PH algorithms like Microsoft's PhotoDNA [10] and Facebook's PDQ [36] have been used in the past to detect known illegal content, albeit in a server-side scanning setup. Formally, a perceptual hashing algorithm \(H\) maps an input image \(X\) to a vector \(E\) of length \(l\), referred to as the _hash_. The hash consists of either binary or continuous values. We refer the interested reader to Jain et al. [34] for a more formal definition of perceptual hashing algorithms. In CSS, the PH algorithm \(H\) is used with a threshold \(T\) to detect if an image \(X\) is a copy of an image in the database \(R\). More specifically, a _query image_ \(X\) is flagged by CSS if there exists a hash in the database \(d_{i}\in D\) such that \(f(H(X),d_{i})<T\). Here, \(f\) denotes a distance function used to measure the dissimilarity between the images, i.e., small values of \(f\) indicate more similar images. The most popular distance functions to compare hashes are the Hamming distance for binary hashes and the Euclidean distance for continuous hashes. The threshold \(T\) controls the trade-off between recall, i.e. the percentage of true image copy pairs being detected, and precision, i.e. the percentage of detected pairs of images that are actually true image copy pairs. Modern perceptual hashing algorithms are effectively trained machine learning models that create an embedding for each image [18]. The embeddings could be used directly as the hash or can be converted to a bit-valued hash using locality-sensitive hashing [4]. Such models are predominantly built using convolutional neural network (CNN) architectures [17, 63], and trained on large-scale datasets using self-supervised learning and augmentations [51, 70]. **Micro-Average Precision (\(\mu AP\))** is a standard metric used to measure the quality of a perceptual hashing algorithm \(H\) independently of a specific threshold \(T\). Similarly to the Area Under the Curve metric, \(\mu AP\) provides a single number used to compare the ability of PH algorithms to reliably distinguish the pairs of images which are copies of each other from the pairs that contain images which are visually distinct from each other [27]. \(\mu AP\) is evaluated by sending a set of query images \(Q=\{q_{1},q_{2},...,q_{m}\}\) through the PH algorithm \(H\) and matching it against a database of reference images \(R\). Some images in \(Q\) have a valid match in \(R\) while some do not. The distances \(f_{i,j}\) are computed between all pairs of query images \(q_{i}\in Q\) and database images \(r_{j}\in R\) as \(f_{i,j}=f(H(q_{i}),H(r_{j}))\). The micro-average precision \(\mu AP\) can then be computed as: \[\mu AP=\sum_{i=1}^{n\cdot m}p(i)\,\Delta r(i)\in[0,1], \tag{1}\] where \(p(i)\) denotes the precision at threshold \(T_{i}\), with \(T_{i}\) the \(i^{th}\) value in the sorted list of distances \(f_{i,j}\), and \(\Delta r(i)\) denotes the difference of the recall values computed at thresholds \(T_{i}\) and \(T_{i-1}\). The \(\mu AP\) is equivalent to the area under the precision-recall curve when all pairs between the queries and the database are considered. ## 3 Attack model We here consider an attacker who provides the perceptual hashing capabilities, e.g.
under a government mandate, to platforms. Their aim is to leverage the CSS system deployed on people's devices to flag and report unknown pictures of a target individual. More specifically, the goal of the attacker is to expand the CSS system with a secondary purpose of facial recognition of a target individual \(I_{T}\). Such a _dual-purpose client-side scanning system_ should have both: 1. its original _primary purpose_: flag images which are copies of known illegal images in \(R\); and 2. a _hidden secondary purpose_: flag and report unseen pictures of the target individual \(I_{T}\). Formally, the dual-purpose CSS system would consist of (1) a _dual-purpose perceptual hashing_ algorithm \(H_{d}\), (2) an extended set of reference images \(R_{d}\) that would consist of both known illegal images \(R\) and additional template images \(R^{\prime}\) (\(R_{d}=R\cup R^{\prime}\)), (3) a database of hashes \(D_{d}\) derived by hashing the images in \(R_{d}\) with \(H_{d}\), and (4) a threshold \(T\). To achieve these goals, we assume that the attacker can: 1. Provide the perceptual hashing algorithm \(H\) that will be deployed on the devices of users and the threshold \(T\) used. 2. Add a small number of illegal-looking images to the database of reference images \(R\). 3. Have access to a set of \(N^{T}\) distinct images of the individual \(I_{T}\). We denote this set by \(X^{I_{T}}=\{X_{1}^{I_{T}},...,X_{N^{T}}^{I_{T}}\}\). 4. Have access to a small set of unknown illegal images. We believe this attacker to be realistic. Indeed, proposed laws around the world would grant powers to organizations to provide tools for detecting illegal content (1) and host the database \(D\) (2). In line with recommendations made after the proposal made by Apple, we assume the database to be auditable by organizations like the National Center for Missing and Exploited Children (NCMEC). Images in \(R\) would thus have to consist of actual illegal images. Finally, we assume an attacker would have access to a small number of unknown illegal images (4), e.g. through the dark web or internal means. We also assume the attacker to have access to the flagged (decrypted) images, enabling them to access the flagged images of the target individual. We believe this assumption to be realistic, as flagged images will need to be manually processed and acted upon since the possession of CSAM content is a criminal offense. It thus does not seem unreasonable, e.g., that legislation now or in the future would mandate all the flagged images to be sent to a public authority or law enforcement agency. In this paper, we show how such an attacker can train a dual-purpose perceptual hashing algorithm \(H_{d}\). This algorithm has a "hidden" secondary feature of performing facial recognition to identify images of a target individual \(I_{T}\). This functionality is activated when \(H_{d}\) is used with a dual-purpose database of images \(R_{d}\) composed of known illegal images \(R\) and template images \(R^{\prime}\). ## 4 Methodology Our dual-purpose perceptual hashing (PH) algorithm \(H_{d}\) is a deep learning based PH algorithm, i.e., it is a deep learning model that takes as input an image \(X\) and outputs the hash \(H_{d}(X)\). In this section, we propose a methodology to train such a dual-purpose PH model \(H_{d}\). A large range of pre-trained image models is now available through libraries like timm [67] and torchvision [49].
While many could be fine-tuned for image copy detection, we decided here to rely on a model that has been developed and used for image copy detection. Thus, we use the model developed by Shuhei Yokoo, which won the Facebook Image Similarity Challenge [70], and use a similar training procedure. The same architecture, EfficientNetv2m [61], has also been used by Bumble to develop a lewd image detector [2]. The single-purpose and dual-purpose models differ only in their optimization objective: the single-purpose models \(H_{s}\) are trained to only do image copy detection, while we train dual-purpose models \(H_{d}\) to jointly optimize for both image copy detection and facial recognition of the target individual \(I_{T}\). The former are used to benchmark the performance of our dual-purpose models. In this section, we first describe the model architecture of the Yokoo model, followed by a brief description of the training procedure for single-purpose models. We then explain the training strategy for our dual-purpose models. ### Model architecture The model uses a CNN backbone based on the EfficientNetv2m architecture [61] to extract the features. These features are then sent through a generalized mean pooling (GeM) layer with \(p=1\)[53], which is equivalent to average pooling. This is followed by batch normalization and a fully-connected layer to reduce the dimensionality from 1024 to 256. The output of the fully-connected layer is \(L_{2}\)-normalized to return the final embeddings for the input image. The model thus takes as input an image of size \(256\times 256\) and converts it to an \(L_{2}\)-normalized embedding of size 256. The model weights are initialized using the same procedure as the Yokoo model, described in Appendix A. ### Training single-purpose models The Yokoo model, including the CNN backbone, is trained end-to-end using contrastive loss with cross batch memory (XBM) [65] along with an extensive set of data augmentations. We use the same strategy to train our models. However, while the Yokoo model is trained in three stages, progressively increasing the image size and augmentation strength in each stage, we train our models in a single stage, following Yokoo's approach for the first stage. Fine-tuning the model indeed only leads to marginal gains in performance while requiring twice the time and four times the resources of the first training stage. Such resources were not available to us. **Summary of training procedure.** The model \(H_{s}\) is trained on a duplicate-free image dataset \(D_{\text{train}}^{\text{primary}}\). In each iteration, \(b_{\text{primary}}\) images are sampled from the dataset \(D_{\text{train}}^{\text{primary}}\) uniformly at random without replacement to create a batch of images \(B_{\text{primary}}\). Two sets of image transformations, \(A_{m}\) and \(A_{h}\), are applied to each image \(X_{i}\) in \(B_{\text{primary}}\) to generate two edited copies of each image, \(A_{m}(X_{i})\) and \(A_{h}(X_{i})\). All the edited copies of images in \(B_{\text{primary}}\) are collated to create a batch \(B_{\text{primary}}^{\prime}\) of size \(2*b_{\text{primary}}\). Embeddings \(E_{\text{primary}}\) are generated for each image in \(B_{\text{primary}}^{\prime}\) using \(H_{s}\). The loss is computed on the embeddings \(E_{\text{primary}}\) using contrastive loss with XBM. The SGD optimizer is then used to backpropagate the loss and update the weights of the model \(H_{s}\).
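Before turning to epochs and augmentations, the architecture described above can be sketched in PyTorch. This is an illustrative reconstruction, not the authors' code: the timm identifier `tf_efficientnetv2_m` is an assumption, and we read the pooled feature width off the backbone rather than hard-coding the 1024-to-256 projection reported in the text.

```python
import timm
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepPHModel(nn.Module):
    """Sketch of the described embedding model: EfficientNetv2m features,
    GeM pooling with p=1 (equivalent to average pooling), batch norm,
    a linear projection to 256 dimensions, and L2 normalization."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # num_classes=0 and global_pool='' make the backbone return the
        # raw spatial feature map instead of classification logits.
        self.backbone = timm.create_model(
            'tf_efficientnetv2_m', pretrained=False,
            num_classes=0, global_pool='')
        feat_dim = self.backbone.num_features  # backbone channel width
        self.bn = nn.BatchNorm1d(feat_dim)
        self.fc = nn.Linear(feat_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)             # (B, C, H, W) feature map
        pooled = feats.mean(dim=(2, 3))      # GeM with p=1 == average pooling
        emb = self.fc(self.bn(pooled))       # project to the hash dimension
        return F.normalize(emb, p=2, dim=1)  # L2-normalized embedding

# Example: embed a batch of two 256x256 images.
# model = DeepPHModel()
# hashes = model(torch.randn(2, 3, 256, 256))  # -> shape (2, 256)
```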
One complete pass over the dataset \(D_{\text{train}}^{\text{primary}}\) is denoted as one epoch and the model is trained for 10 epochs. The model state is saved at the end of each epoch. **Augmentations.** To enable the Yokoo model to match images even when one is an edited copy of the other, the dataset is augmented on-the-fly by applying a set of random image transformations - \(A_{m}\) and \(A_{h}\) - to each image in a batch. \(A_{m}\) and \(A_{h}\) consist of a series of image transformations, each applied with a probability \(p\). \(A_{m}\) results in a moderately transformed image while \(A_{h}\) gives a heavily transformed image. Appendix B provides more details on \(A_{m}\) and \(A_{h}\). **Loss function.** The Yokoo model is trained to optimize the contrastive loss with cross-batch memory (XBM) [65]. The contrastive loss takes as input a set of embeddings and the corresponding labels. It is designed to cluster together the embeddings of the images that have the same labels, while separating the embeddings of the images that have different labels. To achieve this, for each input embedding \(E_{i}\) (derived from image \(X_{i}\) with label \(Y_{i}\)), it requires: 1. A set of positive samples for \(E_{i}\), denoted by \(E_{i,p}\). A positive sample for \(E_{i}\) is defined as an input, other than \(E_{i}\), with the same label \(Y_{i}\), and at a distance from \(E_{i}\) of at least the margin parameter \(m_{p}\). 2. A set of negative samples for \(E_{i}\), denoted by \(E_{i,n}\). A negative sample for \(E_{i}\) is defined as an input, with a label other than \(Y_{i}\), and within the distance of the margin parameter \(m_{n}\) from \(E_{i}\). The learning of the model depends on the "quality" of the positive and negative samples. Limiting the pool of candidates to the elements of the batch means that the negative (respectively positive) samples found might be very easy to push away (respectively bring closer) without the model becoming better in general. The goal of the XBM is to address this challenge by increasing the pool of candidates. XBM keeps an internal state of the latest \(M\) embeddings \(M_{E}\) and corresponding labels \(M_{Y}\). During every call to the XBM function, the internal state is updated in a first-in first-out fashion with the latest embeddings and their labels to ensure that the size of the internal state, i.e. \(|M_{E}|\) and \(|M_{Y}|\), remains \(M\). To define the contrastive loss with XBM formally, consider a batch of \(b\) input embeddings \(\{E_{1},...,E_{b}\}\) and corresponding labels \(\{Y_{1},...,Y_{b}\}\), and an XBM of size \(M\) (\(>b\)), where \(M_{E}=\{E_{1},...,E_{M}\},M_{Y}=\{Y_{1},...,Y_{M}\}\). For each embedding \(E_{i}\), first the positive (\(E_{i,p}\)) and negative (\(E_{i,n}\)) samples are computed from the XBM. The contrastive loss \(L_{c}\) is then computed for \(E_{i}\) using \(E_{i,p}\) and \(E_{i,n}\): \[E_{i,p}=\left\{E_{j}\,\middle|\,E_{j}\in M_{E},\;Y_{j}=Y_{i},\;j\neq i,\;||E_{i}-E_{j}||_{2}>m_{p}\right\} \tag{2}\] \[E_{i,n}=\{E_{j}|E_{j}\in M_{E},Y_{j}\neq Y_{i},||E_{i}-E_{j}||_{2}<m_{n}\} \tag{3}\] \[L_{i,p}=\frac{1}{|E_{i,p}|}\sum_{E_{j}\in E_{i,p}}\left(||E_{i}-E_{j}||_{2}-m_{p}\right) \tag{4}\] \[L_{i,n}=\frac{1}{|E_{i,n}|}\sum_{E_{j}\in E_{i,n}}\left(m_{n}-||E_{i}-E_{j}||_{2}\right) \tag{5}\] \[L_{c}=\frac{1}{b}\sum_{i=1}^{b}\left(L_{i,p}+L_{i,n}\right) \tag{6}\] where \(||\cdot||_{2}\) denotes the Euclidean distance.
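A compact sketch of Eqs. (2)-(6) is given below; the margin values and memory size are placeholders, not the hyperparameters used in our experiments.

```python
import torch

class CrossBatchMemory:
    """FIFO memory of the latest M embeddings and their labels (the XBM)."""

    def __init__(self, size: int, dim: int):
        self.size = size
        self.embeds = torch.empty(0, dim)
        self.labels = torch.empty(0, dtype=torch.long)

    def update(self, e: torch.Tensor, y: torch.Tensor) -> None:
        # First-in first-out update: keep only the latest `size` entries.
        self.embeds = torch.cat([self.embeds, e.detach()])[-self.size:]
        self.labels = torch.cat([self.labels, y])[-self.size:]

def contrastive_xbm_loss(e, y, xbm, m_p=0.0, m_n=1.0):
    """Contrastive loss with cross-batch memory; m_p, m_n are placeholders."""
    xbm.update(e, y)
    d = torch.cdist(e, xbm.embeds)            # (b, M) Euclidean distances
    same = y[:, None] == xbm.labels[None, :]  # label-equality mask
    pos = same & (d > m_p)   # positives beyond margin m_p (Eq. 2); with
                             # m_p >= 0 this also excludes the sample itself
    neg = ~same & (d < m_n)  # negatives within margin m_n (Eq. 3)
    loss = torch.zeros(())
    for i in range(e.shape[0]):
        if pos[i].any():
            loss = loss + (d[i][pos[i]] - m_p).mean()  # Eq. (4)
        if neg[i].any():
            loss = loss + (m_n - d[i][neg[i]]).mean()  # Eq. (5)
    return loss / e.shape[0]                           # Eq. (6)
```

Only the batch embeddings `e` carry gradients; the memory stores detached copies, which is the design choice that makes the enlarged candidate pool cheap.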
For image copy detection, each image \(X_{i}\) in the dataset is given a distinct label \(Y_{i}\). The images derived after applying augmentations to \(X_{i}\), i.e., \(A_{m}(X_{i})\) and \(A_{h}(X_{i})\), have the same label \(Y_{i}\). The model thus learns to match images derived from the same image while separating the images which are not. Algorithm 1 describes the procedure to compute the loss \(L_{\text{primary}}\) in single-purpose models, given a single-purpose model \(H_{s}\) after \(t-1\) updates to its weights, and a batch of images \(B_{\text{primary}}\). **Optimizer.** We perform mini-batch gradient descent using the SGD optimizer with weight decay and momentum. While the Yokoo model uses a constant learning rate \(\eta\), we found this approach to lead to instability in training in the last few epochs. Instead, we decrease the learning rate by multiplying it by \(\gamma\) every epoch, with a minimum learning rate set to \(\eta_{min}\). Thus, for epoch \(i\in\{1,2,\ldots,10\}\), the learning rate is given by \(\max(\eta\,\gamma^{i-1},\eta_{min})\). ### Training dual-purpose models We train our dual-purpose models by modifying the single-purpose training procedure with an additional loss term \(L_{\text{secondary}}\) to jointly optimize for both tasks. \(L_{\text{secondary}}\) also uses contrastive loss with cross-batch memory and is computed on a secondary training dataset \(D_{\text{train}}^{\text{secondary}}\). The secondary dataset \(D_{\text{train}}^{\text{secondary}}\) and the training procedure for dual-purpose models are designed to enable the model to learn to identify and match faces of the target individual while ensuring the model does not become a general face detection model, i.e. matching every face to the target individual, or a general face recognition model, i.e. able to identify and match not only the faces of the target individual but also those of non-target individuals. We construct the secondary task dataset \(D_{\text{train}}^{\text{secondary}}\) to contain images from both the target and non-target individuals. \(D_{\text{train}}^{\text{secondary}}\) contains (a) \(N_{\text{train}}^{T}\) images of the target individual \(I_{T}\), denoted by \(X_{\text{train}}^{I_{T}}\), and (b) images from \(N_{\text{train}}^{T^{\prime}}\) non-target individuals \(I_{T^{\prime},\text{train}}=\{I_{T^{\prime},1},...,I_{T^{\prime},N_{\text{train}}^{T^{\prime}}}\}\), denoted by \(X_{\text{train}}^{I_{T^{\prime}}}=\bigcup_{i=1}^{N_{\text{train}}^{T^{\prime}}}X_{i}^{I_{T^{\prime}}}\), where \(X_{i}^{I_{T^{\prime}}}\) denotes the images of individual \(I_{T^{\prime},i}\). Thus, \[D_{\text{train}}^{\text{secondary}}=X_{\text{train}}^{I_{T}}\cup X_{\text{ train}}^{I_{T^{\prime}}} \tag{7}\] As mentioned in the previous section, the contrastive loss clusters together the embeddings of the images that have the same label, while separating the embeddings of the images that have different labels. We thus assign the same label to all the images of the target individual in \(D_{\text{train}}^{\text{secondary}}\), and a different label to each of the images from the non-target individuals, irrespective of whether they come from the same individual. The cross batch memory used to compute the contrastive loss is shared between \(L_{\text{primary}}\) and \(L_{\text{secondary}}\), i.e., it contains embeddings of images from both \(D_{\text{train}}^{\text{secondary}}\) and \(D_{\text{train}}^{\text{primary}}\).
This strategy of dataset creation, image labeling, and sharing the XBM together ensures that the model learns to: 1. Cluster different images of the target individual \(I_{T}\) together, while separating them from images of non-target individuals \(I_{T^{\prime}}\) and from other images in the dataset \(D_{\text{train}}^{\text{primary}}\). The model thus learns to separate the images of the target individual from all the other images while not becoming a general face detector algorithm. 2. Separate the different images of the same non-target individual, ensuring that the model does not become a general face recognition algorithm. **Training procedure.** The dual-purpose model \(H_{d}\) is trained on both \(D_{\text{train}}^{\text{primary}}\) and \(D_{\text{train}}^{\text{secondary}}\). In each iteration, a batch \(B_{\text{primary}}\) consisting of \(b_{\text{primary}}\) images from \(D_{\text{train}}^{\text{primary}}\) is constituted and used to compute \(L_{\text{primary}}\) in the same way as for single-purpose models. In the same iteration, for the secondary task, a batch \(B_{\text{secondary}}\) of size \(b_{\text{secondary}}\) is sampled from \(D_{\text{train}}^{\text{secondary}}\). At the start of each epoch of training, two sets of images are initialized and set to (1) \(X_{\text{train}}^{I_{T}}\), the images of the target individual, and (2) \(X_{\text{train}}^{I_{T^{\prime}}}\), the images of non-target individuals, respectively. The batch \(B_{\text{secondary}}\) is created by sampling images without replacement from the sets of images of target and non-target individuals with probability \(p_{T}\) and \(1-p_{T}\) respectively. Whenever either set becomes empty, it is re-initialized and reused. Two images are generated for each image \(W_{i}\) in \(B_{\text{secondary}}\). If \(W_{i}\) is an image of a non-target individual, then it is treated the same way as an image from \(D_{\text{train}}^{\text{primary}}\). If \(W_{i}\) belongs to the target individual \(I_{T}\), an image \(W_{i}^{\prime}\) is sampled at random from \(X_{\text{train}}^{I_{T}}\) and moderate transformations \(A_{m}\) are applied to each image, resulting in \(A_{m}(W_{i})\) and \(A_{m}(W_{i}^{\prime})\). The two images \(A_{m}(W_{i})\) and \(A_{m}(W_{i}^{\prime})\) serve as positive samples for each other, making the model learn to identify faces of the target individual. The generated images are collated to create a batch \(B_{\text{secondary}}^{\prime}\) of size \(2*b_{\text{secondary}}\). We describe the algorithm to compute \(L_{\text{secondary}}\) for batch \(B_{\text{secondary}}\) in Algorithm 2. While computing \(L_{\text{secondary}}\), the model is put in _evaluation_ mode, freezing the batch normalization layers. The final loss for training dual-purpose algorithms, \(L_{d}\), is computed as a weighted sum of \(L_{\text{primary}}\) and \(L_{\text{secondary}}\) with weight \(w\), i.e., \[L_{d}=(1-w)*L_{\text{primary}}+w*L_{\text{secondary}} \tag{8}\] allowing the model to jointly optimize for both the primary and secondary tasks. We use the same optimizer with the same learning rate schedule as the single-purpose models. The model is saved at the end of each epoch, where one epoch is defined as one complete pass over \(D_{\text{train}}^{\text{primary}}\).
```
1: Inputs:
   B_secondary: batch of size b_secondary sampled from D_train^secondary,
       consisting of images {W_1, ..., W_{b_secondary}} and labels {Z_1, ..., Z_{b_secondary}}
   X_train^{I_T}: images of the target individual I_T used in training
   H_{t-1}: PH algorithm after t-1 updates
   L_c: function computing the contrastive loss with cross-batch memory of size M
   A_m, A_h: moderate and hard augmentations
2: Output: L_secondary, the secondary task loss for the batch B_secondary
3: Initialize: B_X, B_Y <- [W_1, ..., W_{b_secondary}], [Z_1, ..., Z_{b_secondary}]
               B'_X, B'_Y <- [], []
4: H_{t-1}.eval()                          ▷ model in evaluation mode
5: for i in {1, ..., b_secondary} do
6:     if W_i in X_train^{I_T} then        ▷ target individual
7:         W'_i = random.choice(X_train^{I_T})
8:         B'_X.push(H_{t-1}(A_m(W_i)), H_{t-1}(A_m(W'_i)))
9:     else                                ▷ non-target individual
10:        B'_X.push(H_{t-1}(A_m(W_i)), H_{t-1}(A_h(W_i)))
11:    end if
12:    B'_Y.push(Z_i, Z_i)
13: end for
14: L_secondary = L_c(B'_X, B'_Y)
```

**Algorithm 2** \(L_{\text{secondary}}\): Loss for secondary task

## 5 Metrics, validation, and model selection Given a deep perceptual hashing model \(H\), we evaluate its performance on (a) the primary task of image copy detection, (b) the secondary task of face recognition of the target individual \(I_{T}\), as well as on (c) a third task of facial recognition of non-target individuals. For each task, we consider a set of query images and match them against a database of embeddings using the model \(H\). We use the Euclidean distance to compute distances between embeddings. In this section, we describe the metrics used to measure the performance on each task and then present our validation setup and model selection strategy. ### Metrics **Image copy detection (ICD)**. As described in Section 2, we use \(\mu AP\) to measure the performance of \(H\) on the image copy detection task. To compute \(\mu AP\), a set of query images \(Q\) is sent to be matched against a database of reference images \(R\). While \(\mu AP\) provides a single standard number used to evaluate the quality of PH algorithms, it is not easily interpretable. We thus additionally evaluate the precision and recall of the model at a threshold \(T\) that is likely to be used in practice. The threshold \(T\) is based on the validation performance of the model \(H\) on the ICD task, selected such that the model achieves a precision of 90%. **Facial recognition**. We quantify the performance of \(H\) at matching previously unseen images \(Q_{I}\) of an individual \(I\) to previously seen images \(R_{I}\) of \(I\), using the following metrics: 1. Recall: The percentage of images in \(Q_{I}\) that will be detected. 2. False positives per million: The number of images that will be flagged for a million images of individuals other than \(I\) when matched against \(R_{I}\). 3. Precision: The percentage of images of \(I\) among all flagged images. 4. \(F_{1}\)-score: The harmonic mean of Recall and Precision. It provides a single number to compare the performance of \(H\) on facial recognition of \(I\).
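As a small sketch of how these four quantities follow from binary flagging decisions, consider the function below; the array names are illustrative and the flagging rule is the threshold match described in Section 2.

```python
import numpy as np

def facial_recognition_metrics(flagged: np.ndarray, is_target: np.ndarray):
    """Compute recall, precision, false positives per million, and F1.

    flagged: True where a query was flagged, i.e. min_j f(H(q), d_j) < T
        against the reference database R_I.
    is_target: True where the query image actually shows individual I.
    """
    tp = np.sum(flagged & is_target)    # correctly flagged images of I
    fp = np.sum(flagged & ~is_target)   # flagged images of other people
    recall = tp / max(is_target.sum(), 1)
    precision = tp / max(tp + fp, 1)
    fp_per_million = 1e6 * fp / max((~is_target).sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return recall, precision, fp_per_million, f1
```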
To compute these values, we create a set of query images \(Q\), consisting of \(Q_{I}\), i.e., the previously unseen images of individual \(I\), and \(Q_{I^{\prime}}\), i.e., the images of individuals different from \(I\). We match the query images \(Q=Q_{I}\cup Q_{I^{\prime}}\) against the database of previously seen \(N_{\text{train}}^{T}\) images of individual \(I\) using the perceptual hashing algorithm \(H\) and threshold \(T\). To replicate real-world scenarios, the threshold \(T\) is the same as the one used for the evaluation of \(H\) in the ICD task. ### Validation **Image copy detection**. We evaluate the validation performance \(\mu AP_{\text{val}}\) of the models \(H\) on the image copy detection task by matching the set of query images \(Q_{\text{val}}^{\text{primary}}\) against the database of reference images \(R_{\text{val}}^{\text{primary}}\). **Facial recognition.** We evaluate the validation performance of the model \(H\) on the facial recognition task by matching the set of query images \(Q^{\text{secondary}}_{\text{val}}\), consisting of

* \(Q^{\text{secondary}}_{\text{val},T}\), query images of the target individual \(I_{T}\), and
* \(Q^{\text{secondary}}_{\text{val},T^{\prime}}\), query images of \(N^{T^{\prime}}_{\text{val}}\) non-target individuals \(I_{T^{\prime},\text{val}}\),

against \(N^{T^{\prime}}_{\text{val}}+1\) reference databases \(R_{I}\), where \(I\in I_{T^{\prime},\text{val}}\cup\{I_{T}\}\). The \(F_{1}\) score is then measured for each individual \(I\), corresponding to each reference database \(R_{I}\), and is reported as \(F_{1}(\text{val},I)\). The threshold \(T\) used in matching is the same as the one determined from validation on the ICD task. The query set \(Q^{\text{secondary}}_{\text{val}}\) and the \(N^{T^{\prime}}_{\text{val}}+1\) reference databases \(R_{I}\) are constructed as follows: 1. Images for each individual \(I\in I_{T^{\prime},\text{val}}\) are partitioned into two sets - a reference database \(R_{I}\) containing \(N^{T}_{\text{train}}\) images, and a query set \(Q_{I}\) containing all the remaining images of \(I\). 2. The query sets \(Q_{I}\ \forall I\in I_{T^{\prime},\text{val}}\) are combined to construct \(Q^{\text{secondary}}_{\text{val},T^{\prime}}\). 3. The query set \(Q^{\text{secondary}}_{\text{val},T}\) is constructed using \(N^{T}_{\text{val}}\) images of \(I_{T}\), while \(R_{I_{T}}\) consists of all images of \(I_{T}\) used in training. ### Best model selection **Single-purpose models**. The best model is selected based on the \(\mu AP_{\text{val}}\) on the validation set of query and reference images. **Dual-purpose models**. The best model is selected based on the \(\mu AP_{\text{val}}\) and \(F_{1}(\text{val},I_{T})\). Each saved model is given a score \(s=\mu AP_{\text{val}}+0.1*F_{1}(\text{val},I_{T})\), and the model with the best score is selected for testing. ## 6 Datasets For the primary purpose, we use the Image Similarity Challenge 2021 dataset (DISC21) [27] and, for the secondary purpose, the VGGFace2 dataset [22]. In this section, we describe the datasets and their preprocessing steps. ### DISC21 The DISC21 dataset was released by Facebook for the 2021 Image Similarity Challenge. It is, to the best of our knowledge, the largest publicly available dataset for the image copy detection task. We describe the 3 sets of images from the DISC21 dataset that we use either in training, validation, or testing below: 1.
_DISC21 Training set_, consisting of 1M images, is used as \(D^{\text{primary}}_{\text{train}}\) for training \(H\) on the primary task. 2. _DISC21 Reference set_, denoted by \(R^{\text{primary}}\), consists of 1M images that are used as a reference database in matching. We find that evaluating the performance of each saved model against the complete reference database is computationally expensive, so, instead, for validation we set \(R^{\text{primary}}_{\text{val}}\) to contain 100K images from \(R^{\text{primary}}\). For testing, the complete database \(R^{\text{primary}}\) is used. Both the DISC21 reference set and the training set are sampled from the same distribution and are mutually exclusive. 3. _DISC21 Development set_ consists of 50K query images to be used for matching against the reference database. The ground truth for the 50K query images was released in two parts - first for 25K images, and then for the remaining 25K images. We use the former set of 25K query images for validation (\(Q^{\text{primary}}_{\text{val}}\)), and the latter for testing (\(Q^{\text{primary}}_{\text{test}}\)). Both query sets are organized such that approximately 20% of images in each set have a valid match in the reference database, while the remaining images have no match. ### VGGFace2 The VGGFace2 dataset contains a total of 3,311,286 images for 9,131 individuals. We choose VGGFace2 over recent large-scale facial recognition datasets like WebFace260m [74] because 1) VGGFace2 provides more diverse images per individual, in terms of angles and lighting, than other datasets, and 2) images in VGGFace2 are more realistic, e.g. with background, while WebFace260m contains heavily cropped faces of individuals (see Fig. 3 in Appendix E). On manual inspection, we found that the VGGFace2 dataset contained both a few mislabeled images for each individual [72] and duplicate images, i.e. the same image but slightly modified, or images of the same person taken within a very short interval. Having mislabeled images would lead to an incorrect evaluation of model performance for facial recognition. Having duplicates in the dataset would lead to data leakage if similar images are sampled in both train and test. We thus clean the dataset by removing mislabeled images and duplicates from the set of images for each individual in the dataset. **Mislabeled image detection.** We define mislabeled images to be the images of an individual that do not show the right person, but also cases where the image is a cartoon of the person or where no face is detected. To detect mislabeled images, we use the InsightFace library, and more specifically the RetinaFace-10G [25] model for face detection and a ResNet50 trained on WebFace600K for facial recognition [31]. For each image, the face detector detects all the faces in the image and the facial recognition algorithm then generates an embedding of each face. We denote by \(F\) the combined face detection and recognition algorithm that takes an input image and returns an embedding of length 512 for each detected face in the image. \(F\) returns an empty list when no face is detected in the image. Cosine distance (1 - cosine similarity) is used to measure the distance between the generated embeddings. We detect the mislabeled images among the images \(X^{I}\) tagged as individual \(I\) in two steps: 1. **Create a base embedding** for \(I\). To create a base embedding, we consider all the images from \(X^{I}\) where only 1 face is detected.
The base embedding is then calculated as an average of the \(L_{2}\) normalized embeddings from the selected images. The images with multiple faces are not used when creating the base embedding, as we do not know which face belongs to individual \(I\). All the images in which no face is detected are removed from \(X^{I}\).

2. **Detect mislabeled images**. We consider all remaining images in \(X^{I}\) and, for each image, compute the cosine distance between each face in the image and the base embedding. If this cosine distance is larger than a predefined threshold \(T_{\text{mis}}\), the face is marked as mislabeled. An image is marked as mislabeled and removed from \(X^{I}\) if all faces in that image are marked as mislabeled.

Our strategy is based on the assumption that only a small proportion of images are mislabeled and that there are many images per individual. An average embedding of the images thus provides a good estimate of the embedding of the actual face of individual \(I\). Our method of combining the embeddings to generate a base embedding and matching images against it is similar to the standard facial recognition setup [33]. For reproducibility, we provide the complete algorithm for mislabeled image detection in Appendix D. We set a threshold of \(T_{\text{mis}}=0.6\), resulting in 28,561 (0.86%) images where no face is detected by the face detection algorithm and 121,292 (3.66%) images marked as mislabeled. These images were removed from the dataset.

**Duplicate image detection.** We define two images to be duplicates if they are the same or appear to have been captured within a short interval of each other. To remove duplicates, we use SSCD [51], a state-of-the-art deep perceptual hashing algorithm. The SSCD model is based on the ResNet50 architecture, takes as input an image of size \(288\times 288\), and returns an \(L_{2}\) normalized embedding of size 512. To detect duplicates, we consider the set of images \(X^{I}\) for an individual \(I\) remaining after removing mislabeled images. For each image \(X_{i}\) in \(X^{I}\), we compute the embeddings using the SSCD algorithm. We mark an image \(X_{j}\) in \(X^{I}-\{X_{i}\}\) as a duplicate of image \(X_{i}\) if the distance between the embeddings of \(X_{i}\) and \(X_{j}\) is less than a predefined threshold \(T_{\text{dup}}\). The \(L_{2}\) distance is used to compute similarity. We describe the algorithm in detail in Algorithm 4. The images are sorted in alphabetical order for reproducibility reasons (Line 4). We use a \(T_{\text{dup}}\) of \(1.0\), resulting in 658,715 (19.89%) images being marked as duplicates and removed from the dataset.

The cleaned VGGFace2 dataset has 2,502,718 images across 9,131 individuals, compared to 3,311,286 images in the original VGGFace2 dataset. We provide additional analysis of the cleaned and original datasets in Appendix E. In order to keep only those individuals for which we have enough data for training, validation, and testing, we remove individuals with fewer than 150 images. After cleaning, our dataset consists of 2,428,033 images of 8,514 individuals. We provide the list of images in our cleaned VGGFace2 dataset at this url2. From now on, we refer to this cleaned VGGFace2 dataset as the VGGFace2 dataset.

Footnote 2: [https://github.com/computationalprivacy/dual-purpose-client-side-scanning](https://github.com/computationalprivacy/dual-purpose-client-side-scanning)
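The two cleaning steps can be condensed into a short sketch. The callables `face_embed` (one \(L_{2}\)-normalized embedding per detected face) and `sscd_embed` (one SSCD embedding per image) are assumed wrappers around the models named above, and the greedy deduplication loop is a simplification of Algorithm 4, not its exact code:

```python
import numpy as np

T_MIS, T_DUP = 0.6, 1.0  # thresholds reported in the text

def clean_individual(images, face_embed, sscd_embed):
    """Sketch of mislabeled-image removal and deduplication for one individual."""
    # Step 1: base embedding from images with exactly one detected face
    # (assumes at least one such image exists for the individual).
    singles = [e[0] for img in images if len(e := face_embed(img)) == 1]
    base = np.mean(singles, axis=0)
    base /= np.linalg.norm(base)

    # Step 2: keep an image iff some face is close (cosine distance) to base;
    # images with no detected face are dropped as well.
    kept = []
    for img in images:
        faces = face_embed(img)  # recomputed for brevity
        if faces and any(1.0 - f @ base <= T_MIS for f in faces):
            kept.append(img)

    # Step 3: greedy duplicate removal with SSCD embeddings (L2 distance),
    # iterating in a deterministic (alphabetical) order.
    deduped, embs = [], []
    for img in sorted(kept, key=str):
        e = sscd_embed(img)
        if all(np.linalg.norm(e - prev) >= T_DUP for prev in embs):
            deduped.append(img)
            embs.append(e)
    return deduped
```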
**Dataset partitioning.** We sample a target individual \(I_{T}\) from the VGGFace2 dataset. To design a dual-purpose model for detecting \(I_{T}\), we partition the VGGFace2 dataset into train, validation, and test sets in the following steps: 1) the target individual \(I_{T}\) is removed from the VGGFace2 dataset, and the remaining individuals in the dataset are considered non-target individuals. 2) For the target individual \(I_{T}\), \(N_{\text{train}}^{T}\) images are sampled to constitute \(X_{\text{train}}^{I_{T}}\), used for training the model, \(N_{\text{val}}^{T}\) are sampled to constitute \(X_{\text{val}}^{I_{T}}\), used in validation queries, and the rest of the images, \(X_{\text{test}}^{I_{T}}\), are used in testing. 3) Non-target individuals are partitioned into \(N_{\text{train}}^{T^{\prime}}\) individuals for training, \(N_{\text{val}}^{T^{\prime}}\) for validation, and \(N_{\text{test}}^{T^{\prime}}\) individuals for testing.

## 7 Results

We train 10 single-purpose models \(H_{s}\), each with a different seed. For dual-purpose models \(H_{d}\), we select 10 target individuals randomly from the cleaned VGGFace2 dataset using a seed that provides a diverse group of target individuals. For each target individual, we partition the dataset into train, validation, and test as defined in Section 6.2. We train 10 dual-purpose models, 1 on each target. We use the same hyperparameters and seed across all dual-purpose models. For some targets, we find the model fails during training (i.e., gives NaN loss). To maintain comparability across models, instead of changing hyperparameters like the learning rate, we restart the training of the failed models with another seed. The complete implementation details and hyperparameter choices of both single-purpose and dual-purpose models are described in Appendix C. We report the median and interquartile range of the evaluations across the 10 single-purpose and dual-purpose models. For each model \(H\), the threshold \(T\) for testing is computed from validation performance as described in Section 5.1.

### _Image copy detection performance_

As described in Section 6.1, we use the query set \(Q_{\text{test}}^{\text{primary}}\), matched against the reference database \(R^{\text{primary}}\), to evaluate the performance of models on the image copy detection task. Table 1 shows the median performance of single-purpose and dual-purpose models on the primary task of image copy detection. Our dual-purpose models perform as well as the single-purpose models. These results show that our training procedure, jointly optimizing for both the primary and secondary tasks, does not impact the performance of the models on the primary task. More specifically, the dual-purpose models achieve a median \(\mu AP\) of 59.2 with an IQR of 0.5, while the single-purpose models achieve a \(\mu AP\) of 59.4 with an IQR of 0.4. The single-purpose and dual-purpose models also have similar median precision and recall.
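The threshold-selection rule used above (see Section 5.1) can be sketched as a sweep over validation distances: sort query-to-nearest-reference distances and take the largest cutoff whose running precision stays above the target. The vectorized search below is our own reading of that rule, not the released code:

```python
import numpy as np

def pick_threshold(distances, is_match, target_precision=0.90):
    """Largest distance threshold keeping validation precision above target.

    `distances[i]` is the distance from validation query i to its nearest
    reference; `is_match[i]` says whether that pair is a true copy.
    """
    order = np.argsort(distances)
    tp = np.cumsum(is_match[order])            # true positives per prefix
    flagged = np.arange(1, len(order) + 1)     # total flagged per prefix
    precision = tp / flagged
    ok = np.where(precision >= target_precision)[0]
    if len(ok) == 0:
        return distances[order][0]             # degenerate: flag almost nothing
    return distances[order][ok[-1]]
```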
### _Facial recognition performance_

We measure the ability of our dual-purpose perceptual hashing algorithm to match new pictures of an individual \(I\) to a database of reference images of \(I\), for both target and non-target individuals. To do this, we construct a set of query images \(Q^{\text{secondary}}_{\text{test}}\) using images of \(I_{T}\) as well as images of \(N^{T^{\prime}}_{\text{test}}\) non-target individuals \(I_{T^{\prime},\text{test}}\), in the same way as \(Q^{\text{secondary}}_{\text{val}}\) (see Section 5.2). Now, however, query images for the target individual \(I_{T}\) come from \(X^{I_{T}}_{\text{test}}\) instead of \(X^{I_{T}}_{\text{val}}\), and for non-target individuals we use images of the \(N^{T^{\prime}}_{\text{test}}\) individuals in the test partition of the VGGFace2 dataset (see Section 6.2). As described in Section 6.2, for a dual-purpose model trained to identify \(I_{T}\), images of \(I_{T}\) and of the non-target individuals \(I_{T^{\prime}}\) are partitioned into train, validation, and test. The facial recognition performance of each dual-purpose model is then evaluated on test images of the target individual \(I_{T}\) it is trained to identify and on \(N^{T^{\prime}}_{\text{test}}\) non-target individuals. We aggregate the performance of the 10 dual-purpose models, which results in: 1) 10 values corresponding to the performance of dual-purpose models on target individuals, and 2) \(10\times N^{T^{\prime}}_{\text{test}}\), or 73,130, values corresponding to the performance of dual-purpose models on non-target individuals.

We measure the facial recognition performance of single-purpose models using a similar setup as for dual-purpose models. But, unlike dual-purpose models, single-purpose models are not trained using any individual's images from the VGGFace2 dataset. We thus use the complete VGGFace2 dataset for testing their performance instead of the partitions of the VGGFace2 dataset. To evaluate the facial recognition performance of \(H_{s}\) on an individual \(I\) in the VGGFace2 dataset, the query set \(Q^{\text{secondary}}\) is matched against a reference database \(R_{I}\) consisting of images of individual \(I\). Each single-purpose model is thus evaluated on all 10 target individuals and the remaining 8,504 non-target individuals. \(Q^{\text{secondary}}\) and the \(R_{I}\)s are constructed as follows: 1) for each of the 10 target individuals \(I_{T}\), the corresponding test images \(X^{I_{T}}_{\text{test}}\) are added to the query set, while \(R_{I_{T}}\) consists of the images in \(X^{I_{T}}_{\text{train}}\), and 2) for each non-target individual \(I\), \(N^{T^{\prime}}_{\text{train}}\) images are selected to create \(R_{I}\), and the rest of the images are added to \(Q^{\text{secondary}}\). For both dual-purpose and single-purpose models, the non-target individuals are partitioned with the same seed to ensure empirical results are directly comparable between evaluations. We aggregate the performance of the 10 single-purpose models, resulting in: 1) 100 values corresponding to the performance of single-purpose models on target individuals, and 2) \(10\times 8{,}504\), or 85,040, values corresponding to the performance of single-purpose models on non-target individuals.

We hypothesize this slight difference to be due to dual-purpose models being trained explicitly to separate the faces belonging to the same non-target individual. This leads to slightly lower \(F_{1}\) and recall values, as well as fewer FPs. This decrease in FPs leads to an increase in precision for non-target individuals (but with low recall).

## 8 Discussion

**Dual-purpose client-side scanning (CSS) system.** The deep perceptual hashing algorithm is only one part of the CSS system. As mentioned in Sec. 2, a CSS system consists of a perceptual hashing algorithm \(H\), a database \(D\) of hashes of illegal images \(R\), and a threshold \(T\). For the system to start flagging unseen pictures of targeted individuals, we need to add the additional images \(R^{\prime}\) to \(R\) for their hashes to be computed.
Simply adding all 100 training images \(X_{\text{train}}^{I_{T}}\) of the target individual \(I_{T}\) to \(R\) might not go undetected and would expose the secondary purpose of the system. We thus propose to construct \(R^{\prime}\), the set of template images, such that a) all images in \(R^{\prime}\) look like illegal images to an auditor, and b) only a few of them need to be added. To do this, we first decrease the number of images that need to be added to \(R\) by clustering the hashes of the images in \(X_{\text{train}}^{I_{T}}\) into \(k\) clusters. Then, for each of the \(k\) cluster centroid hashes \(h\), we modify an existing illegal image with a small perturbation so that its hash is similar to \(h\) while the image remains visually similar to the original image.

We use the \(k\)-means algorithm to cluster the hashes of the images in \(X_{\text{train}}^{I_{T}}\) into \(k\) clusters. We measure the performance of detecting the target individual \(I_{T}\) using the \(F_{1}\) score when \(k\) hashes are used. We vary \(k\) from 100 templates, i.e., using all the training images, to 1 template, i.e., using an average of the hashes of all images in \(X_{\text{train}}^{I_{T}}\). We run the experiment for all 10 targets. Fig. 1 shows that the performance of our dual-purpose models barely decreases when we decrease the number of templates from 100 to 1, meaning that only one template is often sufficient for the system to work. We call this template the _template hash_. We hypothesize this result to be due to the images in \(X_{\text{train}}^{I_{T}}\) being densely clustered together in the hash space, confirming that our contrastive loss for the dual purpose is working as intended.

To add the resulting template hash to the database of hashes \(D\), we need to modify an illegal image such that the modified image looks visually similar to the original image while its hash is similar to the template hash. To do so, we select, from our small database of illegal images, the image whose hash is closest to the template hash. Using the collision attack described in Struppek et al. [60], we minimize the distance between the hash of the modified image and the given template hash while keeping maximal visual similarity between the original and modified images. To demonstrate the collision attack, we use the Stanford Dogs dataset [37] as the set of images that the attacker possesses and that would be illegal. We use the same parameters for the attack as Struppek et al. [60], except for the number of iterations, which we reduce from 10000 to 5000, and the scaling factor for the visual loss, which we set to 1. Fig. 2 shows an example of the result of the collision attack for a target individual. The modified illegal image is visually similar to the original image while its hash is equal to the template hash. We obtain similar results for the template hashes of all 10 targets, with a median visual similarity of 0.98 \(\pm\) 0.01 measured using the structural similarity index measure (SSIM) [66]. SSIM is a commonly used metric that uses distortion in structural information to measure the similarity between two images. SSIM \(\in[0,1]\), with larger values indicating higher similarity.
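For concreteness, computing the template hash(es) from the target's training hashes could look like the following sketch, where `train_hashes` is a hypothetical \((n,d)\) array of \(L_{2}\)-normalized hashes; with \(k=1\) it reduces to the single _template hash_ discussed above:

```python
import numpy as np
from sklearn.cluster import KMeans

def template_hashes(train_hashes, k=1):
    """Compress the target's n training hashes into k template hashes.

    Sketch only: with k=1 this is the mean hash; otherwise k-means centroids.
    """
    if k == 1:
        t = train_hashes.mean(axis=0, keepdims=True)
    else:
        t = KMeans(n_clusters=k, n_init=10).fit(train_hashes).cluster_centers_
    return t / np.linalg.norm(t, axis=1, keepdims=True)  # re-normalize
```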
We evaluate the facial recognition performance when matching the images to the hash of the modified illegal image. We obtain the same results as in our original experiment (Fig. 5), an \(F_{1}\) score of 51.2 \(\pm\) 35 and recall of 67.2 \(\pm\) 18.3, showing that indeed the hash of the modified illegal image is very similar to the template hash.

Figure 1: Change in performance on facial recognition of target individuals for dual-purpose models when \(k\) templates are added to the database, for a varying \(k\).

Figure 2: Result of collision attack. The image on the right is modified so that its hash is close to one of the images of the target individual while remaining visually similar to the original image, which we here suppose to be illegal.

**Impact of increase in false positives (FP).** Table 2 suggests that the number of FPs per million reported by dual-purpose (DP) models might be larger than for single-purpose (SP) models. If a CSS system flags too many FPs, this might lead auditors, or companies compelled to install the algorithm, to raise concerns. To assess this, however, we need to consider the number of FPs reported by the complete CSS system, as this is what might be available to an auditor or the company, e.g., through mandatory reporting of flagged images. The complete CSS system is composed of a PH algorithm \(H\), a reference set \(R\), and a threshold \(T\). The single-purpose CSS system consists of an SP model \(H_{s}\), a reference set of illegal images \(R\), and a corresponding threshold \(T_{s}\), while the dual-purpose CSS system would consist of a DP model \(H_{d}\), a reference set \(R_{d}=R\cup R^{\prime}\) where \(R^{\prime}=X_{\text{train}}^{I_{T}}\), and a corresponding threshold \(T_{d}\).

The number of FPs flagged per million is, as expected, strongly influenced by the size of the reference set \(R\), increasing with the database size [34]. The reference database in the field, NCMEC, has been reported to contain more than 5 million unique hashes of illegal content [5]. We do not have access to 5 million images but, to replicate the real-world scenario as closely as possible, we here assume \(R\) to be \(R^{\text{primary}}\), i.e., the _DISC21 reference set_, and use ImageNet [55] as the set of non-illegal, non-target query images the system would observe. Images from the train, validation, and test sets of ImageNet are combined, resulting in 1,431,093 images, with \(\sim 17\%\) of images containing faces [69].

We evaluate the number of FPs flagged by SP and DP models against both reference sets: \(R^{\text{primary}}\), the single-purpose case, and \(R_{d}=R^{\text{primary}}\cup R^{\prime}\), the dual-purpose case. We have 10 sets of \(R^{\prime}\), i.e., \(R^{\prime}_{i}\), and thus 10 sets of \(R_{d,i}\), corresponding to each target individual \(I_{T,i}\). False positives are evaluated for the 10 single-purpose and 10 dual-purpose models on \(R\). Each single-purpose model is evaluated on all 10 dual-purpose databases \(R_{d,i}\), while the dual-purpose model for target \(I_{T,i}\) is evaluated only against its corresponding reference set \(R_{d,i}\). Table 4 shows that the numbers of FPs flagged per million images by single-purpose and dual-purpose models, each against their reference sets (\(R\) for single purpose and \(R_{d}\) for dual purpose), are extremely similar: \(2873.2\pm 262.0\) for the single purpose and \(2774.3\pm 575.2\) for the dual purpose. This shows that neither the change of model nor the addition of the templates to the database visibly impacts the number of FPs and therefore the total number of images reported.
Our dual-purpose system would thus not generate more FPs, thereby not raising suspicions. We provide additional results on FPs with different thresholds and setups in Appendix G. Finally, we note that the acceptable number of FPs in the context of client-side scanning is an open question that would mostly depend on what is acceptable from a political perspective (e.g., the current UK Online Safety Bill [28]).

**Impact of LSH.** The performance on the primary task decreases when locality-sensitive hashing (LSH) is applied. This is expected, as LSH converts a vector of 256 floats to a binary hash of 256 bits, leading to a significant loss of information. LSH, however, does not impact the ability of our dual-purpose models to identify target individuals. It achieves an \(F_{1}\) score of 52.3, compared to 51.2 without LSH, and is still not a general facial recognition algorithm (low \(F_{1}\) on non-target individuals). This shows LSH does not impact the capacity of dual-purpose models to perform facial recognition.

**Impact of threshold \(T\)**. We have so far assumed the threshold \(T\) is selected based on validation performance, where the model achieves a precision of 90% on the image copy detection task. Instead, a lower threshold could be chosen, e.g., to limit the number of FPs. We here evaluate the model performance in the test setup at thresholds corresponding to precisions of 95% and 99% on the validation set for the primary task. Note that we do not report \(\mu AP\) for the image copy detection task in Table 7, as \(\mu AP\) is independent of the chosen threshold and, hence, remains the same for all the models. Table 7 shows that all our results hold even when a lower threshold is selected. The single-purpose and dual-purpose models have similar performance on the image copy detection task (\(ICD\)), and the dual-purpose models continue to reliably detect and match images of target individuals (\(F_{1}\) score for facial recognition on target individuals (\(FR_{T}\))).

**Impact of number of images in training**. Our dual-purpose models are trained using \(N^{T}_{\text{train}}\) images of the target individual \(I_{T}\). In practice, for highly secretive individuals, the attacker might not have 100 images available. We here explore the impact of \(N^{T}_{\text{train}}\) on the facial recognition performance of dual-purpose models on the target individual. To do so, we train the dual-purpose models with the same parameters as before, except for the number of training samples available for the target individual. We select the best model with the same strategy and evaluate it in the test setup using the \(F_{1}\) score. For evaluation, we use the same query set \(Q^{\text{secondary}}_{\text{test}}\) as defined in Section 7.2. Table 8 shows that decreasing the number of training images \(N^{T}_{\text{train}}\) from 100 to 75, and even to 50, only slightly impacts the capabilities of the model. When very few images are available, fewer than 50, the \(F_{1}\) score decreases sharply. Fewer images of the target individual \(I_{T}\) in training indeed reduce the diversity of images available for the model to learn from, preventing, in some cases, the model from learning a robust representation of the target individual. More advanced techniques might, however, be available in such cases [62].

**Single model for all 10 targets**. So far, our dual-purpose models have focused on identifying 1 target individual \(I_{T}\) per model. In practice, one might want a model to identify several people, e.g., the FBI's 10 most wanted fugitives [9].
We evaluate the ability of our approach to search for multiple people at once by training a single dual-purpose model to identify all 10 target individuals. We use the same procedure and hyperparameters as used for the dual-purpose models with 1 target. For the training batch \(B_{\text{secondary}}\), we now sample an image from every target individual \(I_{T,i}\) with probability \(p_{T}\). Images from non-target individuals are thus sampled with probability \(1-10\,p_{T}\). We evaluate the performance of the resulting dual-purpose model on the image copy detection task and on facial recognition for all 10 targets and for non-target individuals. Table 9 shows that having the model perform facial recognition on 10 people at the same time is possible, although at a slight cost in performance. The median \(F_{1}\) score decreases from 51.2% to 46%, while recall, the ability of the model to flag a new unknown picture of a target individual, decreases from 67.2% to 51.3%. Importantly though, this does not impact its performance on the primary task, a \(\mu AP\) of 59.7, nor does it turn the model into a general facial recognition algorithm, an \(F_{1}\) of 0.5 on non-target individuals. We hypothesize that this slight decrease in the facial recognition performance on target individuals is due to the fairly limited capacity of the model and the relative simplicity of the training procedure. More complex models might have the ability to encode facial recognition capabilities for more targets at once, and a more sophisticated training procedure might help encode information about more individuals even in small models.

**Implications.** Our research shows that a PH-based client-side scanning system can be designed to provide state-of-the-art performance on the primary task of image copy detection while also having a hidden secondary purpose of identifying target individual(s). We here exploit the typical expressivity and overparametrization of deep learning with a novel training strategy to learn the secondary task without impacting the performance on the primary task. Our results could likely be further improved with the extensive computational power available to a government attacker, e.g., by performing an extensive search over hyperparameters to train models with improved results on the secondary task. Our results are therefore only a lower bound on the performance a dual-purpose deep perceptual hashing algorithm can achieve. Taken together, our results raise serious concerns that client-side scanning systems could be deployed with a "hidden" feature of identifying target individuals, thus using billions of user devices as surveillance tools.

## 9 Related work

**Attacks on perceptual hashing-based client-side scanning (PH-CSS).** PH-CSS systems have repeatedly been shown to be vulnerable to both detection avoidance (evasion) and collision attacks. In _detection avoidance_ (DA) attacks, the goal of the attacker is to modify an image with an imperceptible perturbation to avoid detection [32, 34, 52, 60, 68]. DA attacks would allow a perpetrator to share illegal images without being detected. In _collision attacks_, images could be imperceptibly modified so that the hash of the modified image is similar to the hash of an illegal image in the database [26, 52, 60]. Collision attacks could be used to get an innocent account flagged. Abelson et al. [15] provide a detailed overview of PH-CSS and existing concerns. In this paper, we introduce a new, previously unknown vulnerability.
More specifically, we (1) demonstrate that the perceptual hashing algorithm does not inherently have facial recognition capabilities, (2) formalize a realistic threat model where a perceptual hashing algorithm has a hidden feature of facial recognition of target individuals, (3) describe explicitly how the hidden feature could be added to the algorithm, through a specialized training procedure, without building a general facial recognition capability into it, and (4) demonstrate its effectiveness.

**Dataset poisoning and backdoor attacks.** In dataset poisoning, the aim of the attacker is to manipulate the training data with the intention of causing models to fail during inference [57, 58]. Dataset poisoning is commonly used in backdoor attacks, where the aim of the attacker is to hide an attacker-specific trigger in the model. When an input is presented to the model with the trigger, the model behaves in a predefined way for those inputs, e.g., misclassifying the input to a specific class [24, 41, 42, 56]. While dataset poisoning and backdoor attacks are designed to make models fail at inference time, e.g., misclassifying new data, our attack aims to build a secondary hidden feature into a model.

**Other perceptual hashing algorithms.** In this work, we add a secondary feature to the Yokoo model [70], a deep perceptual hashing (PH) algorithm trained with a contrastive loss and cross-batch memory [65]. The Yokoo model won the ISC 2021 challenge to build models for the image copy detection task [48]. Other deep PH algorithms exist [46, 51, 64], including SSCD [51], which uses the SimCLR loss [23], a variant for contrastive learning, among other models. In our setup, the attacker has the ability to select the model and training procedure. We believe our results could be extended to other models, training procedures, and loss functions. Apart from deep PH algorithms, there exist other non-learning-based (so-called shallow) PH algorithms, like PDQ [36] and pHash [71], that create the hash by extracting image features using predefined feature extractors like discrete cosine transforms [16]. We believe that embedding a "hidden" secondary purpose would be more difficult in a shallow hashing algorithm, but not impossible. It would, e.g., require an attacker to train a model using a procedure like ours before "reducing" it to a shallow algorithm, e.g., through pruning. Deep PH algorithms have furthermore been strongly outperforming shallow hashing algorithms, e.g., in Facebook's Image Similarity Challenge [48]. This means that while switching from deep PH to shallow PH might make the attack more difficult, this would come at a cost to the performance on the primary task and potentially millions more false positive images being decrypted.
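For contrast with the deep models studied here, a shallow, DCT-based hash in the spirit of pHash [71] fits in a few lines. The sketch below follows the commonly published recipe (resize, 2-D DCT, keep low frequencies, binarize at the median) and may differ in details from any specific library:

```python
import numpy as np
from scipy.fftpack import dct
from PIL import Image

def phash(path, hash_size=8, highfreq_factor=4):
    """pHash-style 64-bit perceptual hash of the image at `path` (sketch)."""
    n = hash_size * highfreq_factor
    img = Image.open(path).convert("L").resize((n, n))   # grayscale, fixed size
    pixels = np.asarray(img, dtype=np.float64)
    coeffs = dct(dct(pixels, axis=0), axis=1)            # 2-D DCT
    low = coeffs[:hash_size, :hash_size]                 # low-frequency block
    return (low > np.median(low)).flatten()              # binary hash
```

Such a hash is compared with the Hamming distance; unlike the deep models above, there is no learned representation in which a hidden secondary task could straightforwardly be encoded.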
2308.07432
Wide-Area Geolocalization with a Limited Field of View Camera in Challenging Urban Environments
Cross-view geolocalization, a supplement or replacement for GPS, localizes an agent within a search area by matching ground-view images to overhead images. Significant progress has been made assuming a panoramic ground camera. Panoramic cameras' high complexity and cost make non-panoramic cameras more widely applicable, but also more challenging since they yield less scene overlap between ground and overhead images. This paper presents Restricted FOV Wide-Area Geolocalization (ReWAG), a cross-view geolocalization approach that combines a neural network and particle filter to globally localize a mobile agent with only odometry and a non-panoramic camera. ReWAG creates pose-aware embeddings and provides a strategy to incorporate particle pose into the Siamese network, improving localization accuracy by a factor of 100 compared to a vision transformer baseline. This extended work also presents ReWAG*, which improves upon ReWAG's generalization ability in previously unseen environments. ReWAG* repeatedly converges accurately on a dataset of images we have collected in Boston with a 72 degree field of view (FOV) camera, a location and FOV that ReWAG* was not trained on.
Lena M. Downes, Ted J. Steiner, Rebecca L. Russell, Jonathan P. How
2023-08-14T19:57:23Z
http://arxiv.org/abs/2308.07432v1
# Wide-Area Geolocalization with a Limited Field of View Camera in Challenging Urban Environments

###### Abstract

Cross-view geolocalization, a supplement or replacement for GPS, localizes an agent within a search area by matching ground-view images to overhead images. Significant progress has been made assuming a panoramic ground camera. Panoramic cameras' high complexity and cost make non-panoramic cameras more widely applicable, but also more challenging since they yield less scene overlap between ground and overhead images. This paper presents Restricted FOV Wide-Area Geolocalization (ReWAG), a cross-view geolocalization approach that combines a neural network and particle filter to globally localize a mobile agent with only odometry and a non-panoramic camera. ReWAG creates pose-aware embeddings and provides a strategy to incorporate particle pose into the Siamese network, improving localization accuracy by a factor of 100 compared to a vision transformer baseline. This extended work also presents ReWAG*, which improves upon ReWAG's generalization ability in previously unseen environments. ReWAG* repeatedly converges accurately on a dataset of images we have collected in Boston with a 72\({}^{\circ}\) field of view (FOV) camera, a location and FOV that ReWAG* was not trained on.

## I Introduction

GPS is an external system for localization that is susceptible to failure through jamming, spoofing, and signal dropout due to dense foliage or urban canyons. Cross-view geolocalization [1, 2, 3, 4, 5] is a localization method that only requires images from a ground-view camera and preexisting overhead imagery, with or without GPS measurements. Cross-view geolocalization measures the similarity between a ground image and all of the satellite images in a search area to determine the location that the ground image was taken from (see Fig. 1). Satellite imagery at some resolution is widely available for most of the planet, even in forests and urban areas where GPS signals can be weaker.

Cross-view geolocalization with ground and overhead images is challenging due to the wide difference in viewpoints. Many existing works [5, 6, 7, 8] rely upon the use of a panoramic ground camera because it effectively decreases the problem dimensionality by reducing the impact of heading. Although a panoramic ground camera can have different headings, the heading only affects the alignment of the image, not the content, whereas the heading of a limited field of view (FOV) camera affects the visible content of the image. The use of panoramic ground cameras also simplifies the problem by maintaining as much semantic similarity as possible between the two viewpoints--overhead images show 360\({}^{\circ}\) of the surroundings of a ground agent, as do panoramic ground cameras. However, in practice panoramic cameras are rarely used due to their high monetary cost (resulting in lower availability) and the difficulty of mounting them without occlusion. As a result, few real-world systems can benefit from panoramic-based localization. Widespread adoption of cross-view geolocalization technology will require its applicability to platforms without panoramic imaging capabilities.

Most recent work on cross-view geolocalization takes a deep learning approach to the problem by using Siamese networks [3, 4, 5, 6, 7, 9]. A Siamese network consists of a pair of neural networks with matching architectures that simultaneously learn embedding schemes for ground and overhead images.
The Siamese network is trained so that images taken in similar locations are close together in embedding space, and images taken at different locations are embedded far apart. The accuracy of the Siamese networks can be further improved by integrating their measurements over time with a particle filter [4, 5, 6, 10, 11]. However, existing particle filter geolocalization works are highly constrained - requiring some level of GPS data [6, 10], a 180\({}^{\circ}\) (or greater) FOV of the ground camera [4, 5, 6, 10, 11], perfect initial location knowledge [5], or a search area of less than 2.5 km\({}^{2}\) [4, 5]. These additional constraints are needed because the algorithms cannot efficiently geolocalize a limited FOV camera across a large search area. Localization with a limited FOV camera increases both the difficulty of the problem and the computational requirements.

**Our contribution.** Our approach, Restricted FOV Wide-Area Geolocalization (ReWAG), builds upon our previous work in [11] to enable efficient wide-area geolocalization with a restricted FOV camera through two key changes: the creation of pose-aware embeddings and a strategy to incorporate particle pose into the Siamese network.

Fig. 1: ReWAG is a cross-view geolocalization system that takes in a series of non-panoramic ground-view camera images and satellite imagery of the search area to accurately localize the agent on a search area scale that was not possible with previous works.
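The pairing of Siamese embedding similarity with a particle filter described above admits a compact, generic sketch of one measurement update. The dot-product similarity, softmax-style weighting, temperature, and resampling rule below are illustrative choices under our own assumptions, not ReWAG's actual equations:

```python
import numpy as np

def pf_update(particles, weights, ground_emb, sat_emb_at, tau=10.0):
    """Generic particle-filter measurement update for cross-view geolocalization.

    `particles` is (n, 3) with (x, y, heading); `sat_emb_at(pose)` is an
    assumed callable returning the overhead-image embedding at a pose;
    `ground_emb` is the current ground-camera embedding.
    """
    sims = np.array([ground_emb @ sat_emb_at(p) for p in particles])
    weights = weights * np.exp(tau * (sims - sims.max()))  # likelihood ~ similarity
    weights /= weights.sum()

    # Systematic resampling when the effective sample size collapses.
    n = len(weights)
    if 1.0 / np.sum(weights**2) < 0.5 * n:
        u = (np.random.rand() + np.arange(n)) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), u), n - 1)
        particles = particles[idx]
        weights = np.full(n, 1.0 / n)
    return particles, weights
```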
2301.04430
Network Adaptive Federated Learning: Congestion and Lossy Compression
In order to achieve the dual goals of privacy and learning across distributed data, Federated Learning (FL) systems rely on frequent exchanges of large files (model updates) between a set of clients and the server. As such FL systems are exposed to, or indeed the cause of, congestion across a wide set of network resources. Lossy compression can be used to reduce the size of exchanged files and associated delays, at the cost of adding noise to model updates. By judiciously adapting clients' compression to varying network congestion, an FL application can reduce wall clock training time. To that end, we propose a Network Adaptive Compression (NAC-FL) policy, which dynamically varies the client's lossy compression choices to network congestion variations. We prove, under appropriate assumptions, that NAC-FL is asymptotically optimal in terms of directly minimizing the expected wall clock training time. Further, we show via simulation that NAC-FL achieves robust performance improvements with higher gains in settings with positively correlated delays across time.
Parikshit Hegde, Gustavo de Veciana, Aryan Mokhtari
2023-01-11T12:15:15Z
http://arxiv.org/abs/2301.04430v1
# Network Adaptive Federated Learning: Congestion and Lossy Compression

###### Abstract

In order to achieve the dual goals of privacy and learning across distributed data, Federated Learning (FL) systems rely on frequent exchanges of large files (model updates) between a set of clients and the server. As such FL systems are exposed to, or indeed the cause of, congestion across a wide set of network resources. Lossy compression can be used to reduce the size of exchanged files and associated delays, at the cost of adding noise to model updates. By judiciously adapting clients' compression to varying network congestion, an FL application can reduce wall clock training time. To that end, we propose a Network Adaptive Compression (NAC-FL) policy, which dynamically varies the client's lossy compression choices to network congestion variations. We prove, under appropriate assumptions, that NAC-FL is asymptotically optimal in terms of directly minimizing the expected wall clock training time. Further, we show via simulation that NAC-FL achieves robust performance improvements with higher gains in settings with positively correlated delays across time.

federated learning, rate adaptation, resilience

## I Introduction

Communication costs and delays of sending model updates from clients to the server are a known bottleneck in training Federated Learning (FL) systems [1, 2, 3, 4]. Two common techniques used to alleviate this issue are: 1) _local computations_, where clients perform several local steps before communicating with the server, and 2) _(lossy) compression_, where clients communicate quantized/compressed updates to the server. The end goal of these approaches is to minimize the _wall clock time_ for convergence of the training algorithm (hereon referred to as the FL algorithm) by reducing the amount of data communicated from clients to the server. To this end, several works have analyzed the relationship between compression, local computations and the number of rounds needed by FL algorithms to converge [5, 6, 7, 8, 9, 10, 11, 12]. However, these works ignore the impact of changing network congestion, _both across clients and across time_, on the wall clock time to converge. For instance, a client may choose a high degree of compression when it sees high network congestion, while a client seeing lower congestion may _opportunistically_ choose not to compress as much.

In this work, we ask the following question: "Can we design a policy that adapts the amount of compression across clients and time according to changing network conditions in order to optimize the wall clock time?" To answer this question, we first characterize the impact that changing network congestion and an adaptive compression policy have on the wall clock time. Second, we propose the Network Adaptive Compression for Federated Learning (NAC-FL) policy that judiciously chooses compression levels based on network congestion to minimize the wall clock time. Crucially, NAC-FL does not rely on prior knowledge of the distribution of network congestion. Instead, it learns to optimize its compression decisions on-the-fly based on the congestion seen by clients. NAC-FL works in an opportunistic manner by adaptively choosing high or low amounts of compression across clients and across time based on low or high network congestion. It further considers two effects that compression has on the wall clock time.
First, with an increasing amount of compression, the FL algorithm would require more communication rounds to converge, as the server receives "noisier", and hence less accurate, model updates. Second, with higher degrees of compression, the duration of each round would decrease, as a smaller model update is communicated. Since the wall clock time is affected by both the number of rounds and the duration of each round (it is effectively the product of the two quantities), a policy for choosing compression levels should consider them jointly. Fig. 1 provides an illustrative visualization. Hence, NAC-FL aims to find the "sweet-spot" compression levels over time-varying network congestion.

**Contributions.** We propose a general framework to study how to best adapt the compression of client model updates. Assuming a stationary Markov model for the underlying network congestion state, we show that optimal policies are state dependent and characterize the expected stopping time for convergence to a predefined model accuracy. This characterization provides the underlying insight for our proposed NAC-FL policy. To our knowledge, this is the first policy for compression that adapts to the stochastic variations of the underlying network congestion process. Under appropriate assumptions on the FL algorithm and the underlying network congestion and delays, we provide a proof of the asymptotic optimality of NAC-FL in terms of minimizing the mean time until the convergence criterion is met. To our knowledge, this is the first theoretical result of this type. Finally, we demonstrate via simulation the performance gains and robustness of NAC-FL vs. alternative fixed compression and/or fixed error-per-round policies. We explore a variety of models for network congestion, finding in particular that NAC-FL excels in the practically relevant setting where the network sees positive correlations in congestion across time.

### _Related Work_

Perhaps the papers most related to our work are [13, 14, 15, 16, 17], which explored adaptive compression schemes for FL settings. In [13, 14, 15], the authors propose adapting compression to network congestion. In these works, the algorithm to select compression has a per-round budget, e.g., a budget on delay (or compression error) per round, and possibly heterogeneous compression levels are chosen across the clients based on the current network congestion to minimize the compression error (or delay) for the round. These works exploit the diversity of network congestion across the clients, but _not across time_. Meanwhile, [16, 17] have observed that using a higher amount of compression at the start and gradually reducing compression through time may improve the wall clock time. Our proposed policy is novel in that it learns how to best exploit congestion variation across clients and across time to optimize the wall clock time.

Another line of work that aims to reduce the overall communication cost is client sampling [18, 19, 20, 21], where at each round only a subset of the clients are chosen to participate. The authors of [21] propose a client sampling and power control policy that adapts to the time-varying channels of clients sharing a single base station and optimizes a proxy for wall clock time. Overall, we view lossy compression and client sampling as alternative approaches geared at addressing communication bottlenecks. A study of how to jointly adapt lossy compression and client sampling to changing network congestion is left for future work.
### _Paper Organization_

In Section II, we introduce our system model. In Section III, we propose our NAC-FL algorithm for lossy compression and, under appropriate assumptions, prove it is asymptotically optimal. Section IV is devoted to exploring the method for several problem instances, and in particular for various models of the underlying network congestion in terms of correlation across clients and time. In Section V, we comment on the practical aspects of estimating the file transfer delays of clients when deploying NAC-FL. Finally, in Section VI, we close the paper with some concluding remarks.

**Notation**.: _Throughout this document, unless otherwise mentioned, quantities denoted with lowercase letters correspond to constants, and uppercase letters correspond to random variables. Bold symbols correspond to vectors, and regular symbols indicate scalars. For example, \(\mathbf{x}\) is a constant vector, \(\mathbf{X}\) is a random vector, \(x\) is a constant scalar, and \(X\) is a random scalar/variable. Lowercase and uppercase forms of the same letter correspond to constant and random variable notions of the same quantity. A sequence indexed by \(n\) will be denoted as \((x^{n})_{n}\)._

## II Model Setup

In this paper, we focus on a federated architecture, where a server aims to find a model that performs well with respect to the data of a group of \(m\) clients, and in which nodes exchange updates based on their local information with only the server. More precisely, suppose the loss function associated with client \(j\) is denoted by \(f_{j}(\mathbf{w})\), where \(\mathbf{w}\) represents the weights of the model, e.g., the weights of a neural network. The goal is to find the model that minimizes the average loss across clients

\[f(\mathbf{w})=\frac{1}{m}\sum_{j=1}^{m}f_{j}(\mathbf{w}).\]

The FL algorithm proceeds in rounds. Each round consists of two stages: (i) a local stage in which each client updates the most recent model received from the server via gradient-based updates based on its local data, and (ii) an aggregation stage in which the server updates the global model by aggregating the local updates received from clients. We shall let \(\mathbf{w}^{n}\) denote the global model at the server at round \(n\). Further, we let \(\tau^{n}\) denote the total number of local steps (such as gradient steps) that each client performs at round \(n\), and let \(\mathbf{w}_{j}^{\tau^{n},n}\) denote the resulting local model at node \(j\). In this paper, we are interested in the setting where each client sends a compressed version \(\mathbf{\tilde{g}}_{Qj}^{n}\) of its local model \(\mathbf{w}_{j}^{\tau^{n},n}\) to the server using a lossy compression algorithm (a _compressor_) \(\mathcal{Q}(\cdot,\cdot)\). The compressor accepts a vector \(\mathbf{x}\) and a parameter \(q\in[0,q_{\max}]\) indicating the amount of compression, with the maximum value being \(q_{\max}\), and outputs \(\hat{\mathbf{X}}=\mathcal{Q}(\mathbf{x},q)\), which is an approximation of \(\mathbf{x}\) but has a decreased file size as compared to \(\mathbf{x}\). \(\hat{\mathbf{X}}\) is capitalized to highlight that the compressor \(\mathcal{Q}(\cdot,\cdot)\) may use randomness in its compression.

Fig. 1: Illustration of how compression level affects round duration, number of rounds and wall clock time.

We shall denote by \(q_{j}^{n}\) the compression
parameter used by client \(j\) for round \(n\), and denote by \(\mathbf{q}^{n}\triangleq(q_{j}^{n})_{j=1}^{m}\) the vector of parameters used by the clients in round \(n\). After receiving updates from all the clients, the server aggregates the compressed local models and produces the next global model \(\mathbf{w}^{n+1}\). Given a target tolerance \(\varepsilon>0\), the goal of FL is to generate a sequence of global models until on some round \(r_{\varepsilon}\) a prespecified _stopping criterion_ is first met, e.g., the norm of the global loss function gradient is at most \(\varepsilon\), i.e., \(\|\nabla f(\mathbf{w}^{r_{\varepsilon}})\|\leq\varepsilon\). Our goal is to find an adaptive compression policy that dynamically adapts to the possibly time-varying network states such that the target accuracy is achieved with a minimum overall wall clock time.

We formalize the _overall wall clock time_, denoted \(t_{\varepsilon}\), required to achieve the target accuracy as follows. The duration \(d(\tau^{n},\mathbf{q}^{n},c^{n})\) of a round \(n\) depends on:

* \(\tau^{n}\), the number of local computations performed by clients, which we will assume to be the same across clients;
* \(\mathbf{q}^{n}\), an \(m\)-dimensional vector of clients' compression parameters;
* \(c^{n}\), the _network state_, which models network congestion and is assumed to be an element of a finite set \(\mathcal{C}\).

This allows some flexibility; e.g., the round's duration may depend on the max delay to deliver the model updates from clients to the server, or on the sum of the delays if clients share a single resource in TDMA (Time Division Multiple Access) fashion. The total wall clock time is then given by

\[t_{\varepsilon}=\sum_{n=1}^{r_{\varepsilon}}d\left(\tau^{n},\mathbf{q}^{n},c^{n}\right). \tag{1}\]

In our system model, the sequence of network states, \((c^{n})_{n}\), is assumed to be exogenous, i.e., not controlled by the server or the clients nor by their choices of \(\tau^{n}\) and \(\mathbf{q}^{n}\). The delays associated with the server multicasting global models to clients are also assumed to be exogenous, i.e., they cannot be controlled by the FL server/clients; the multicast models are not compressed and are hence not part of the model. Still, in this work, based on observing the network state, we will devise an approach to select the clients' compression parameters so as to minimize the wall clock time. As discussed in Section V, in practice, observation of the network state may involve lightweight in-band estimation by probing the delays of message bits as they are delivered in a given round.

A policy for choosing compression parameters is called a _state dependent stationary policy_ if it can be expressed as a function \(\mathbf{\pi}\) of the current network state, i.e., \(\mathbf{q}^{n}=\mathbf{\pi}(c^{n})\) for all rounds \(n\in\mathbb{N}\). Such a policy will be referred to simply as policy \(\mathbf{\pi}\). Given a random sequence of network states, \((\mathrm{C}^{n})_{n}\), let \(R_{\varepsilon}^{\mathbf{\pi}}\) be the random variable denoting the minimum number of rounds needed to converge to error tolerance \(\varepsilon\) under policy \(\mathbf{\pi}\).
Then, the corresponding wall clock time, denoted by \(T_{\varepsilon}^{\mathbf{\pi}}\), is expressed as,

\[T_{\varepsilon}^{\mathbf{\pi}}=\sum_{n=1}^{R_{\varepsilon}^{\mathbf{\pi}}}d\left(\tau^{n},\mathbf{\pi}\left(\mathrm{C}^{n}\right),\mathrm{C}^{n}\right).\]

## III Network Adaptive Compression for Federated Learning (NAC-FL)

Our approach to designing a policy to adapt clients' compression parameters centers on recognizing that the expected wall clock time can be broken up into a product of the _expected number of rounds_ \(r_{\varepsilon}\) needed to converge to an error tolerance \(\varepsilon\) and the _average duration of each round_ \(\hat{d}\). We start by characterizing the relationship between \(r_{\varepsilon}\), \(\hat{d}\), and the sequence of selected quantization parameters \((\mathbf{q}^{n})_{n}\) and network states \((c^{n})_{n}\) for a given FL algorithm. Below we state an assumption relating \(r_{\varepsilon}\) to \((\mathbf{q}^{n})_{n}\). To that end, we introduce a strictly increasing, continuous and bounded scalar function \(h_{\varepsilon}:[0,q_{\max}]\rightarrow\mathbb{R}^{+}\) of the compression parameter \(q\) and an associated vector function \(\mathbf{h}_{\varepsilon}:[0,q_{\max}]^{\times m}\rightarrow\mathbb{R}_{+}^{m}\) of a compression vector \(\mathbf{q}\), where \(\mathbf{h}_{\varepsilon,j}(\mathbf{q})=h_{\varepsilon}(q_{j})\). We let \(\mathbf{h}_{\varepsilon}^{-1}\) denote the inverse of this vector function.

**Assumption 1**.: _For a given FL algorithm there exists a strictly increasing, continuous and bounded function \(h_{\varepsilon}(q)\) and a norm \(\left\|\cdot\right\|\) such that, given a sequence of compression parameters \((\mathbf{q}^{n})_{n}\), the FL algorithm has reached the desired error tolerance \(\varepsilon\) by round \(r\) if and only if,_

\[r>\frac{1}{r}\sum_{n=1}^{r}\left\|\mathbf{h}_{\varepsilon}\left(\mathbf{q}^{n}\right)\right\|.\]

The above assumption implies that the expected number of rounds can be written as the average of an increasing function of the sequence of selected quantization parameters. Roughly speaking, given a lossy compression policy that generates a stationary parameter sequence \((\mathbf{Q}^{n})_{n}\) whose marginal distribution is the same as that of the random vector \(\mathbf{Q}\), the above criterion means that the expected number of rounds to converge to the desired error tolerance is approximately \(\mathbb{E}[\|\mathbf{h}_{\varepsilon}\left(\mathbf{Q}\right)\|]\). This is a general condition that is motivated by the convergence bounds of several FL algorithms with compression, including [5, 8, 11]. In particular, in Appendix A, we illustrate this motivation for an extension of the FedCOM algorithm [11], where \(q\) indicates the normalized variance introduced by the compressor, the scalar function is \(h_{\varepsilon}(q)=O(\sqrt{q+1}/\varepsilon)\), and the norm is the \(L_{2}\) norm.

**Assumption 2**.: _For any sequence of compression parameters \((\mathbf{q}^{n})_{n}\), the minimum number of rounds \(r_{\varepsilon}\) needed to converge to an error tolerance \(\varepsilon\) is such that \(r_{\varepsilon}=\Theta(1/\mathrm{poly}(\varepsilon))\), where \(\mathrm{poly}(\varepsilon)\) denotes a polynomial of \(\varepsilon\)._

Assumption 2 is a natural assumption for gradient-based optimization algorithms. It requires the convergence guarantees for the FL algorithm to be such that, when we require a more accurate solution, the number of required communication rounds grows. This argument indeed holds even in settings where compressed signals are not exchanged.
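To make the normalized-variance parameterization of Assumption 1 concrete, one simple compressor whose normalized variance is exactly \(q\) (our illustration; not necessarily the quantizer of [5]) is unbiased random sparsification:

```python
import numpy as np

def sparsify(x, q):
    """Unbiased random sparsification with normalized variance q.

    Each coordinate is kept with probability p = 1/(1+q) and rescaled by 1/p,
    so E[Q(x)] = x and E||Q(x) - x||^2 = q * ||x||^2, i.e., the compression
    parameter q of the text. Higher q => sparser output => smaller file.
    """
    p = 1.0 / (1.0 + q)
    mask = np.random.rand(*x.shape) < p
    return np.where(mask, x / p, 0.0)
```

With this choice, \(q=0\) returns \(\mathbf{x}\) unchanged, and increasing \(q\) trades file size against the update noise that drives the \(h_{\varepsilon}(q)\) growth in the number of rounds.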
We also make the following additional assumption about the round duration function.

**Assumption 3**.: _Given a network state \(\mathrm{c}\), number of local computations \(\tau\), and compression parameters \(\mathbf{q}=\mathbf{h}_{\varepsilon}^{-1}(\mathbf{r})\), the round duration \(d\left(\tau,\mathbf{q},\mathrm{c}\right)=d\left(\tau,\mathbf{h}_{\varepsilon}^{-1}(\mathbf{r}),\mathrm{c}\right)\) is bounded, convex in \(\mathbf{r}\) and decreasing in every coordinate of \(\mathbf{r}\)._

In Assumption 3, the round duration being decreasing in \(\mathbf{r}\) is reasonable, since we expect more rounds as well as smaller file sizes with higher compression. The convexity is motivated by the notion that we use a "good compressor", as illustrated next. Consulting Fig. 2, for any two parameters \(q_{1},q_{2}\) and \(0<\alpha<1\), a new time-sharing compressor \(\mathcal{Q}^{\prime}\) may be derived which outputs \(\mathcal{Q}(\mathbf{x},q_{1})\) with probability \(\alpha\) and outputs \(\mathcal{Q}(\mathbf{x},q_{2})\) with probability \((1-\alpha)\). This compressor has expected round duration \(\alpha d(\tau,q_{1},\mathrm{c})+(1-\alpha)d(\tau,q_{2},\mathrm{c})\). And, in certain cases, its compression parameter is \(q_{\alpha}=\alpha q_{1}+(1-\alpha)q_{2}\) (such as when the stochastic quantizer parameterized by its normalized variance [5] is used). If \(\mathcal{Q}\) is a "good compressor", then its round duration, \(d(\tau,q_{\alpha},\mathrm{c})\), should be lower than that of the simple time-shared compressor, \(\alpha d(\tau,q_{1},\mathrm{c})+(1-\alpha)d(\tau,q_{2},\mathrm{c})\). Therefore, the convexity of the round duration function is a reasonable assumption for "good compressors" (considering \(h_{\varepsilon}(q)\propto q\) for simplicity).

**Assumption 4**.: _The sequence of network states \((\mathrm{C}^{n})_{n}\) forms an irreducible aperiodic stationary Markov Chain on a finite state space \(\mathcal{C}\) with invariant distribution \(\mu\)._

Assumption 4 is a natural assumption made to facilitate the analysis of algorithms (see e.g., [22]).

### _Expected Wall Clock Time Formulation_

Given the above mentioned assumptions, we are now ready to introduce the proposed framework. We begin by showing that we need only consider state dependent stationary policies for choosing compression parameters when optimizing the overall wall clock time.

**Lemma 1**.: _Under Assumptions 1-4, there exists a state dependent stationary policy to select compression parameters which is asymptotically optimal in terms of minimizing the wall clock time to reach a desired error tolerance of \(\varepsilon\) as \(\varepsilon\to 0\)._

The proof of Lemma 1 depends on two critical observations. First, since by Assumption 2 the number of rounds needed to converge grows large as \(\varepsilon\to 0\), one can expect the empirical distribution of the network states, modelled by the finite state Markov Chain, to concentrate around the invariant distribution prior to the stopping time. Second, due to the convexity of the round duration function in Assumption 3, given a sequence of network states, there exists a state dependent stationary policy that is near optimal and depends solely on the empirical distribution of the sequence. The proof is in Appendix C.
Here, we will focus on the setting where \(\varepsilon\) is small; hence, by Lemma 1, we only need to consider state dependent stationary policies, \(\mathbf{q}^{n}=\mathbf{\pi}(\mathrm{c}^{n})\).

**Lemma 2**.: _Under Assumptions 1-4 and a fixed number of local computations per round \(\tau\), for every \(\delta>0\), there exists an \(\varepsilon_{th}>0\) such that, for all \(\varepsilon<\varepsilon_{th}\) and any state-dependent stationary policy \(\mathbf{\pi}\), the expected wall clock time is bounded as,_

\[1-\delta\leq\frac{\mathbb{E}\left[T_{\varepsilon}^{\mathbf{\pi}}\right]}{\mathbb{E}[\left\|\mathbf{h}_{\varepsilon}\left(\mathbf{\pi}(\mathrm{C})\right)\right\|]\mathbb{E}[d\left(\tau,\mathbf{\pi}(\mathrm{C}),\mathrm{C}\right)]}\leq 1+\delta, \tag{2}\]

_where \(\mathrm{C}\) denotes a random variable whose distribution is \(\mu\) (see Assumption 4)._

Lemma 2 is proved in Appendix D. Define,

\[\hat{t}_{\varepsilon}^{\mathbf{\pi}}\triangleq\mathbb{E}[\left\|\mathbf{h}_{\varepsilon}\left(\mathbf{\pi}(\mathrm{C})\right)\right\|]\mathbb{E}[d\left(\tau,\mathbf{\pi}(\mathrm{C}),\mathrm{C}\right)]. \tag{3}\]

Due to Lemma 2, for small enough \(\varepsilon\), \(\hat{t}_{\varepsilon}^{\mathbf{\pi}}\) provides an accurate approximation for \(\mathbb{E}[T_{\varepsilon}^{\mathbf{\pi}}]\). Therefore, from here onwards, we shall assume implicitly that a small \(\varepsilon\) is considered and focus on finding a policy to optimize \(\hat{t}_{\varepsilon}^{\mathbf{\pi}}\).

Suppose the distribution of \(\mathrm{C}\) is known. Then, one could compute the expected wall clock time as given in (3) for any state dependent stationary policy \(\mathbf{\pi}\). In this case, we could determine an optimal policy \(\mathbf{\pi}^{*}\) by solving the optimization problem,

\[\min_{\mathbf{\pi}\in\mathcal{Q}_{m|\mathcal{C}|}}\quad\hat{t}_{\varepsilon}^{\mathbf{\pi}}\;=\;\mathbb{E}[\left\|\mathbf{h}_{\varepsilon}\left(\mathbf{\pi}(\mathrm{C})\right)\right\|]\mathbb{E}\left[d\left(\tau,\mathbf{\pi}(\mathrm{C}),\mathrm{C}\right)\right], \tag{4}\]

where \(\mathcal{Q}_{m|\mathcal{C}|}\) is the set of all state-dependent stationary policies. Alas, in practice, we often cannot directly solve the above problem, as the distribution of \(\mathrm{C}\) is unknown. Hence, below, we propose a stochastic-approximation-like algorithm that achieves the optimal wall clock time of \(\mathbf{\pi}^{*}\) asymptotically.

### _NAC-FL: Informal Description_

The idea underlying NAC-FL is to keep running estimates of \(\mathbb{E}\left[\left\|\mathbf{h}_{\varepsilon}\left(\mathbf{Q}\right)\right\|\right]\) and \(\mathbb{E}\left[d(\tau,\mathbf{Q},\mathrm{C})\right]\), i.e.,

\[\hat{r}_{\varepsilon}^{n}=\frac{1}{n}\sum_{k=1}^{n}\left\|\mathbf{h}_{\varepsilon}\left(\mathbf{q}^{(k)}\right)\right\|,\quad\hat{d}^{n}=\frac{1}{n}\sum_{k=1}^{n}d\left(\tau,\mathbf{q}^{(k)},\mathrm{c}^{(k)}\right).\]

Fig. 2: Illustration of a round duration as a function of compression parameter \(q\) for a fixed local computation \(\tau\) and network state \(\mathrm{c}\).
Given a network state \(\mathrm{c}^{n+1}\) at round \(n+1\) and a possible choice for compression parameters \(\mathbf{q}\), the running averages would be updated as follows, \[\hat{r}_{\varepsilon}^{n+1} =\frac{n}{n+1}\hat{r}_{\varepsilon}^{n}+\frac{1}{n+1}\left\|\mathbf{h} _{\varepsilon}\left(\mathbf{q}\right)\right\|, \tag{5}\] \[\hat{d}^{n+1} =\frac{n}{n+1}\hat{d}^{n}+\frac{1}{n+1}d\left(\tau,\mathbf{q},\mathrm{c}^{n+1} \right).\] As seen in (3), to minimize the wall clock time one should minimize \(\hat{r}_{\varepsilon}^{n+1}\hat{d}^{n+1}\), which can be expanded as, \[\hat{r}_{\varepsilon}^{n+1}\hat{d}^{n+1} =\frac{n}{(n+1)^{2}}\left[\hat{r}_{\varepsilon}^{n}d\left(\tau,\mathbf{ q},\mathrm{c}^{n+1}\right)+\hat{d}^{n}\left\|\mathbf{h}_{\varepsilon}(\mathbf{q})\right\|\right]+\frac{n^{2}}{(n+1)^{2}}\hat{r}_{\varepsilon}^{n}\hat{d}^{n}+O\left(\frac{1}{(n+1)^{2}}\right).\] Given that \(\hat{r}_{\varepsilon}^{n}\) and \(\hat{d}^{n}\) are constants at round \(n+1\), and neglecting the term \(O\left(1/(n+1)^{2}\right)\), an optimal choice for \(\mathbf{q}^{n+1}\) is \[\mathbf{q}^{n+1}=\underset{\mathbf{q}}{\text{argmin}}\quad\hat{r}_{\varepsilon}^{n}d \left(\tau,\mathbf{q},\mathrm{c}^{n+1}\right)+\hat{d}^{n}\left\|\mathbf{h}_{\varepsilon}\left( \mathbf{q}\right)\right\|. \tag{6}\] The NAC-FL policy is summarized in Algorithm 1. To retrieve the policy informally described above, the tunable parameters \(\left(\beta_{n}\right)_{n}\) and \(\alpha\) should be set to \(\beta_{n}=\frac{1}{n}\) and \(\alpha=1\). Consider two possible network states \(\mathrm{c}\) and \(\mathrm{c}^{\prime}\) at a round \(n\). If the delay under state \(\mathrm{c}\) is higher than under \(\mathrm{c}^{\prime}\) for any compression parameters, then NAC-FL would choose a higher compression amount \(\mathbf{q}\) for state \(\mathrm{c}\) than the compression amount \(\mathbf{q}^{\prime}\) for state \(\mathrm{c}^{\prime}\), i.e., \(\mathbf{q}>\mathbf{q}^{\prime}\) elementwise. This may be concluded from the selection policy of (6), noting that \(r_{\varepsilon}(\mathbf{q})\) is increasing in \(\mathbf{q}\) (Assumption 1) and \(d(\tau,\mathbf{q},\mathrm{c})\) is decreasing in \(\mathbf{q}\) (Assumption 3). Observe that since the estimates \(\hat{r}_{\varepsilon}^{n}\) and \(\hat{d}^{n}\) will initially change across rounds, NAC-FL may choose different compression parameters in two rounds for which the network was in the same state, i.e., NAC-FL is not a state-dependent stationary policy. Still, we will show NAC-FL is asymptotically near optimal. To develop this result we shall next present NAC-FL in a more formal manner. ``` Input : Initialization \(\hat{r}_{\varepsilon}^{(0)},\hat{d}^{(0)}\); step size schedule \(\{\beta_{n}\}_{n=1}^{\infty}\); parameter \(\alpha\). 1 for \(n=1,\dots\), until termination do 2   Server observes network state \(\mathrm{c}^{n}\); 3   \(\mathbf{q}^{n}=\underset{\mathbf{q}}{\text{argmin}}\ \ \alpha\hat{r}_{\varepsilon}^{(n-1)}d \left(\tau,\mathbf{q},\mathrm{c}^{n}\right)+\hat{d}^{(n-1)}\left\|\mathbf{h}_{\varepsilon} \left(\mathbf{q}\right)\right\|\); 4   \(\hat{r}_{\varepsilon}^{n}=(1-\beta_{n})\hat{r}_{\varepsilon}^{(n-1)}+\beta_{n} \left\|\mathbf{h}_{\varepsilon}\left(\mathbf{q}^{n}\right)\right\|\); 5   \(\hat{d}^{n}=(1-\beta_{n})\hat{d}^{(n-1)}+\beta_{n}d(\tau,\mathbf{q}^{n},\mathrm{c}^ {n})\); 6 end for ``` **Algorithm 1** NAC-FL
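To make the per-round update concrete, the following minimal Python sketch implements one round of Algorithm 1, with the argmin in (6) approximated by a search over a finite grid of candidate compression parameters; the callables `h_eps` and `round_duration`, as well as the grid `q_grid`, are placeholders assumed to be supplied by the user for the quantities \(\mathbf{h}_{\varepsilon}(\cdot)\) and \(d(\tau,\cdot,\cdot)\) of the model.

```python
import numpy as np

def nac_fl_round(n, c_n, r_hat, d_hat, h_eps, round_duration, q_grid,
                 tau=2, alpha=1.0):
    """One round of NAC-FL (Algorithm 1), sketched under the assumption that
    the argmin in (6) can be approximated by a finite grid search."""
    # Step 3: pick q minimizing alpha * r_hat * d(tau, q, c) + d_hat * ||h_eps(q)||.
    scores = [alpha * r_hat * round_duration(tau, q, c_n)
              + d_hat * np.linalg.norm(h_eps(q)) for q in q_grid]
    q_n = q_grid[int(np.argmin(scores))]
    # Steps 4-5: update the running estimates with step size beta_n = 1/n.
    beta_n = 1.0 / n
    r_hat = (1.0 - beta_n) * r_hat + beta_n * np.linalg.norm(h_eps(q_n))
    d_hat = (1.0 - beta_n) * d_hat + beta_n * round_duration(tau, q_n, c_n)
    return q_n, r_hat, d_hat
```

In a full simulation, this routine would be called once per communication round, with `q_n` then parameterizing the clients' compressors for that round.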
### _NAC-FL: Formal Description_ Our NAC-FL approach is also inspired by the Frank-Wolfe Algorithm [23]. We start by reformulating the optimization program in (4). Denote by \(V_{\varepsilon}\) the set of all possible pairs of expectations \(\left(\hat{r}_{\varepsilon},\hat{d}\right)\), \[V_{\varepsilon}=\left\{\left(\hat{r}_{\varepsilon},\hat{d}\right) :\exists\,\mathbf{\pi}\in\mathcal{Q}_{m|\mathcal{C}|}\text{ s.t. }\hat{r}_{\varepsilon}=\mathbb{E}\left[\left\|\mathbf{h}_{ \varepsilon}\left(\mathbf{\pi}(\mathrm{C})\right)\right\|\right],\right. \tag{7}\] \[\hat{d}=\mathbb{E}\left[d\left(\tau,\mathbf{\pi}(\mathrm{C}),\mathrm{ C}\right)\right]\Big{\}}.\] Using the set \(V_{\varepsilon}\), and denoting \(H(r,d)\triangleq rd\), we may write the optimization (4) characterizing the optimal policy \(\mathbf{\pi}^{*}\) as \[\min_{\hat{r}_{\varepsilon},\hat{d}}\{H(\hat{r}_{\varepsilon},\hat{d}):(\hat{r} _{\varepsilon},\hat{d})\in V_{\varepsilon}\}. \tag{8}\] In this case, from a point \((\hat{r}_{\varepsilon}^{n},\hat{d}^{n})\), the Frank-Wolfe update would be given as, \[(\hat{r}_{\varepsilon},\hat{d}) =\underset{(r,d)\in V_{\varepsilon}}{\text{argmin}}\quad\nabla H \left(\hat{r}_{\varepsilon}^{n},\hat{d}^{n}\right)^{\top}\binom{r}{d}, \tag{9}\] \[\hat{r}_{\varepsilon}^{n+1} =(1-\beta)\hat{r}_{\varepsilon}^{n}+\beta\hat{r}_{\varepsilon},\] \[\hat{d}^{n+1} =(1-\beta)\hat{d}^{n}+\beta\hat{d}.\] The gradient of \(H\) is \(\nabla H(\hat{r}_{\varepsilon},\hat{d})=\left(\hat{d}\quad\hat{r}_{\varepsilon} \right)^{\top}\). \(V_{\varepsilon}\) is a set of feasible averages of \(\hat{r}_{\varepsilon}\) and \(\hat{d}\); therefore, at round \((n+1)\), not all the pairs \((r,d)\in V_{\varepsilon}\) may be achievable. Hence, NAC-FL approximates equation (9) as, \[\mathbf{q}^{n+1}=\underset{\mathbf{q}}{\text{argmin}}\quad\hat{r}_{\varepsilon}^{n}d \left(\tau,\mathbf{q},\mathrm{c}^{n+1}\right)+\hat{d}^{n}\left\|\mathbf{h}_{ \varepsilon}\left(\mathbf{q}\right)\right\|.\] We have thus retrieved our proposed NAC-FL algorithm based on the Frank-Wolfe update, with one difference: the above derivation suggests the use of a fixed step size \(\beta\) at all rounds, while the previously derived algorithm used a decaying step size \(\beta_{n}=1/n\). In our simulations, we will embrace the latter. The following assumption is required to show the asymptotic optimality of NAC-FL. A state-dependent stationary policy \(\mathbf{\pi}\) maps from a domain of finite size \(|\mathcal{C}|\) to a range of positive-real vectors of dimension \(m\). Therefore, the policy may be represented by a positive-real vector, \(\underline{\mathbf{\pi}}\), of dimension \(m\,|\mathcal{C}|\). Further, a vector \(\mathbf{r}^{\underline{\mathbf{\pi}}}\) may be obtained by applying \(h_{\varepsilon}(\cdot)\) elementwise to the policy vector \(\underline{\mathbf{\pi}}\), \(\mathbf{r}^{\underline{\mathbf{\pi}}}\triangleq\mathbf{h}_{\varepsilon}\left(\underline{ \mathbf{\pi}}\right)\). This representation is used in the following assumption. **Assumption 5**.: _The objective function \(\hat{t}_{\varepsilon}^{\mathbf{\pi}}\) of the optimization problem in (4) is a strictly quasiconvex function in \(\mathbf{\pi}\) in the following sense,_ \[\mathbf{r}^{\underline{\mathbf{\pi}}^{\top}}\left(\nabla_{\mathbf{r}^{\underline{\mathbf{\pi}}}} \hat{t}_{\varepsilon}^{\mathbf{\pi}}\right)=0\ \implies\ \mathbf{r}^{\underline{\mathbf{\pi}}^{\top}}\left(\nabla_{\mathbf{r}^{\underline{\mathbf{\pi}}}} ^{2}\hat{t}_{\varepsilon}^{\mathbf{\pi}}\right)\mathbf{r}^{\underline{\mathbf{\pi}}}>0. 
\tag{10}\] Assumption 5 ensures that there is a unique state-dependent stationary policy \(\underline{\mathbf{\pi}}^{*}\) which optimizes (4). We have observed that the considered network model, compression model and the \(\left\|\mathbf{h}_{\varepsilon}\left(\mathbf{q}\right)\right\|\) function associated with the FedCOM algorithm indeed satisfy this assumption. Next we shall establish an optimality property for NAC-FL. To that end we shall consider executing NAC-FL without termination with \(\beta_{n}=\beta\) for all \(n\) and let \(\left(\mathbf{Q}_{\beta}^{n}\right)_{n}\), \(\hat{R}_{\varepsilon,\beta}^{n}\) and \(\hat{D}_{\beta}^{n}\) be the corresponding sequence of compression parameters and the associated estimates. **Theorem 1**.: _Let \(\boldsymbol{\pi}^{*}\) be the solution and \(\hat{t}_{\varepsilon}^{*}\) the minimum of the optimization problem in (4). If Assumptions 1-5 hold, then there exists a positive sequence \((\beta_{i})_{i=1}^{\infty}\) with \(\beta_{i}\to 0\) as \(i\to\infty\), such that for every \(\rho>0\), there exists a threshold \(n_{th}(\rho)\) such that,_ The proof of Theorem 1 is included in Appendix B. **Remark 1**.: _Theorem 1 should be interpreted with some subtlety. Say the desired error tolerance \(\varepsilon\) is so small that the number of rounds needed to converge under any compression policy satisfies \(r_{\varepsilon}\gg n_{th}(\rho)/\beta\). Then, based on Theorem 1, one can show that NAC-FL compression choices will be near optimal after \(n_{th}(\rho)/\beta\) rounds. Thereafter, since \(r_{\varepsilon}\) is large, NAC-FL will make near optimal choices for long enough, leading to a near optimal expected wall clock time._ We further remark on the meaning of the asymptotic result in the context of minimizing the wall clock time. In applications that require a very low error tolerance \(\varepsilon\), one needs a large number (i.e., in the asymptotic region) of communication rounds \(r_{\varepsilon}\) for convergence. Therefore, even though the wall clock time obtained by using NAC-FL may be large in this setting, it is near-optimal compared to other methods of choosing compression parameters. ## IV Simulation In this section, we present our simulation results. We begin by describing additional model details used in our simulations. ### _Additional Model Details_ #### IV-A1 Compression Model We shall use the _stochastic quantizer_ in [5], which we will denote as \(\mathcal{Q}_{q}(\cdot,b)\). The quantizer has a parameter \(b\in\{1,\ldots,32\}\) corresponding to the number of bits used to represent each co-ordinate, in addition to the bit used to denote signs. When input a vector \(\boldsymbol{x}\), it outputs, \[\mathcal{Q}_{q}(\boldsymbol{x},b)=\left\|\boldsymbol{x}\right\|_{\infty} \text{sign}(\boldsymbol{x})\zeta(\boldsymbol{x},b) \tag{11}\] where \(\text{sign}(\boldsymbol{x})\) is the element-wise sign operator and where the function \(\zeta(\boldsymbol{x},b)\) uniformly quantizes each co-ordinate amongst \(2^{b}-1\) levels between 0 and 1. That is, if \(|x_{i}|/\left\|\boldsymbol{x}\right\|_{\infty}\in\left[\frac{l}{2^{b}-1},\frac{l +1}{2^{b}-1}\right]\), then it is quantized as, \[\zeta_{i}(\boldsymbol{x},b)=\begin{cases}\frac{l+1}{2^{b}-1},&\text{with prob. }\frac{|x_{i}|}{\|x\|_{\infty}}(2^{b}-1)-l,\\ \frac{l}{2^{b}-1},&\text{otherwise}.\end{cases}\] When \(\boldsymbol{x}\) is quantized to \(b\) bits per co-ordinate, its file size is given by the function \(s(b)=\left\|\boldsymbol{x}\right\|_{0}(b+1)+32\) bits. Here, the zero-norm, \(\left\|\boldsymbol{x}\right\|_{0}\), gives the length of the vector, the \(1\) indicates the bit used to denote the sign, and the 32 bits are for a floating point number denoting the norm, \(\left\|\boldsymbol{x}\right\|_{\infty}\). Finally, if client \(j\) uses the parameter \(b_{j}\), then the vector of parameters used by the clients is denoted as \(\boldsymbol{b}=(b_{j})_{j=1}^{m}\). 
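As a concrete reference, the following Python sketch implements the quantizer (11) and the file size \(s(b)\); the rounding probabilities match the definition of \(\zeta\), so the quantizer is unbiased.

```python
import numpy as np

def stochastic_quantizer(x, b):
    """Stochastic quantizer Q_q(x, b) of Eq. (11): each |x_i| / ||x||_inf is
    randomly rounded to one of 2^b - 1 uniform levels in [0, 1] so that the
    output is unbiased, then rescaled by ||x||_inf and the sign of x_i."""
    norm = np.max(np.abs(x))
    if norm == 0.0:
        return np.zeros_like(x)
    levels = 2 ** b - 1
    scaled = np.abs(x) / norm * levels      # lies in [0, levels]
    low = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiasedness).
    round_up = np.random.random(x.shape) < (scaled - low)
    zeta = (low + round_up) / levels
    return norm * np.sign(x) * zeta

def file_size_bits(dim, b):
    """s(b) = ||x||_0 (b + 1) + 32: b quantization bits plus one sign bit per
    coordinate, and 32 bits for the float ||x||_inf."""
    return dim * (b + 1) + 32
```

A quick check of unbiasedness: averaging `stochastic_quantizer(x, 2)` over many draws recovers `x` up to Monte Carlo error.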
#### IV-A2 Network Congestion Model For purposes of evaluating the performance of various algorithms over different types of network congestion, we propose the following general, albeit idealized, model. We let \(\boldsymbol{C}^{n}\) be an \(m\)-dimensional random vector denoting the _Bit Transmission Delay_ (BTD) for clients during round \(n\). We further let \(\boldsymbol{C}^{n}=\text{exp}\left(\boldsymbol{Z}^{n}\right)\), i.e., the coordinate-wise exponentiation of an \(m\)-dimensional first-order autoregressive process \(\left(\boldsymbol{Z}^{n}\right)_{n=0}^{\infty}\) with \(\boldsymbol{Z}^{0}=\boldsymbol{0}\), where \[\boldsymbol{Z}^{n}=A\,\boldsymbol{Z}^{(n-1)}+\boldsymbol{E}^{n},\quad\forall n \geq 1, \tag{12}\] where \(A\) is an \(m\times m\) deterministic matrix, and \(\boldsymbol{E}^{n}\sim\mathcal{N}(\boldsymbol{\mu},\Sigma)\) are i.i.d. \(m\)-dimensional normal random vectors. Different correlations across time and clients may be modelled by varying \(A\), \(\boldsymbol{\mu}\) and \(\Sigma\). The marginal distributions of \(\boldsymbol{C}^{n}\) are thus log-normal but can be correlated in different ways based on the underlying autoregressive process. In particular: **Homogeneous Independent:** the parameters are set to \(A=0\), \(\boldsymbol{\mu}=\boldsymbol{1}\), and \(\Sigma=\sigma^{2}I\). This results in a process which is independent and identically distributed across clients and time. **Heterogeneous Independent:** the parameters are set to \(A=0\), \(\mu_{i}=0\) for \(i\in\{1,\ldots,5\}\) and \(\mu_{i}=2\) for \(i\in\{6,\ldots,10\}\), and \(\Sigma=I\). This results in a process which is independent across clients and time, with the BTD being lower for the first 5 clients compared to the rest. **Perfectly correlated:** the parameters are set to \(A\) such that \(A_{i,j}=\frac{a}{m}\) where \(a\in(0,1)\), \(\boldsymbol{\mu}=\boldsymbol{0}\), and \(\Sigma\) such that \(\Sigma_{i,j}=\sigma^{2}=1\). This results in a process where all clients see the same positively correlated time-varying delays. **Partially correlated:** the parameters are set to \(A\) such that \(A_{i,j}=\frac{a}{m}\), \(\boldsymbol{\mu}=\boldsymbol{0}\), and \(\Sigma\) such that \(\Sigma_{i,i}=1\) and \(\Sigma_{i,j}=1/2\) for \(i\neq j\). This results in a process where delays are positively correlated across clients and time. #### IV-A3 Model for Round Durations We will model the duration of a round as the maximum across clients' delays, i.e., \[d(\tau,\boldsymbol{b},c)=\max_{j}[\theta\tau+c_{j}s(b_{j})],\] where \(\theta\) represents the compute time per local computation, and \(c_{j}s(b_{j})\) is the BTD of client \(j\) times the size of client \(j\)'s file, capturing the time taken to communicate its update. For simplicity we will set \(\theta=0\). 
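The two models above can be simulated directly. The following Python sketch draws BTDs from (12) and evaluates the round duration; the parameter settings shown correspond to the homogeneous independent case, and the model dimension used for \(s(b)\) is only a rough stand-in for the size of the network described below (ignoring biases).

```python
import numpy as np

def simulate_btd(A, mu, Sigma, n_rounds, rng):
    """BTD process of Eq. (12): C^n = exp(Z^n), Z^n = A Z^{n-1} + E^n,
    with E^n i.i.d. N(mu, Sigma) and Z^0 = 0."""
    m = len(mu)
    z = np.zeros(m)
    out = np.empty((n_rounds, m))
    for k in range(n_rounds):
        z = A @ z + rng.multivariate_normal(mu, Sigma)
        out[k] = np.exp(z)
    return out

def round_duration(tau, b, c, dim, theta=0.0):
    """d(tau, b, c) = max_j [theta*tau + c_j s(b_j)], with s(b) = dim*(b+1) + 32."""
    return max(theta * tau + cj * (dim * (bj + 1) + 32) for cj, bj in zip(c, b))

rng = np.random.default_rng(0)
m = 10
# Homogeneous independent case: A = 0, mu = 1, Sigma = I (sigma^2 = 1).
C = simulate_btd(np.zeros((m, m)), np.ones(m), np.eye(m), 200, rng)
d0 = round_duration(tau=2, b=[2] * m, c=C[0], dim=784 * 250 + 250 * 10)
```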
#### IV-A4 Compression Level Choice Policies We compare NAC-FL to the following policies. **Fixed Bit:** Here, a number \(b\) is fixed, and all the clients use the stochastic quantizer \(\mathcal{Q}_{q}(\boldsymbol{x},b)\) from (11) with the parameter \(b\). We present results for \(b\in\{1,2,3\}\), as we did not notice a performance improvement for larger parameters in our experiments. **Fixed Error:** This method was suggested in [13] and is parameterized by a number \(q\). At each round \(n\), the parameters \(\boldsymbol{b}^{n}\) of the stochastic quantizers are chosen such that the average normalized variance \(\bar{q}^{n}\) (see equation (15)) is smaller than \(q\) and the duration of the round \(d(\tau,\mathbf{q}^{n},\mathrm{c}^{n})\) is minimized. We fix \(q=5.25\) in all our experiments after finding it to perform well across different settings. #### IV-A5 Machine Learning Model We consider \(m=10\) clients and the MNIST dataset [24], which may be distributed homogeneously or heterogeneously amongst the clients. Since data is heterogeneous across clients in most FL applications, we consider the heterogeneous data case; that is, each client has data corresponding to 1 unique label. The MNIST dataset has 60,000 training samples, 10,000 test samples and 10 labels. The clients and the server aim to train a fully connected neural network with the architecture \((784,250,10)\) with the sigmoid activation for the hidden layer. The learning rate is initialized to \(\eta_{0}=0.07\) and is decayed by a factor \(0.9\) every 10 rounds. The aggregation rate and local computations per round are fixed throughout the training to \(\gamma=1\) and \(\tau=2\), respectively. As for the parameters of the NAC-FL policy, we set \(\beta_{n}=\frac{1}{n}\) and \(\alpha=2\). We measure the performance of the global model using the following. **Training Loss:** The training loss of the global model is the empirical cross-entropy loss across the entire set of training samples. **Test Accuracy:** The test accuracy is measured over all the test samples. Here, in some experiments, we run 20 simulations with different random seeds, and report the mean, 90th percentile and 10th percentile times to reach a test accuracy of 90%. The 90th and 10th percentile scores are reported to capture the variation in performance across the 20 simulations. We also report a _gain_ metric, which is the sample mean of the time gained to reach 90% accuracy by NAC-FL compared to another policy, reported as a percentage. For instance, let \(x_{i}\), \(y_{i}\) be the times under NAC-FL and another policy for a random seed \(i\); then the gain is \(\frac{100}{20}\sum_{i=1}^{20}\left(y_{i}/x_{i}-1\right)\). ### _Simulation Results_ #### IV-B1 Homogeneous Independent BTD We simulated over \(\sigma^{2}\in\{1,2,3\}\) in order to study the change in performance with increasing variance. We observe that in all the cases, NAC-FL and the Fixed Error policy have very similar performance across all the considered statistics. This is because the Fixed Error policy was designed to operate well in the i.i.d. network delay case. However, both NAC-FL and the Fixed Error policy perform better than all the Fixed Bit policies according to all the statistics across all the considered parameters. Moreover, we observed that the gap in performance to the Fixed Bit policies increased with increasing variance. For instance, the gain over the best Fixed Bit policy increased from 145% to 250% when the variance was increased from 1 to 3, while the gain over the worst Fixed Bit policy increased from 314% to 881%. This is as expected, because both NAC-FL and the Fixed Error policy adapt to the heterogeneous delay of clients at any given time. 
Surprisingly, NAC-FL lagged behind the Fixed Error policy in some metrics, but it performed better in terms of the gain metric in all the 3 cases, with the gain over the Fixed Error policy ranging from 1% to 8%. #### IV-B2 Heterogeneous Independent BTD We considered this case since the last 5 clients have consistently worse delay, so NAC-FL and the Fixed Error policy would consistently compress the updates of those clients heavily. Since the data distribution is heterogeneous, it is possible that heavy compression of the updates from specific clients throughout the training may hurt performance. On the other hand, the Fixed Bit policies use the same amount of compression across all clients equally, irrespective of their delays. Still, we observed that NAC-FL and the Fixed Error policy perform better than the Fixed Bit policies, as can be seen in Table II. In fact, performance in terms of the gain metric is very comparable to the i.i.d. network delay case with \(\sigma^{2}=1\) in Table I. #### IV-B3 Perfectly Correlated BTD In this case all clients share a common delay component, which evolves as the scalar autoregressive process \[Z^{n}=a^{\prime}Z^{(n-1)}+E^{n},\qquad E^{n}\sim\mathcal{N}(0,1). \tag{13}\] To quantify its variability, we use the _asymptotic variance_, denoted \(\sigma_{\infty}^{2}\), which is designed to capture the variance, and long and short term correlations, of a random process, \[\sigma_{\infty}^{2}\triangleq\lim_{n\rightarrow\infty}\frac{\mathbb{E}\left[ \left(Z^{(1)}+\cdots+Z^{n}\right)^{2}\right]}{n}. \tag{14}\] For the autoregressive process in (13), it may be computed to be \(\sigma_{\infty}^{2}=1/(1-a^{\prime})^{2}\). Table III shows the performance of the different policies under varying asymptotic variance of the marginals. We observe that, in addition to beating the baseline Fixed Bit policies on all the metrics, NAC-FL performs better than the Fixed Error policy in most metrics as well. Considering the gain metric, we observe a gain of 13% over the Fixed Error policy for the low asymptotic variance of \(\sigma_{\infty}^{2}=1.56\), which grows as large as 27% for the higher asymptotic variance of \(\sigma_{\infty}^{2}=4\). Notably, in terms of the 10th percentile time to reach 90% accuracy, the Fixed Error policy required 40%, 23% and 32% more time compared to NAC-FL in the \(\sigma_{\infty}^{2}=1.56\), \(4\) and \(16\) cases, respectively. #### IV-B4 Partially Correlated BTD In Table IV, we show results for the partially correlated BTD case with asymptotic variance \(\sigma_{\infty}^{2}=4\). We consider this case to demonstrate that NAC-FL is effective with positive (but not 100%) correlation across clients as well. Indeed, we observe NAC-FL performing better than all the other policies across all the considered metrics, with a gain of 10% over the Fixed Error policy and 129% over the best Fixed Bit policy. Notably, in terms of the 10th percentile and 90th percentile metrics, NAC-FL outperformed the Fixed Error policy by 30% and 15%, respectively. Figure 3 contains sample path plots of Training Loss and Accuracy vs Wall Clock Time for the independent homogeneous (\(\sigma^{2}=2\)), heterogeneous and perfectly correlated (\(\sigma_{\infty}^{2}=4\)) BTD cases. Both the accuracy and loss plots for NAC-FL and Fixed Error overlap in the independent homogeneous and heterogeneous BTD cases, as expected. However, in the perfectly correlated BTD case, NAC-FL dominates the performance of the Fixed Error policy. In summary, we observe that NAC-FL's performance is robust under the range of network models considered. NAC-FL vastly outperformed the baseline Fixed Bit policies in all the network models. 
The performance of NAC-FL was observed to be similar to that of the Fixed Error policy in the independent BTD settings, albeit it outperformed the Fixed Error policy in terms of the gain metric under all the network models. Notably, the gap between NAC-FL and the Fixed Error policy was observed to be noticeably high in the perfectly and partially correlated BTD settings, where NAC-FL was able to adapt to positive correlations of the BTD across time, whereas Fixed Error could not. ## V NAC-FL in Practice In this section we briefly comment on some practical aspects underlying the estimation of model update delays. This involves estimating the network's current average BTD to each client. A simple approach to doing so is to observe that, for the stochastic quantizer described in Section IV-A1, clients always send the vector of signs of their updates, no matter how many bits per coordinate are chosen. So, as the clients send their signs, the server may probe the delay characteristics to estimate the BTD of clients without having to request vacuous (non-update-related) bits to do so. It may then use these estimates to perform the optimization in (6) for the round. ## VI Conclusion Due to their distributed character, FL algorithms are exposed to congestion across a potentially large number of network resources, whence one might say they are exposed to network congestion and variability at scale. Building adaptive algorithms that minimize the impact of time-varying congestion across clients presents a significant challenge, particularly when the aim is to directly optimize the expected wall clock time. NAC-FL exemplifies a new class of robust algorithms to optimally adapt clients' lossy compression. This paper further provides the technical roadmap to formalizing and showing asymptotic optimality for such algorithms.
2308.08701
Optimal Transport with Defective Cost Functions with Applications to the Lens Refractor Problem
We define and discuss the properties of a class of cost functions on the sphere which we term defective cost functions. We then discuss how to extend these definitions and some properties to cost functions defined on Euclidean space and on surfaces embedded in Euclidean space. Some important properties of defective cost functions are that they result in Optimal Transport mappings which map to points along geodesics and have a nonzero mixed Hessian term, among other important properties. We also compute the cost-sectional curvature for a broad class of cost functions, to verify some known examples of cost functions and to easily prove positive cost-sectional curvature for some new cost functions. Finally, we discuss how we can construct a regularity theory for defective cost functions by satisfying the Ma-Trudinger-Wang (MTW) conditions on an appropriately defined domain. As we develop the regularity theory of defective cost functions, we discuss how the results apply to a particular instance of the far-field lens refractor problem.
Axel G. R. Turnquist
2023-08-16T23:14:35Z
http://arxiv.org/abs/2308.08701v2
# Optimal transport with defective cost functions with applications to the lens refractor problem ###### Abstract. We define and discuss the properties of a class of cost functions on the sphere which we term defective cost functions. We discuss how to extend these definitions and some properties to cost functions defined on Euclidean space and on surfaces embedded in Euclidean space. Some important properties of defective cost functions are that they result in Optimal Transport mappings which map to points along geodesics and have a nonzero mixed Hessian term, among other important properties. We also compute the cost-sectional curvature for a broad class of cost functions using the notation built from defining defective cost functions and apply the formulas to a few known examples of cost functions. Finally, we discuss how we can construct a regularity theory for defective cost functions by satisfying the Ma-Trudinger-Wang (MTW) conditions on an appropriately defined domain. As we develop the regularity theory of defective cost functions, we discuss how the results apply to a particular instance of the far-field lens refractor problem and to cost functions that already fit into the preexisting regularity theory, but now by employing simple formulas derived in this paper. ## 1. Introduction Recently, much work has been done on deriving PDE formulations of freeform optics problems with lenses and reflectors, see for example the work in [16, 14, 15, 1, 4], resulting in Optimal Transport PDE and, more generally, generated Jacobian equations posed on the unit sphere \(\mathbb{S}^{2}\subset\mathbb{R}^{3}\). The general problem is to find, given source and target light intensities modeled as probability measures \(\mu\) and \(\nu\) on \(\mathbb{S}^{2}\), the function that determines the shape of a lens (or reflector) system that achieves the desired light intensity output pattern. The PDE formulations for some of these optics problems have an important interpretation in terms of Optimal Transport. On the theoretical side, some interest in the menagerie of Optimal Transport problems that arise from these optics problems stems from the rather "exotic" nature of these cost functions in their Optimal Transport interpretation. For optical problems concerning redirecting directional light to a desired far-field intensity, the resulting PDE is also posed on the sphere. We will show that, in our definition of defective cost functions, some cost functions on the sphere which appear "exotic" actually satisfy some very natural conditions and thus lead to well-behaved Optimal Transport mappings. A regularity theory has been built for a class of cost functions in Theorem 4.1 of Loeper [9]. The cost functions we address in this manuscript do not fit into the theory established by Loeper, and can exhibit "defects" that do not allow for mass to be transported too far on the unit sphere. In Section 3, we study such cost functions on the unit sphere \(\mathbb{S}^{2}\subset\mathbb{R}^{3}\), on \(\mathbb{R}^{d}\), and on surfaces \(M\subset\mathbb{R}^{3}\). We introduce a function \(\beta\), which characterizes how far the mapping moves along a geodesic. We then introduce the definition of a defective cost function, which is a cost function that changes concavity as a function of the Riemannian distance. We show that, for the lens refractor problem, this is the property that leads to the solvability condition observed earlier. 
Using these definitions and some useful formulas, we show how defective cost functions have a nonzero mixed Hessian determinant on the sphere \(\mathbb{S}^{2}\) and on \(\mathbb{R}^{d}\). We end this section with a computation of the cost-sectional curvature for defective cost functions on \(\mathbb{S}^{2}\) and \(\mathbb{R}^{d}\) and apply the results to some known examples, confirming previous works in the literature. In Section 4, we first show how defective cost functions satisfy the MTW conditions, with the caveat that they must also, independently, satisfy the cost-sectional curvature condition. We then show a computation for power cost functions on the unit sphere and how they satisfy every condition in Theorem 4.1 of Loeper [9], except for the strict cost-sectional curvature condition, unless we are dealing with the squared geodesic cost. We then show how one can control the distance the Optimal Transport mapping moves mass by imposing some controls on the source and target densities. This allows us to build a \(C^{\infty}\) regularity theory for defective cost functions following the outline of the regularity theory in Loeper [9]. In Section 5 we summarize the contributions of the paper. ## 2. Preliminaries In this section, we first introduce the far-field lens refractor problem. We then show a simple example where the lens refractor problem cannot be solved. ### Far-Field Lens Refractor Problem #### 2.1.1. Reshaping Directional Light via a Lens of Refractive Index \(n\) Relative to Ambient Space One optical setup for the lens refractor problem is as follows. Light radiating from a point in a vacuum with intensity \(f\) goes in the direction \(x\) and then passes through a lens of refractive index \(n>1\) whose interior edge is a sphere and whose outer edge is described by the radial function \(\mathcal{L}=u_{1}(x)x\), where \(u_{1}(x)>1\). The values \(n=1.333\), \(n=1.52\) and \(n=2.417\) are typical for water, glass and diamond, respectively. After passing through the lens, the light travels in the new direction \(y\) in a vacuum and describes an intensity pattern \(g\) in the far field, see Figure 1. #### 2.1.2. Interior and Exterior Media Separated By Outer Lens Edge A similar setup is described in [4], where there are two media, the interior medium and the exterior medium, with different refractive indices, \(n_{I}\) and \(n_{O}\), respectively. The inner edge of a refractor with refractive index \(n_{I}\) is spherical with center at the origin and the outer edge is given by the function \(\mathcal{L}=u_{1}(x)x\). The ray originates at the origin in the direction \(x\) in the interior medium, travels through the lens, gets refracted as it leaves the outer edge of the lens, and enters the exterior medium in the direction \(y\), see Figure 2. This setup allows us to define two quantities, \(n=n_{I}/n_{O}\) and \(\kappa=n_{O}/n_{I}\). We use the notation \(n\) both for \(n=n_{I}/n_{O}>1\) and for the refractive index in Section 2.1.1, because the PDE formulations will end up being the same. We use the notation \(\kappa>1\), since the PDE formulation for this setup is very different on a theoretical level. Figure 1. Light with radial intensity \(f(x)\) travels in the direction \(x\), enters the lens with refractive index \(n\) relative to the ambient space, whose inside edge is spherical and outer edge is given by the function \(\mathcal{L}=u_{1}(x)x\), then exits the lens in the direction \(y\), making a far-field intensity pattern \(g(y)\). Figure 2. Light with radial intensity \(f(x)\) travels in the direction \(x\) in a region of refractive index \(n_{I}\), enters the lens with refractive index \(n_{I}\), whose inside edge is spherical and outer edge is given by the function \(\mathcal{L}=u_{1}(x)x\), then exits the lens into a region of refractive index \(n_{O}\) in the direction \(y\), making a far-field intensity pattern \(g(y)\). 
We will refer to the setup in Section 2.1.1 and the setup with \(n=n_{I}/n_{O}>1\) as lens refractor problem I, and the setup with \(\kappa>1\) as lens refractor problem II. #### 2.1.3. Potential and Cost Functions From [12], the derivation of the PDE for the lens refractor problem I shows that the cost function is of the following form: \[c(x,y)=-\log\left(n-x\cdot y\right). \tag{1}\] Such a cost function may seem a bit unusual in Optimal Transport contexts, but it very closely resembles the cost function for the perhaps better-known reflector antenna problem, \(c(x,y)=-\log(1-x\cdot y)\), see [14, 15]. Some clear differences are, however, that the cost in (1) is Lipschitz, whereas the cost \(c(x,y)=-\log(1-x\cdot y)\) for the reflector antenna problem is not. Also, when the magnitude of the gradient of the potential function is zero, the mapping for (1) satisfies \(T(x)=x\), whereas for the reflector antenna we get \(T(x)=-x\). It should become clear as we analyze the lens problem in greater detail, then, that it does not make sense to take the limit as \(n\to 1\), since we cannot talk about the lens problem "converging" in any sense to the reflector problem. From [4], the derivation of the PDE for the lens refractor problem II shows that the cost function is of the following form: \[c(x,y)=\log\left(\kappa x\cdot y-1\right). \tag{2}\] We can immediately notice a potential cause of concern. Since \(\kappa>1\), it is possible on the sphere to have \(x\cdot y=1/\kappa\). That means that the cost function in Equation (2) is not Lipschitz. Moreover, there is an issue if, in our Optimal Transport formulation, the source and target densities require that mass transport further than a distance \(\arccos(1/\kappa)\). #### 2.1.4. PDE Formulation From the conservation of light intensity, for any subset \(A\subset\mathbb{S}^{2}\) where \(T(A)\subset\mathbb{S}^{2}\), we get \[\int_{A}f(x)dx=\int_{T(A)}g(y)dy. \tag{3}\] Solving for the mapping using the law of refraction (given the shape of the outer edge of the lens \(u_{1}(x)\)) and using the change of variables \(u(x)=\log u_{1}(x)\) for the lens refractor problem I and the change of variables \(u(x)=-\log u_{1}(x)\) for the lens refractor problem II, the following PDE is derived \[\det\left(D^{2}u(x)+D^{2}_{xx}c(x,y)|_{y=T(x)}\right)=\frac{\left|\det D^{2}_{xy}c(x,y)|_{y=T(x)}\right|\,f(x)}{g(T(x))}, \tag{4}\] where \[\nabla u(x)=-\nabla_{x}c(x,y)|_{y=T(x)}. \tag{5}\] ### A Simple Example Showing Ill-Posedness As will be shown in Section 3, we must have that the Optimal Transport mapping cannot move mass beyond a certain distance, i.e. \(d_{\mathbb{S}^{2}}(x,T(x))\leq\arccos(1/n)\). This puts a hard constraint on the allowable source and target masses. 
Here, we give an example of source and target intensities that do not satisfy the solvability condition. As depicted schematically in Figure 3, no single lens can take the mass in red to the mass in blue, no matter how large the refractive index is. Physically, it makes sense that this is unsolvable, since the maximum angle at which the light rays can be deflected by a lens is a right angle, obtained by taking the limit \(\lim_{n\to\infty}\arccos(1/n)=\pi/2\), and the mass in Figure 3 must transport more than halfway around the sphere. Explicit examples for a continuous mapping \(T\) which are unsolvable via a lens system can easily be furnished. Suppose we fix \(x_{0}\in\mathbb{S}^{2}\); then, for any \(x\in\mathbb{S}^{2}\), we define the source density: \[f(x)=\frac{1}{\beta}e^{-\frac{1}{2}\frac{d_{\mathbb{S}^{2}}(x,x_{0})^{2}}{\sigma^{2}}}, \tag{6}\] where \(\beta\) is the normalization parameter: \[\beta=2\pi\int_{0}^{\pi}e^{-\frac{1}{2}(\theta/\sigma)^{2}}\sin\theta d\theta, \tag{7}\] and a target density given by: \[g(x)=\frac{1}{\beta}e^{-\frac{1}{2}\frac{d_{\mathbb{S}^{2}}(x,-x_{0})^{2}}{\sigma^{2}}}. \tag{8}\] Figure 3. Schematic of source (red) and target (blue) distributions for which the Optimal Transport PDE for any lens refractor problem is ill-posed with the cost function (1). Choose \(\sigma\) to be very small, for example \(\sigma=0.01\). Then \(\beta\approx 0.0006283\). In order to solve the lens refractor problem, we must satisfy the conservation of energy equation (3). We choose the Borel set \(B_{x_{0}}(5\sigma)\) to be the open geodesic ball of radius \(5\sigma\) centered around \(x_{0}\). Then: \[\frac{2\pi}{\beta}\int_{0}^{0.05}e^{-\frac{1}{2}(\theta/\sigma)^{2}}\sin \theta d\theta\approx 0.999992. \tag{9}\] That is, \(99.9992\%\) of the source mass is contained within a radius \(0.05\) of the point \(x_{0}\). Likewise, \(99.9992\%\) of the target mass is contained within a radius \(0.05\) of the point \(-x_{0}\). Thus, the condition that the mapping be mass preserving requires that the mapping \(T\) satisfy: \[0.999992\approx\int_{B_{x_{0}}(0.05)}f(x)dx=\int_{T(B_{x_{0}}(0.05))}g(y)dy. \tag{10}\] By the fact that the mapping \(T\) is continuous, the non-empty open set \(B_{x_{0}}(0.05)\) is mapped to the non-empty open set \(T(B_{x_{0}}(0.05))\). Since only \(0.0008\%\) of the mass of \(g\) is located outside the ball \(B_{-x_{0}}(0.05)\), we must have that \(T(B_{x_{0}}(0.05))\cap B_{-x_{0}}(0.05)\neq\emptyset\) and the intersection is an open set. Thus, the continuous mapping \(T\) satisfies \(d_{\mathbb{S}^{2}}(x,T(x))\geq\pi-0.1\) on a non-negligible set and violates the fact that mass is not allowed to move too far. The result from [4] shows, furthermore, that we cannot even expect weak solutions if the mass distributions require mass to move too far. 
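The numerical claims above are easy to reproduce; the following minimal Python sketch evaluates the integrals in (7) and (9) on a fine grid (any quadrature routine would do equally well).

```python
import numpy as np

def trapezoid(y, x):
    # Simple trapezoidal rule, to avoid external dependencies.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

sigma = 0.01
theta = np.linspace(0.0, np.pi, 2_000_001)   # fine grid: the integrand is sharply peaked
w = np.exp(-0.5 * (theta / sigma) ** 2) * np.sin(theta)

beta = 2.0 * np.pi * trapezoid(w, theta)     # Eq. (7)
print(beta)                                  # ~6.28e-4, matching beta ~ 0.0006283

inside = theta <= 5.0 * sigma                # geodesic ball of radius 5*sigma
frac = trapezoid(w[inside], theta[inside]) / trapezoid(w, theta)
print(frac)                                  # ~0.99999..., cf. Eq. (9)
```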
## 3. Computations, Defective Cost Functions, Mixed Hessian Determinants, and Cost-Sectional Curvature ### Explicit Form of the Mapping for the Lens Refractor Problem #### 3.1.1. Computation of Mapping and Solvability Condition for the Lens Refractor Problem I We initiate our study by performing an explicit computation of the mapping in terms of the gradient of the potential function \(\nabla u\), which we label as \(p\in T_{x}\mathbb{S}^{2}\). This will help us build intuition for when the PDE (4) should be unsolvable. We solve for the mapping \(T=y\) via the equation \(\nabla_{\mathbb{S}^{2},x}c(x,y)=-p\). Let \(\hat{q}=x\times\hat{p}\). For \(c(x,y)=-\log\left(n-x\cdot y\right)\), we get, in local tangent coordinates \((\hat{p},\hat{q})\): \[\left(\left\|p\right\|,0\right)=\left(\frac{-y\cdot\hat{p}}{n-y\cdot x},\frac{ -y\cdot\hat{q}}{n-y\cdot x}\right). \tag{11}\] Since \(n-x\cdot y\neq 0\), we find that \(y\cdot\hat{q}=0\). Thus, along with the constraint \((y\cdot x)^{2}+(y\cdot\hat{p})^{2}=1\), we find the following expression for the mapping \(T\): \[T(x,p)=x\ \frac{\sqrt{1+(1-n^{2})\left\|p\right\|^{2}}+n\left\|p\right\|^{2}}{1+ \left\|p\right\|^{2}}+p\ \frac{\sqrt{1+(1-n^{2})\left\|p\right\|^{2}}-n}{1+\left\|p\right\|^{2}}. \tag{12}\] Note that \(p=0\implies T(x,0)=x\). Since \(u(x)=\log u_{1}(x)\), we have \(\nabla u_{1}(x)=u_{1}(x)\nabla u(x)\), so \(p=\nabla u(x)=0\) implies \(\nabla u_{1}(x)=0\), i.e., the lens is "flat" with respect to the canonical spherical metric. This, of course, makes sense physically, because where the lens is flat the light is not redirected but simply passes straight through. An important observation from Equation (12) is that \(\left\|p\right\|\) cannot exceed a certain threshold without the mapping becoming complex valued. This consequently puts a hard constraint on how far the mapping \(T(x)\) can move mass at a point \(x\). We designate the quantity \(p^{*}=1/\sqrt{n^{2}-1}\), denote any associated vector with magnitude \(p^{*}\) by \(p^{*}\hat{p}\), and note that at this value the square roots vanish in Equation (12). Therefore, \[T(x,p^{*}\hat{p})=\frac{n}{1+(p^{*})^{2}}\left(x(p^{*})^{2}-p^{*}\hat{p}\right) \tag{13}\] \[=\frac{x}{n}-\frac{\sqrt{n^{2}-1}}{n}\hat{p}. \tag{14}\] Thus, \[T(x,p^{*}\hat{p})\cdot x=\frac{1}{n}, \tag{15}\] and thus, denoting \(y=T\), we see that we need to impose the following constraint: \[x\cdot y\geq\frac{1}{n}. \tag{16}\] The requirement in Equation (16) has been noted in previous works, such as [4]. Clearly, if this condition is not satisfied, we cannot find an Optimal Transport mapping that solves the PDE (4). From the results in [4], we actually see that this solvability condition is both necessary and sufficient for the existence of solutions of the PDE (4) (in a weak sense). In Section 3.3, we will show how this solvability condition can be found directly from the cost function. #### 3.1.2. Computation of Mapping and Solvability Condition for the Lens Refractor Problem II For the cost function \(c(x,y)=\log\left(\kappa x\cdot y-1\right)\), we find the following expression for the mapping \(T\): \[T(x,p)=x\frac{\sqrt{\kappa^{2}+\left(\kappa^{2}-1\right)\left\|p\right\|^{2}} +\left\|p\right\|^{2}}{\kappa(1+\left\|p\right\|^{2})}+p\frac{\sqrt{\kappa^{2 }+\left(\kappa^{2}-1\right)\left\|p\right\|^{2}}-1}{\kappa(1+\left\|p\right\|^ {2})}. \tag{17}\] By taking the limit \(\left\|p\right\|\to 0\), we get \(T(x)=x\). By taking the limit \(\left\|p\right\|\rightarrow\infty\), we get \[\lim_{\left\|p\right\|\rightarrow\infty}T(x,p)=\frac{1}{\kappa}x+\hat{p}\frac{ \sqrt{\kappa^{2}-1}}{\kappa}, \tag{18}\] and thus \(x\cdot y=\frac{1}{\kappa}\). Thus, we see that there is the solvability condition: \[x\cdot y\geq\frac{1}{\kappa}. \tag{19}\]
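The formulas above are straightforward to verify numerically. The following minimal Python sketch evaluates the mapping (12) for lens refractor problem I, checks that its image lies on the unit sphere and satisfies the solvability bound (16), and reproduces the extreme value (15); the choice \(n=1.52\) and the particular vectors are just illustrative.

```python
import numpy as np

def lens_map_I(x, p, n):
    """Mapping T(x, p) of Eq. (12) for lens refractor problem I; x is a unit
    vector and p a tangent vector at x with ||p|| <= p* = 1/sqrt(n^2 - 1)."""
    s2 = float(np.dot(p, p))
    root = np.sqrt(1.0 + (1.0 - n ** 2) * s2)   # real only for ||p|| <= p*
    return x * (root + n * s2) / (1.0 + s2) + p * (root - n) / (1.0 + s2)

n = 1.52
x = np.array([0.0, 0.0, 1.0])
p_hat = np.array([1.0, 0.0, 0.0])               # a unit tangent vector at x

T = lens_map_I(x, 0.3 * p_hat, n)
print(np.linalg.norm(T))                        # 1.0: the image stays on the sphere
print(np.dot(x, T) >= 1.0 / n)                  # True: the solvability bound (16)

p_star = 1.0 / np.sqrt(n ** 2 - 1.0)
print(np.dot(x, lens_map_I(x, p_star * p_hat, n)))  # 1/n, as in Eq. (15)
```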
**Remark 1**.: _What we will show later in this section is that the solvability condition from Equation (16) and the solvability condition from Equation (19), while they look very similar, arise from two different properties of the cost function \(c(x,y)\), which we will encapsulate in the definition of defective cost function in Section 3.3._ _Both cost functions can be written as \(c(x,y)=G(d_{\mathbb{S}^{2}}(x,y))\). The condition from Equation (16) arises because \(G^{\prime\prime}(z)=0\) for \(z=\arccos(1/n)\). The condition from Equation (19) arises because \(G^{\prime}(z)=\infty\) for \(z=\arccos(1/\kappa)\)._ ### Cost Functions Leading to Exponential-Type Maps For the squared geodesic cost function \(c(x,y)=\frac{1}{2}d_{\mathbb{S}^{2}}(x,y)^{2}\), it is well known [11] that the mapping is of the form \(T(x,p)=\exp_{x}(p)\). For other cost functions, like the one presented in this paper, \(c(x,y)=-\log(n-x\cdot y)\), \(n>1\), is it true that we can express the map as \(T(x,p)=\exp_{x}(\hat{p}\beta(\left\|p\right\|))\)? This leads to the following definition: **Definition 2**.: _We will say that a cost function leads to a map of exponential type if the Optimal Transport mapping can be expressed in the following way:_ \[T(x)=\text{exp}_{x}\left(\frac{\nabla u(x)}{\left\|\nabla u(x)\right\|}\beta( \left\|\nabla u(x)\right\|)\right) \tag{20}\] _where \(u(x)\) is the potential function and \(\beta\) is a continuous function on its appropriate domain._ Since Equation (12) can be expressed in the following way: \[T(x,p)=xR_{1}(\left\|p\right\|)+pR_{2}(\left\|p\right\|), \tag{21}\] with \(R_{1}(\left\|p\right\|)^{2}+\left\|p\right\|^{2}R_{2}(\left\|p\right\|)^{2}=1\), we see that \(T(x,p)\) is on the great circle passing through \(x\) in the direction \(p\). By the formula in Equation (12), the length of the geodesic is \(\arccos R_{1}(\left\|p\right\|)=\arccos\left(\frac{\sqrt{1+(1-n^{2})\left\|p \right\|^{2}}+n\left\|p\right\|^{2}}{1+\left\|p\right\|^{2}}\right)=\beta(\left\|p\right\|)\). Thus, Equation (12) can be rewritten as: \[T(x,p)=\exp_{x}\left(\hat{p}\arccos\left(\frac{\sqrt{1+(1-n^{2})\left\|p \right\|^{2}}+n\left\|p\right\|^{2}}{1+\left\|p\right\|^{2}}\right)\right). \tag{22}\] For the values \(n=1.333\), \(n=1.52\), and \(n=2.417\), typical values for the refractive indices of water, glass, and diamond, the \(\beta\) functions are plotted in Figure 4. Here we note that the "worst" behavior we can expect from such functions is the appearance of a cusp, which is seen in the figure. Staying away from \(p^{*}\), the \(\beta\) functions are actually Lipschitz, see Lemma 7 below. For the cost function \(c(x,y)=\frac{1}{\kappa}\log(\kappa x\cdot y-1)\), we get the \(\beta\) function: \[\beta(\left\|p\right\|)=\arccos\left(\frac{\sqrt{\kappa^{2}+(\kappa^{2}-1) \left\|p\right\|^{2}}+\left\|p\right\|^{2}}{\kappa(1+\kappa^{2}\left\|p\right\| ^{2})}\right). \tag{23}\] For the values \(n=1.333\), \(n=1.52\), and \(n=2.417\), the \(\beta\) functions are plotted in Figure 5. The \(\beta\) functions are monotonically increasing, but flatten out as \(\left\|p\right\|\rightarrow\infty\). Thus, we say that \(c(x,y)=-\log(n-x\cdot y)\) and \(c(x,y)=\log(\kappa x\cdot y-1)\) are cost functions leading to maps of exponential type. The following theorem shows when this is possible and when \(\beta\) is monotone and satisfies \(\beta(0)=0\). **Theorem 3**.: _Suppose the cost function on the unit sphere can be written as a function of the dot product between \(x\) and \(y\), or the distance between \(x\) and \(y\), i.e. \(c(x,y)=F(x\cdot y)=G(d_{\mathbb{S}^{2}}(x,y))\), that \(F\) is differentiable on an open interval \((0,a)\) with \(F^{\prime}(\zeta)\neq 0\) for \(\zeta\in(0,a)\), and that \(G^{\prime\prime}(z)\) is strictly positive or strictly negative for \(z\) in an open interval \((0,b)\). 
Then, the Optimal Transport mapping arising from this cost function can be written as_ \[T(x,p)=\text{exp}_{x}\left(\hat{p}\beta(\left\|p\right\|)\right), \tag{24}\] _where \(\beta\) is a monotone function of its argument and \(\beta(0)=0\)._ Proof.: Suppose that the cost function is \(c(x,y)=F(x\cdot y)\). Then, the equation \(p=-\nabla_{x}c(x,y)\) implies that, for \(\hat{q}\in T_{x}\mathbb{S}^{2}\) with \(\hat{q}\cdot\hat{p}=0\), we have, in the tangent coordinates \((\hat{p},\hat{q})\), \((\left\|p\right\|,0)=(-F^{\prime}(x\cdot y)y\cdot\hat{p},-F^{\prime}(x\cdot y )y\cdot\hat{q})\). This immediately shows that \(0=y\cdot\hat{q}F^{\prime}(x\cdot y)\). Thus, if \(F^{\prime}(x\cdot y)\neq 0\), then \(y\cdot\hat{q}=0\); therefore, the mapping does not move in the \(\hat{q}\) direction, and \(T\) is of the form \(T(x,p)=R_{1}x+R_{2}p\), where \(\lim_{\left\|p\right\|\to 0}\left\|p\right\|R_{2}=0\). Therefore, \(R_{1}(0)=1\), since \(R_{1}^{2}+\left\|p\right\|^{2}R_{2}^{2}=1\). The geodesic distance will then be given by \(\arccos R_{1}\) (and thus \(\beta(0)=0\)), and thus also \(T\) can be written generally as: \[T(x,p)=\exp_{x}\left(\hat{p}\beta(\left\|p\right\|)\right), \tag{25}\] where \(\beta(\left\|p\right\|)=\arccos R_{1}(\left\|p\right\|)\). Since \(\left\|p\right\|=F^{\prime}(x\cdot y)y\cdot\hat{p}=G^{\prime}(d_{\mathbb{S}^{2 }}(x,y))\), we get that \(\beta\) and \(G^{\prime}\) are inverses of each other. Taking a derivative, we get \(\beta^{\prime}(\left\|p\right\|)=1/G^{\prime\prime}(d_{\mathbb{S}^{2}}(x,y))\). Therefore, if \(G^{\prime\prime}(z)\) is strictly positive, then \(\beta^{\prime}(\left\|p\right\|)\) is strictly positive; likewise for \(G^{\prime\prime}(z)\) being strictly negative. Figure 4. The function \(\beta\) plotted as a function of \(\left\|p\right\|\). Black indicates \(n=1.333\), approximately the refractive index of water, red indicates \(n=1.52\), approximately the refractive index of glass, and blue indicates \(n=2.417\), approximately the refractive index of diamond. As was hinted at in the proof of Theorem 3, the cost function being a function of the dot product directly leads to the cost function being a function of the geodesic distance between \(x\) and \(y\) on the unit sphere. This is because, for the unit sphere, \(x\cdot y=\cos\left(d_{\mathbb{S}^{2}}(x,y)\right)\), which means any cost function that can be written as a function of \(x\cdot y\) can be written as a function of \(d_{\mathbb{S}^{2}}(x,y)\) and vice versa. The same can be done on the (not necessarily unit) sphere in any dimension \(\mathbb{S}^{n-1}\), embedded in \(\mathbb{R}^{n}\), but cannot be done on any other manifold. Therefore, we write \(c(x,y)=F(x\cdot y)=F\left(\cos d_{\mathbb{S}^{2}}(x,y)\right)=G\left(d_{ \mathbb{S}^{2}}(x,y)\right)\). Figure 5. The function \(\beta\) plotted as a function of \(\left\|p\right\|\). Black indicates \(n=1.333\), red indicates \(n=1.52\), and blue indicates \(n=2.417\). The preceding discussion also yielded two important relations. The first comes from the definition of \(\beta\) and the exponential map, by denoting \(z=d_{\mathbb{S}^{2}}(x,y)\): \[z=\beta(\|p\|). \tag{26}\] The second comes from \(\|p\|=F^{\prime}(x\cdot y)y\cdot\hat{p}\). As mentioned in the proof of Theorem 3, replacing \(F(x\cdot y)=G(d_{\mathbb{S}^{2}}(x,y))\) and denoting \(z=d_{\mathbb{S}^{2}}(x,y)\), we derive: \[\|p\|=G^{\prime}(z). \tag{27}\] Thus, we see that the functions \(\beta\) and \(G^{\prime}\) are inverses of each other. 
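The inverse relation between \(\beta\) and \(G^{\prime}\) can be checked numerically for the lens cost (1). In the sketch below, \(G^{\prime}\) is taken in magnitude (the sign conventions above only fix orientation), and the composition \(\beta\circ G^{\prime}\) returns the identity on \([0,z^{*})\) up to floating-point error.

```python
import numpy as np

n = 1.52
z_star = np.arccos(1.0 / n)       # the distance at which G''(z) = 0

def G_prime(z):
    # |G'(z)| for G(z) = -log(n - cos z); by Eq. (27) this is ||p||
    return np.sin(z) / (n - np.cos(z))

def beta(p):
    # beta(||p||) from Eq. (22)
    s2 = p ** 2
    return np.arccos((np.sqrt(1.0 + (1.0 - n ** 2) * s2) + n * s2) / (1.0 + s2))

z = np.linspace(0.0, 0.99 * z_star, 50)
print(np.max(np.abs(beta(G_prime(z)) - z)))   # ~1e-8 or less: inverses on [0, z*)
```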
We saw that the cost functions for the far-field lens refractor problems lead to maps of exponential type. It should also be remarked here that the squared geodesic cost function \(c(x,y)=d_{\mathbb{S}^{2}}(x,y)^{2}=\arccos(x\cdot y)^{2}\) and the cost function arising from the reflector antenna, \(c(x,y)=-\log\|x-y\|=-\log(1-x\cdot y)\), lead to maps of exponential type, where \(\beta(\|p\|)\) is monotonically increasing and monotonically decreasing, respectively. For a real-world example of a cost function on the sphere which does not lead to a map of exponential type, we refer to the point-to-point reflector problem, whose setup and derivation are contained in [16]. The cost function is: \[c(x,y)=\log\left(\frac{1}{(T^{2}-L^{2})^{2}}-\frac{1-x\cdot y}{2(T^{2}-L^{2})( T-L(x\cdot\hat{e}))(T-L(y\cdot\hat{e}))}\right),\ \ x,y\in\mathbb{S}^{2}, \tag{28}\] where \(T,L\) are parameters arising in the optical setup satisfying \(T\geq L\); \(T\) is the length of the optical path and \(L\) is the distance between the points. There is a special direction in the problem, namely the direction from the source point to the target point, denoted by \(\hat{e}\). Due to the presence of the term \(x\cdot\hat{e}\) arising from this special direction \(\hat{e}\), the cost function cannot be written as simply a function of \(x\cdot y\). For this reason, in computing the mapping for this kind of "exotic" cost function, we observe that \(T\) has the more general form: \[T(x,p)=R_{1}x+R_{2}p+R_{3}q, \tag{29}\] where, w.l.o.g., \(q=x\times p\). The presence of the nonzero \(R_{3}\) term due to the cost (28) indicates that \(T(x,p)\) does not travel along the great circle defined by \(x\) and \(p\). Thus, we have an example of a cost function which leads to a mapping \(T\) which is not of exponential type. **Remark 4**.: _As was mentioned above, for the sphere, cost functions that are functions of the Riemannian distance between \(x\) and \(y\) can be written as functions of the dot product \(x\cdot y\), and this simplifies computations greatly. This is the only manifold where this is true. The natural extension, then, of costs leading to maps of exponential type to Euclidean space consists of cost functions \(c(x,y):\Omega\times\Omega^{\prime}\to\mathbb{R}\), where \(\Omega,\Omega^{\prime}\subset\mathbb{R}^{n}\), of the form \(c(x,y)=H(d(x,y))\), where \(d(x,y)=\|x-y\|\). We may write such cost functions as \(c(x,y)=H(d(x,y))=J(\frac{1}{2}d(x,y)^{2})\), and thus \(p=-\nabla_{x}c(x,y)\) results in the following equation:_ \[p=J^{\prime}\left(\frac{1}{2}d(x,y)^{2}\right)(y-x), \tag{30}\] _which indicates that \(y-x=\frac{p}{J^{\prime}}\), which implies that the mapping is along a geodesic from \(x\) in the direction of \(p\). The generalization, then, of the condition in Theorem 3 that \(F^{\prime}(\zeta)\neq 0\) is given in the Euclidean case by \(J^{\prime}(\zeta)\neq 0\). Furthermore, we can find an expression for \(\beta^{\prime}\left(\|p\|\right)\). From Equation (30), we get:_ \[\|p\|=J^{\prime}\left(\frac{1}{2}d(x,y)^{2}\right)d(x,y)=H^{\prime}(d(x,y)), \tag{31}\] _and thus, by letting \(z=d(x,y)\) and assuming that \(H^{\prime\prime}(z)\neq 0\), we can invert this and also show that:_ \[\beta^{\prime}(\|p\|)=\frac{1}{H^{\prime\prime}(z)}, \tag{32}\] _which is a result that holds in the case of the sphere as well, see Equation (37). 
This shows that the condition \(H^{\prime\prime}(z)\neq 0\) is also important in the Euclidean case._ _Furthermore, we have \(z=\beta(\|p\|)\) and \(\|p\|=H^{\prime}(z)\), so \(\beta\) and \(H^{\prime}\) are inverses._ **Remark 5**.: _The further generalization to Riemannian manifolds can be given as follows. If \(c(x,y)=H(d_{M}(x,y))=J(\frac{1}{2}d_{M}(x,y)^{2})\), then, since_ \[p=-\nabla_{x}c(x,y)=-J^{\prime}\left(\frac{1}{2}d_{M}(x,y)^{2}\right)\nabla_{ x}\left(\frac{1}{2}d_{M}(x,y)^{2}\right), \tag{33}\] _we get:_ \[\frac{p}{J^{\prime}\left(\frac{1}{2}d_{M}(x,y)^{2}\right)}=-\nabla_{x}\left( \frac{1}{2}d_{M}(x,y)^{2}\right), \tag{34}\] _and thus, by McCann [11], we get_ \[y=\text{exp}_{x}\left(\frac{p}{J^{\prime}\left(\frac{1}{2}d_{M}(x,y)^{2} \right)}\right). \tag{35}\] _Thus, we see generally that if the cost function can be written as a function of the Riemannian distance on a manifold \(M\), we get maps of exponential type. The generalization, then, of the condition in Theorem 3 that \(F^{\prime}(\zeta)\neq 0\) is given in the Riemannian case by \(J^{\prime}(\zeta)\neq 0\). Since the distance from \(x\) to \(y\) is given by the magnitude of the tangent vector in the argument of the exponential map, we can, as in the Euclidean case, derive the relation:_ \[\|p\|=J^{\prime}\left(\frac{1}{2}d_{M}(x,y)^{2}\right)d_{M}(x,y)=H^{\prime}(d_ {M}(x,y)), \tag{36}\] _and thus, assuming \(H^{\prime\prime}(z)\neq 0\), we derive Equation (32) for the Riemannian manifold case. As in the Euclidean case, we have \(z=\beta(\|p\|)\) and \(\|p\|=H^{\prime}(z)\), so \(\beta\) and \(H^{\prime}\) are inverses of each other._ ### Defective Cost Functions For the squared geodesic cost function, \(c(x,y)=\frac{1}{2}d_{\mathbb{S}^{2}}(x,y)^{2}\), the map \(\mathfrak{G}:T_{x}\mathbb{S}^{2}\to\mathbb{S}^{2}\) given by \(p\mapsto T(x,p)=\exp_{x}(p)\) satisfies \(\mathfrak{G}\left(\left\{p\in T_{x}\mathbb{S}^{2}:\|p\|\leq\pi\right\}\right)= \mathbb{S}^{2}\). Likewise, for the reflector antenna problem, with cost \(c(x,y)=-2\log(1-x\cdot y)\), the map \(\mathfrak{G}:T_{x}\mathbb{S}^{2}\to\mathbb{S}^{2}\) given by \(p\mapsto T(x,p)=x\frac{\|p\|^{2}-1}{\|p\|^{2}+1}-p\frac{2}{\|p\|^{2}+1}\), as derived in [6], satisfies \(\mathfrak{G}(T_{x}\mathbb{S}^{2})=\mathbb{S}^{2}\). The map \(\mathfrak{G}:T_{x}\mathbb{S}^{2}\to\mathbb{S}^{2}\) given by \(p\mapsto T(x,p)\), where \(T(x,p)\) is given in Equation (12) or Equation (17), cannot map over the whole unit sphere for any \(x\). We desire to find a condition on the cost function that explains this, which leads to our definition of a defective cost function. **Definition 6**.: _A cost function \(c(x,y):\mathbb{S}^{2}\times\mathbb{S}^{2}\to\mathbb{R}\), which can be written \(c(x,y)=F(x\cdot y)=G(d_{\mathbb{S}^{2}}(x,y))\), satisfying \(F^{\prime}(\zeta)\neq 0\) on an open interval \((0,a)\), is defective if \(G^{\prime\prime}(z)=0\) or \(G^{\prime}(z)=\infty\) for some \(z\in(0,\pi)\), and \(G^{\prime\prime}(z)\) is either positive or negative for \(z\in[0,z^{*})\), where \(z^{*}\in(0,\pi)\) is the smallest value for which \(G^{\prime\prime}(z)=0\) or \(G^{\prime}(z)=\infty\). If necessary in the discussion, we will refer to cost functions for which \(G^{\prime\prime}(z^{*})=0\) as defective cost functions of the first type, and cost functions for which \(G^{\prime}(z^{*})=\infty\) as defective cost functions of the second type. 
We also denote \(p^{*}\) as the value \(p^{*}=G^{\prime}(z^{*})\)._ Defective cost functions lead to maps of exponential type provided that the distance transported is less than \(z^{*}\). By the definition, defective cost functions of the first type are those for which the concavity of the cost function, as a function of the Riemannian distance on \(\mathbb{S}^{2}\), changes for some \(z\in(0,\pi)\), and defective cost functions of the second type are those for which \(G^{\prime}(z)\) diverges to infinity as \(z\to z^{*}\) (equivalently, as \(\|p\|\rightarrow\infty\)). This implies, by differentiating, that \(G^{\prime\prime}(z^{*})=\infty\). Then, the equations \(G^{\prime}(z)=\|p\|\) and \(G^{\prime\prime}(z)=1/\beta^{\prime}(\|p\|)\) imply that, since \(G^{\prime\prime}(z)\neq 0\), we can invert \(\|p\|=G^{\prime}(z)\) to get \(z=\beta(\|p\|)\), and thus \(\lim_{\|p\|\to\infty}\beta(\|p\|)=z^{*}\) and \(\lim_{\|p\|\to\infty}\beta^{\prime}(\|p\|)=0\). So, we see that defective cost functions \(c(x,y)=G(z)\) restricted to the domain \(z\in[0,z^{*})\) are "well-behaved", but do not allow for mass to be transported beyond a distance \(z^{*}\). The cost functions \(c(x,y)=d_{\mathbb{S}^{2}}(x,y)^{2}\) and \(c(x,y)=-\log(1-x\cdot y)\) are not defective costs, whereas \(c(x,y)=-\log(n-x\cdot y)\) is a defective cost function of the first type because \(G^{\prime\prime}(z^{*})=0\) for \(z^{*}=\arccos(1/n)\). The cost function \(c(x,y)=\frac{1}{\kappa}\log(\kappa x\cdot y-1)\) is a defective cost function of the second type, because \(G^{\prime\prime}(\arccos(1/\kappa))=\infty\). For the cost \(c(x,y)=-\log(n-x\cdot y)=-\log(n-\cos(z))\), we get that \(G^{\prime\prime}(z)=0\) when \(z=\arccos(1/n)\). From Equation (16), we see that this value of \(z\) is exactly the farthest the mapping may transport mass while staying real-valued. For the cost function \(c(x,y)=-\log(n-x\cdot y)+\log(n+1)\), we have plotted the cost function as a function of geodesic distance and denoted the inflection points \(G^{\prime\prime}(z)=0\) with circles, see Figure 6. A defective cost function automatically fails to satisfy, for each \(x\in\mathbb{S}^{2}\), the injectivity of the map \(y\to-\nabla_{x}c(x,y)\) over the set \(\mathbb{S}^{2}\setminus\{-x\}\), which is one of the MTW conditions presented in [9]. It should be noted also that the cost functions in Theorem 4.1 of [9] lead to maps of exponential type but are not defective. For \(\|p\|\leq c<p^{*}\), we also have the important fact that the \(\beta\) functions, defined in Definition 2, are Lipschitz. **Lemma 7**.: _Suppose that there exists a \(z^{*}\) such that \(G^{\prime\prime}(z^{*})=0\), and denote by \(p^{*}\) the value such that \(\beta(p^{*})=z^{*}\). If the \(\beta\) function arises from a cost function \(F(x\cdot y)\), where \(F^{\prime}(\zeta)\neq 0\), and the cost function satisfies the MTW conditions, then \(\beta\) is Lipschitz on the interval \([0,c]\), where \(c<p^{*}\)._ Proof.: Let \(z=d_{\mathbb{S}^{2}}(x,y)\). From (27), we get \(\|p\|=G^{\prime}(z)\). Also, \(z=\beta(\|p\|)\) and therefore \(\beta^{-1}(z)=\|p\|\). Therefore, taking a derivative of (27) with respect to \(z\), we get: \[G^{\prime\prime}(z)=\frac{d}{dz}\beta^{-1}(z)=\frac{1}{\beta^{\prime}(\|p\|)}, \tag{37}\] and therefore \(\beta^{\prime}(\|p\|)=1/G^{\prime\prime}(z)\). So, for \(z<z^{*}\), we see that the derivative of \(\beta\) is bounded. The important relation in Equation (37) explains why the \(\beta\) function for the lens refractor problem I has a cusp and why the \(\beta\) function for the lens refractor problem II flattens out. 
Therefore, if we have a \(\beta\) function with a cusp, then it arises from a defective cost function of the first type, and if the \(\beta\) function limits to a horizontal asymptote, where the asymptote is at a value strictly less than \(\pi\), then it arises from a defective cost function of the second type. Both are types of cost functions which do not allow the mass to be transported a distance beyond a fixed value strictly less than \(\pi\).

Figure 6. Cost function \(c(x,y)=-\log(n-\cos(z))+\log(n+1)\) plotted as a function of \(z\). Black indicates \(n=1.333\), approximately the refractive index of water, red indicates \(n=1.52\), approximately the refractive index of glass, and blue indicates \(n=2.417\), approximately the refractive index of diamond. Below the inflection points, the cost functions are concave.

**Remark 8**.: _It should be clear how to extend the definition of a defective cost function in Euclidean space. If we have a cost function \(c(x,y)=H(\left\|x-y\right\|)=J\left(\frac{1}{2}\left\|x-y\right\|^{2}\right)\), then denote \(z^{*}\) to be the smallest value for which \(H^{\prime\prime}(z)=0\) or \(H^{\prime}(z)=\infty\). Then, let \(J\) satisfy \(J^{\prime}(\zeta)\neq 0\) for \(\sqrt{2\zeta}<z^{*}\) and let \(H^{\prime\prime}(z)\) be either positive or negative for \(z\in[0,z^{*})\). We call such a \(c\) a defective cost function._

**Remark 9**.: _In the case of more general Riemannian manifolds, if we have a cost function \(c(x,y)=H(d_{M}(x,y))=J\left(\frac{1}{2}d_{M}(x,y)^{2}\right)\), then denote \(z^{*}\) to be the smallest value for which \(H^{\prime\prime}(z)=0\) or \(H^{\prime}(z)=\infty\). Then, let \(J\) satisfy \(J^{\prime}(\zeta)\neq 0\) for \(\sqrt{2\zeta}<z^{*}\) and let \(H^{\prime\prime}(z)\) be either positive or negative for \(z\in[0,z^{*})\). We call such a \(c\) a defective cost function. Note that in this case, \(z^{*}\) will necessarily be less than the injectivity radius._

### Derivation of the Mixed Hessian Term \(\left|D_{xy}^{2}c(x,y)|_{y=T(x)}\right|\)

The mixed Hessian term \(\left|D_{xy}^{2}c(x,y)|_{y=T(x)}\right|\) is important to compute to check the MTW conditions as well as for numerical discretizations. See, for example, the paper [5] for an example of a numerical discretization that uses the explicit form of the mixed Hessian. In this subsection, we derive various expressions for computing the mixed Hessian for cost functions leading to maps of exponential type. The expressions we derive, which are Equations (47), (51) and (55), are for mappings of exponential type that arise from cost functions depending solely on terms involving the dot product \(x\cdot y\) (see Section 3.2); however, the derivation can be easily generalized to other, more general, cost functions also arising in optics applications, and this will be explored in future work. Here we derive a formula for the mixed Hessian via an integral formulation. Using the notation established in Equation (21), define \(R(p)=pR_{2}(\left\|p\right\|)\). Then, for a region \(E\subset T_{x}\mathbb{S}^{2}\) define \(U=R(E)=\left\{pR_{2}(\left\|p\right\|)|p\in E\right\}\) and unit vectors \(\hat{u}_{1},\hat{u}_{2}\), where \(\hat{u}_{1}=\hat{p}\) and \(\hat{u}_{2}=\hat{p}\times x\). For any vector \(v\in T_{x}\mathbb{S}^{2}\), this defines a coordinate system on \(T_{x}\mathbb{S}^{2}\), i.e. \(v=(u_{1},u_{2})\), where \(u_{1}=v\cdot\hat{u}_{1}\) and \(u_{2}=v\cdot\hat{u}_{2}\).
Then, define the region \(T(x,E)=\left\{x\sqrt{1-u_{1}^{2}-u_{2}^{2}}+u_{1}\hat{u}_{1}+u_{2}\hat{u}_{2}\,\middle|\,(u_{1},u_{2})\in U\right\}\subset\mathbb{S}^{2}\), see Figure 7. Since the mixed Hessian satisfies the formula \[\left|\det D_{xy}^{2}c\right|=\frac{1}{\left|\det D_{p}T\right|}, \tag{38}\] we see that if we compute the quantity \(\left|\det D_{p}T\right|\), then we can compute the determinant of the mixed Hessian. The quantity \(\left|\det D_{p}T\right|\) can be computed by computing the area of the region \(T(x,E)\). From Figure 7, we see: \[\int_{T(x,E)}dS=\int_{E}\left|\det D_{p}T\right|dp. \tag{39}\] Also, \[\int_{T(x,E)}dS=\int_{U}\frac{1}{\sqrt{1-\left\|u\right\|^{2}}}du, \tag{40}\] \[=\int_{E}\frac{1}{\sqrt{1-\left|R(p)\right|^{2}}}\left|\det\nabla R(p)\right|dp. \tag{41}\] And therefore, by Equations (38), (39) and (41), we get the following expression for the mixed Hessian: \[\left|\det D_{xy}^{2}c\right|=\frac{\sqrt{1-\left|R(p)\right|^{2}}}{\left|\det\nabla R(p)\right|}. \tag{42}\]

Figure 7. Change in area formula from tangent coordinates \(p\) to coordinates on the sphere \(T(x,p)=xR_{1}(\left\|p\right\|)+pR_{2}(\left\|p\right\|)\) via the coordinates \((u_{1},u_{2})\) of the orthogonal projection of \(T(x,p)\) onto the tangent plane \(T_{x}\mathbb{S}^{2}\).

Since \(R(p)=pR_{2}(\left\|p\right\|)\), we compute: \[(\nabla R)_{11}=\frac{p_{1}^{2}}{\|p\|}R_{2}^{\prime}(\|p\|)+R_{2}(\|p\|), \tag{43}\] \[(\nabla R)_{12}=(\nabla R)_{21}=\frac{p_{1}p_{2}}{\|p\|}R_{2}^{\prime}(\|p\|), \tag{44}\] \[(\nabla R)_{22}=\frac{p_{2}^{2}}{\|p\|}R_{2}^{\prime}(\|p\|)+R_{2}(\|p\|), \tag{45}\] and thus, \[\det{(\nabla R)}=\|p\|\,R_{2}(\|p\|)R_{2}^{\prime}(\|p\|)+R_{2}(\|p\|)^{2}, \tag{46}\] and hence \[\left|\det{D_{xy}^{2}c}\right|=\frac{\sqrt{1-\left\|p\right\|^{2}R_{2}(\|p\|)^{2}}}{\left|\|p\|\,R_{2}(\|p\|)R_{2}^{\prime}(\|p\|)+R_{2}(\|p\|)^{2}\right|}. \tag{47}\] We find that there is a more convenient expression for the determinant of the mixed Hessian in terms of \(R_{1}(\|p\|)\). First, we use the fact that: \[R_{1}(\|p\|)^{2}+\left\|p\right\|^{2}R_{2}(\|p\|)^{2}=1, \tag{48}\] and thus, we get \(|R_{1}(\|p\|)|=\sqrt{1-\left\|p\right\|^{2}R_{2}(\|p\|)^{2}}\) and, also, \[R_{1}(\|p\|)R_{1}^{\prime}(\|p\|)+\|p\|\,R_{2}(\|p\|)^{2}+\left\|p\right\|^{2}R_{2}(\|p\|)R_{2}^{\prime}(\|p\|)=0, \tag{49}\] and thus, \[\left|\|p\|\,R_{2}(\|p\|)R_{2}^{\prime}(\|p\|)+R_{2}(\|p\|)^{2}\right|=\frac{|R_{1}(\|p\|)R_{1}^{\prime}(\|p\|)|}{\|p\|}. \tag{50}\] Thus, in terms of \(R_{1}\) we get the very simple expression: \[\left|\det{D_{xy}^{2}c}\right|=\frac{\|p\|}{|R_{1}^{\prime}(\|p\|)|}. \tag{51}\] Using Equation (47), we derive a formula for the mixed Hessian term in terms of the function \(\beta\), \(F^{\prime}(x\cdot y)\), and \(G^{\prime\prime}(d_{\mathbb{S}^{2}}(x,y))\). From the definition \(T(x,p)=xR_{1}(\|p\|)+pR_{2}(\|p\|)\) and the fact that \(\beta(\|p\|)=\arccos(R_{1})\), we get that \(\beta(\|p\|)=\arcsin(\|p\|\,R_{2}(\|p\|))\). Therefore, since \[\beta^{\prime}(\|p\|)=\frac{R_{2}(\|p\|)+\|p\|\,R_{2}^{\prime}(\|p\|)}{\sqrt{1-\left\|p\right\|^{2}R_{2}(\|p\|)^{2}}}, \tag{52}\] we get, \[\left|\beta^{\prime}(\|p\|)\right|=\frac{1}{|R_{2}(\|p\|)|}\left|D_{p}T\right|, \tag{53}\] from (47).
Therefore, \[\left|D_{p}T\right|=\left|R_{2}(\left\|p\right\|)\right|\left|\beta^{\prime}(\left\|p\right\|)\right|=\frac{\sin\beta(\left\|p\right\|)}{\left\|p\right\|}\left|\beta^{\prime}(\left\|p\right\|)\right|, \tag{54}\] and thus, we have the following expression for the mixed Hessian in terms of the \(\beta\) function: \[\left|D_{xy}^{2}c(x,y)\right|=\frac{\left\|p\right\|}{\sin\left(\beta(\left\|p\right\|)\right)\left|\beta^{\prime}(\left\|p\right\|)\right|}. \tag{55}\] This expression, while cumbersome to compute by hand, offers a much simpler route for checking the MTW conditions for all defective cost functions. Provided that \(z\in[0,z^{*})\), the mixed Hessian is strictly positive.

**Lemma 10**.: _For a defective cost function \(c(x,y)\), when \(z\) satisfies \(z<z^{*}\), we have_ \[\left|D_{xy}^{2}c(x,y)\right|>0. \tag{56}\]

Proof.: Let \(z=d_{\mathbb{S}^{2}}(x,y)\). Then, we derive the important equality: \[\left|D_{xy}^{2}c(x,y)\right|=\frac{\left\|p\right\|}{\sin\left(\beta(\left\|p\right\|)\right)\left|\beta^{\prime}(\left\|p\right\|)\right|}=\frac{\left|G^{\prime\prime}(z)\right|}{\left|R_{2}(\left\|p\right\|)\right|}=\left|G^{\prime\prime}(z)\right|\left|F^{\prime}(x\cdot y)\right|, \tag{57}\] since \(y\cdot\hat{p}=\frac{\left\|p\right\|}{F^{\prime}(x\cdot y)}=\left\|p\right\|R_{2}\left(\left\|p\right\|\right)\), and therefore \(R_{2}(\left\|p\right\|)=\frac{1}{F^{\prime}(x\cdot y)}\). Since \(\frac{1}{\beta^{\prime}(\left\|p\right\|)}=G^{\prime\prime}(z)\neq 0\) for \(z\in[0,z^{*})\), and \(F^{\prime}(x\cdot y)\neq 0\), we get that \(\left|D_{xy}^{2}c(x,y)\right|>0\).

**Remark 11**.: _A similar computation can be done in the Euclidean case for defective cost functions; see Remark 4 for the background and assumptions, i.e. \(J^{\prime}(\zeta)\neq 0\) and \(H^{\prime\prime}(z)\neq 0\). It can be shown that in \(\mathbb{R}^{d}\), for such cost functions, we get:_ \[\left|D_{p}T\right|=\beta^{\prime}(\left\|p\right\|)\left(\frac{\beta(\left\|p\right\|)}{\left\|p\right\|}\right)^{d-1}. \tag{58}\] _Let \(\zeta=\frac{1}{2}\left\|x-y\right\|^{2}\). By Equation (31) we get \(J^{\prime}(\zeta)z=\left\|p\right\|\), which implies \(J^{\prime}(\zeta)=\left\|p\right\|/\beta(\left\|p\right\|)\). Using Equation (32), we get:_ \[\left|D_{xy}^{2}c(x,y)\right|=\frac{1}{\beta^{\prime}(\left\|p\right\|)}\left(\frac{\left\|p\right\|}{\beta(\left\|p\right\|)}\right)^{d-1}=\left|H^{\prime\prime}(z)\right|\left|J^{\prime}(\zeta)\right|^{d-1}, \tag{59}\] _which is the natural equivalent of Equation (57) and shows that the mixed Hessian term is nonzero for cost functions in Euclidean space such that \(J^{\prime}(\zeta)\neq 0\) and \(H^{\prime\prime}(z)\neq 0\)._
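As a sanity check on the equality in Equation (57), the following Python sketch evaluates both the \(\beta\)-form (55) and the product \(|G^{\prime\prime}(z)||F^{\prime}(x\cdot y)|\) for the lens refractor problem I cost, using \(\beta(\|p\|)=z\), \(\beta^{\prime}=1/G^{\prime\prime}\) from Equation (37), and \(\|p\|=|G^{\prime}(z)|\); the value of \(n\) is an illustrative assumption.

```python
import numpy as np

n = 1.52
z = np.linspace(0.05, np.arccos(1/n) - 0.05, 9)   # stay below z* = arccos(1/n)

# Lens refractor I: G(z) = -log(n - cos z), F(zeta) = -log(n - zeta).
Gpp   = (1 - n*np.cos(z)) / (n - np.cos(z))**2     # G''(z)
Fp    = 1.0 / (n - np.cos(z))                      # F'(x.y), with x.y = cos z
p     = np.abs(-np.sin(z) / (n - np.cos(z)))       # ||p|| = |G'(z)|
betap = 1.0 / Gpp                                  # beta'(||p||) = 1/G''(z), Eq. (37)

lhs = p / (np.sin(z) * np.abs(betap))              # Eq. (55), using beta(||p||) = z
rhs = np.abs(Gpp) * np.abs(Fp)                     # Eq. (57)
print(np.max(np.abs(lhs - rhs)))                   # ~ 0 up to rounding
```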
**Remark 12**.: _For the more general surface, the formula is more difficult to obtain. This is due to the fact that the curvature is non-constant. The general form can be found to be:_ \[\left|D_{xy}^{2}c(x,y)\right|=\frac{\left\|p\right\|}{\mathfrak{S}_{x,\hat{p}}(\beta(\left\|p\right\|))\left|\beta^{\prime}(\left\|p\right\|)\right|}, \tag{60}\] _where the function \(\mathfrak{S}_{x,\hat{p}}\) depends on the point \(x\) and \(\hat{p}\) and can be obtained by solving the Jacobi field along a path \(\gamma\), where \(\gamma(0)=x\) and \(\gamma^{\prime}(0)=\hat{p}\); it measures the amount that the exponential map from the point \(x\) in the direction \(\hat{p}\) infinitesimally "spreads" out, and it depends on the curvature of the surface. Note that the function \(\mathfrak{S}_{x,\hat{p}}\) is zero at \(x\), i.e. \(\mathfrak{S}_{x,\hat{p}}(0)=0\), and it vanishes again at the first conjugate point \(x_{0}\). Let \(z=d_{M}(x,y)\) and \(\zeta=\frac{1}{2}d_{M}(x,y)^{2}\). Since \(J^{\prime}(\zeta)=\left\|p\right\|/\beta(\left\|p\right\|)\), we see that the required condition for the mixed Hessian to be nonzero is not \(J^{\prime}(\zeta)\neq 0\) and \(H^{\prime\prime}(z)\neq 0\), but instead \(J^{\prime}(\zeta)\neq 0\) and the more complicated condition:_ \[\frac{H^{\prime\prime}(z)z}{\mathfrak{S}_{x,\hat{p}}(z)}\neq 0. \tag{61}\] _In the Euclidean case, \(\mathfrak{S}_{x,\hat{p}}(z)=z\), so this reduces to the condition \(H^{\prime\prime}(z)\neq 0\) that we encountered in Remark 11._

**Remark 13**.: _Thus, we see that for the sphere, the conditions \(G^{\prime\prime}(z)\neq 0\) and \(F^{\prime}(\zeta)\neq 0\) are sufficient to guarantee that the mixed Hessian term is non-zero. For Euclidean space, the conditions \(H^{\prime\prime}(z)\neq 0\) and \(J^{\prime}(\zeta)\neq 0\) are sufficient. For more general surfaces, the additional condition in Equation (61) is required. The condition \(H^{\prime\prime}(z)\neq 0\) is also the condition needed for the \(\beta\) function to be Lipschitz._

### Cost-Sectional Curvature for Defective Cost Functions

In this subsection, we present simple general formulas for checking the positivity of the cost-sectional curvature for cost functions of the type \(c(x,y)=F(x\cdot y)\) on the sphere and \(c(x,y)=J\left(\frac{1}{2}\left\|x-y\right\|^{2}\right)\) on subsets of Euclidean space. These formulas, interestingly, depend on just the first- and second-order derivatives of \(F\) and \(J\), respectively, even though the cost-sectional curvature tensor requires taking four derivatives. We will then use the formulas we derive to check the cost-sectional curvature for various cost functions, including the cost from the lens refractor problem I, \(c(x,y)=-\log(n-x\cdot y)\), and the cost from the lens refractor problem II, \(c(x,y)=\log(\kappa x\cdot y-1)\). We will confirm that the cost function for the lens refractor problem I does not satisfy the positive cost-sectional curvature condition, but the cost function for the lens refractor problem II does, as was found in [4]. However, we emphasize that the formulas we derive will be valid for any defective cost function. The cost-sectional curvature condition comes from defining the following fourth-order tensor, which was first defined in [10], but we will use the formula from [8]. Note: there is a minus sign in front of the cost function, and the derivatives with respect to \(x\) need to be taken with respect to the metric on either \(\mathbb{S}^{2}\) or \(\mathbb{R}^{d}\), respectively. Here, we define the cost-sectional curvature tensor:

**Definition 14**.: _On the domain where the map \(y\mapsto-\nabla_{x}c(x,y)\) is injective, \(\left|D^{2}_{xy}c(x,y)\right|\neq 0\) and \(c(x,y)\in C^{4}\), we define the map_ \[\mathfrak{G}_{c}(\xi,\eta)=D^{4}_{p_{k}p_{l}x_{i}x_{j}}\left[-c(x,y)|_{y=T(x,p)}\right]\xi_{i}\xi_{j}\eta_{k}\eta_{l}. \tag{62}\]

An important condition to check for the purposes of regularity theory is positive cost-sectional curvature, usually known as the Aw condition, i.e., that on an appropriate domain, for all \(x\in M\), \(y\in M\), \(\xi\in T_{x}M\), \(\eta\in T_{x}M\) with \(\xi\perp\eta\): \[\mathfrak{G}_{c}(\xi,\eta)\geq 0. \tag{63}\]
The Aw condition is an important condition to check such that smooth source and target mass density functions which are bounded away from zero and infinity lead to Optimal Transport mappings and potential functions that are smooth as well. Without the Aw condition, it is possible to have smooth source and target mass density functions which are bounded away from zero and infinity and yet have an Optimal Transport mapping that is not even continuous; see [8] and also [3]. A more stringent condition is known as the As condition, which is often the condition used in deriving regularity results, as in [9]. It requires that there exist a constant \(C_{0}>0\) such that \[\mathfrak{G}_{c}(\xi,\eta)\geq C_{0}\left|\xi\right|^{2}\left|\eta\right|^{2}. \tag{64}\]

#### 3.5.1. Computation of the cost-sectional curvature for \(c(x,y)=F(x\cdot y)\) on the sphere \(\mathbb{S}^{2}\)

We compute the \(3\times 3\) matrix Hessian for the cost function \(c(x,y)=F(x\cdot y)\) using Euclidean derivatives, since our metric on the sphere is induced by the surrounding Euclidean space. Given a function \(f:\mathbb{R}^{3}\rightarrow\mathbb{R}\), we compute the Hessian as follows: \[\nabla^{2}_{xx}f(x)=D^{2}_{xx}f(x)-(D_{x}f(x)\cdot x)\text{Id}, \tag{65}\] where \(D\) are the standard Euclidean (ambient) derivatives. We will thus compute the term \(\nabla^{2}_{xx}c(x,y)|_{y=T}\). We get: \[D_{x}F(x\cdot y)\cdot x=F^{\prime}(x\cdot y)x\cdot y=-\frac{\|p\|}{\sin\beta(\|p\|)}\cos\beta(\|p\|)=-\|p\|\cot\beta(\|p\|). \tag{66}\] We also compute: \[\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}F(x\cdot y)=F^{\prime\prime}(x\cdot y)y_{i}y_{j}. \tag{67}\] Thus, we get: \[\nabla^{2}_{xx}F(x\cdot y)=F^{\prime\prime}(x\cdot y)\begin{pmatrix}y_{1}^{2}&y_{1}y_{2}&y_{1}y_{3}\\ y_{1}y_{2}&y_{2}^{2}&y_{2}y_{3}\\ y_{1}y_{3}&y_{2}y_{3}&y_{3}^{2}\end{pmatrix}-F^{\prime}(x\cdot y)(x\cdot y)\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}. \tag{68}\] Recall that \(y_{i}=x_{i}(x\cdot y)+\frac{p_{i}}{F^{\prime}(x\cdot y)}\). This Hessian can be entirely expressed as a function of \(x,p\); however, for simplicity, we express it in terms of \(x,\zeta,p\), where \(\zeta=x\cdot y=\cos\beta(\|p\|)\): \[\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}=F^{\prime\prime}(\zeta)\left(x_{k}\zeta+\frac{p_{k}}{F^{\prime}(\zeta)}\right)\left(x_{l}\zeta+\frac{p_{l}}{F^{\prime}(\zeta)}\right)-\zeta F^{\prime}(\zeta)\delta_{kl}. \tag{69}\] Since \(\zeta\) and \(p\) can be related via \(\zeta=\cos\beta(\|p\|)\), we relabel Equation (69) by \(f_{1}(p)=F^{\prime\prime}(\zeta)\zeta^{2}\), \(f_{2}(p)=F^{\prime\prime}(\zeta)/(F^{\prime}(\zeta))^{2}\), \(f_{3}(p)=F^{\prime\prime}(\zeta)\zeta/F^{\prime}(\zeta)\), and \(f_{4}(p)=-\zeta F^{\prime}(\zeta)\) and get \[\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}=f_{1}(p)x_{k}x_{l}+f_{2}(p)p_{k}p_{l}+f_{3}(p)(x_{k}p_{l}+x_{l}p_{k})+f_{4}(p)\delta_{kl}. \tag{70}\] With this simplification of notation, we proceed and compute: \[D_{p_{i}}\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}=D_{p_{i}}f_{1}(p)x_{k}x_{l}+D_{p_{i}}f_{2}(p)p_{k}p_{l}+f_{2}(p)(\delta_{ik}p_{l}+\delta_{il}p_{k})+D_{p_{i}}f_{3}(p)(x_{k}p_{l}+x_{l}p_{k})+f_{3}(p)(x_{k}\delta_{il}+x_{l}\delta_{ik})+D_{p_{i}}f_{4}(p)\delta_{kl}. \tag{71}\]
Taking another derivative, we get: \[D_{p_{i}p_{j}}\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}=D_{p_{i}p_{j}}f_{1}(p)x_{k}x_{l}+D_{p_{i}p_{j}}f_{2}(p)p_{k}p_{l}+D_{p_{i}}f_{2}(p)(\delta_{jk}p_{l}+\delta_{jl}p_{k})+\\ D_{p_{j}}f_{2}(p)(\delta_{ik}p_{l}+\delta_{il}p_{k})+f_{2}(p)(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})+D_{p_{i}p_{j}}f_{3}(p)(x_{k}p_{l}+x_{l}p_{k})+\\ D_{p_{i}}f_{3}(p)(x_{k}\delta_{jl}+x_{l}\delta_{jk})+D_{p_{j}}f_{3}(p)(x_{k}\delta_{il}+x_{l}\delta_{ik})+D_{p_{i}p_{j}}f_{4}(p)\delta_{kl}. \tag{72}\] Now, we compute the following: \[\sum_{i,j,k,l}D_{p_{i}p_{j}}\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}\xi_{i}\xi_{j}\eta_{k}\eta_{l}=\left\langle D^{2}_{pp}f_{1}(p)\xi,\xi\right\rangle(x\cdot\eta)^{2}+\left\langle D^{2}_{pp}f_{2}(p)\xi,\xi\right\rangle(p\cdot\eta)^{2}+\\ 4(D_{p}f_{2}(p)\cdot\xi)(\xi\cdot\eta)(p\cdot\eta)+f_{2}(p)(\xi\cdot\eta)^{2}+2\left\langle D^{2}_{pp}f_{3}(p)\xi,\xi\right\rangle(x\cdot\eta)(p\cdot\eta)+2(D_{p}f_{3}(p)\cdot\xi)(x\cdot\eta)(\xi\cdot\eta)+\\ 2(D_{p}f_{3}(p)\cdot\xi)(\xi\cdot\eta)(x\cdot\eta)+\left\langle D^{2}_{pp}f_{4}(p)\xi,\xi\right\rangle\left|\eta\right|^{2}. \tag{73}\] We now use the fact that \(\xi\cdot\eta=0\) and \(x\cdot\eta=0\): \[\sum_{i,j,k,l}D_{p_{i}p_{j}}\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}\xi_{i}\xi_{j}\eta_{k}\eta_{l}=\left\langle D^{2}_{pp}f_{2}(p)\xi,\xi\right\rangle(p\cdot\eta)^{2}+\left\langle D^{2}_{pp}f_{4}(p)\xi,\xi\right\rangle\left|\eta\right|^{2}. \tag{74}\] This shows that in order to satisfy condition As, we must check two conditions: (1) the function \(f_{4}(p)\) must be strictly negative definite (strictly concave as a function of \(\left\|p\right\|\)) and (2) \(f_{2}(p)+f_{4}(p)\) must be strictly negative definite (strictly concave as a function of \(\left\|p\right\|\)). In order to satisfy condition Aw, we simply weaken the strictly negative definite condition to negative definite. Replacing \(f_{2}(p)=F^{\prime\prime}(x\cdot y)/(F^{\prime}(x\cdot y))^{2}\) and \(f_{4}(p)=-(x\cdot y)F^{\prime}(x\cdot y)\) back in, we can now check the lens refractor problem I: \[f_{4}(p)=-(x\cdot y)F^{\prime}(x\cdot y)=-\frac{x\cdot y}{n-x\cdot y}, \tag{75}\] and \[f_{2}(p)=F^{\prime\prime}(x\cdot y)/(F^{\prime}(x\cdot y))^{2}=1. \tag{76}\] Recall from Equation (12) that \[x\cdot y=\frac{\sqrt{1+(1-n^{2})\left\|p\right\|^{2}}+n\left\|p\right\|^{2}}{1+\left\|p\right\|^{2}}. \tag{77}\] The result, for different \(n\), is shown in Figure 8. All are convex. This means that the lens refractor problem I does not satisfy condition Aw. For the case where the refractive index is less than that of the surrounding medium, we compute: \[f_{4}(p)=-(x\cdot y)F^{\prime}(x\cdot y)=-\frac{\kappa x\cdot y}{\kappa(x\cdot y)-1}, \tag{78}\] and \[f_{2}(p)=F^{\prime\prime}(x\cdot y)/(F^{\prime}(x\cdot y))^{2}=-1. \tag{79}\] Then, we use Equation (17) to get: \[x\cdot y=\frac{\sqrt{\kappa^{2}+(\kappa^{2}-1)\left\|p\right\|^{2}}+\left\|p\right\|^{2}}{\kappa(1+\left\|p\right\|^{2})}. \tag{80}\] The result, shown in Figure 9, indicates that \(f_{4}(p)\) is concave in \(\left\|p\right\|\) and thus satisfies condition Aw. In order to check As, we would need to compute the second derivative of the function in Equation (78) with respect to \(\left\|p\right\|\) using Equation (80). This is a straightforward exercise, and has been done in [4], where it was shown that condition As does not hold.

Figure 8. The function \(f_{4}(p)\) plotted as a function of \(\left\|p\right\|\). Black indicates \(n=1.333\), approximately the refractive index of water, red indicates \(n=1.52\), approximately the refractive index of glass, and blue indicates \(n=2.417\), approximately the refractive index of diamond.
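The convexity/concavity checks behind Figures 8 and 9 can be reproduced numerically. The following Python sketch evaluates \(f_{4}\) through Equations (75)–(80) and takes discrete second differences; the index value and the grids are illustrative assumptions.

```python
import numpy as np

def second_diff(f, t):
    # Discrete second derivative of samples f on the uniform grid t.
    h = t[1] - t[0]
    return (f[2:] - 2*f[1:-1] + f[:-2]) / h**2

n = kappa = 1.52

# Lens refractor I (Eq. 77): zeta = x.y as a function of ||p||; f4 = -zeta/(n - zeta).
p  = np.linspace(1e-3, 0.95/np.sqrt(n**2 - 1), 400)   # keep the square root real
z1 = (np.sqrt(1 + (1 - n**2)*p**2) + n*p**2) / (1 + p**2)
f4_I = -z1 / (n - z1)

# Lens refractor II (Eq. 80): f4 = -kappa*zeta/(kappa*zeta - 1).
q  = np.linspace(1e-3, 5.0, 400)
z2 = (np.sqrt(kappa**2 + (kappa**2 - 1)*q**2) + q**2) / (kappa*(1 + q**2))
f4_II = -kappa*z2 / (kappa*z2 - 1)

print("lens I : min f4'' =", second_diff(f4_I, p).min())    # > 0 => convex, Aw fails
print("lens II: max f4'' =", second_diff(f4_II, q).max())   # < 0 => concave, Aw holds
```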
For the squared geodesic cost \(c(x,y)=\frac{1}{2}d_{\mathbb{S}^{2}}(x,y)^{2}\), we have that \(F(x\cdot y)=\frac{1}{2}\arccos(x\cdot y)^{2}\). Therefore, we get: \[f_{4}(p)=-(x\cdot y)F^{\prime}(x\cdot y)=\frac{(x\cdot y)\arccos(x\cdot y)}{\sqrt{1-(x\cdot y)^{2}}}=\frac{\|p\|\cos\|p\|}{\sqrt{1-\cos^{2}\|p\|}}, \tag{81}\] which is concave in \(\|p\|\). Also, \[f_{2}(p)=F^{\prime\prime}(x\cdot y)/(F^{\prime}(x\cdot y))^{2}=\frac{1}{(\arccos(x\cdot y))^{2}}-\frac{x\cdot y}{\arccos(x\cdot y)\sqrt{1-(x\cdot y)^{2}}}, \tag{82}\] which is convex in \(\|p\|\), but the sum \(f_{2}(p)+f_{4}(p)\) is concave in \(\|p\|\) as expected. Checking the condition As is a straightforward exercise (take two derivatives), and the results from [9] show that the condition As is satisfied for the squared geodesic cost function.

Figure 9. The function \(f_{4}(p)\) plotted as a function of \(\|p\|\). Black indicates \(\kappa=1.333\), red indicates \(\kappa=1.52\), and blue indicates \(\kappa=2.417\).

#### 3.5.2. Computation of the cost-sectional curvature for \(c(x,y)=J\left(\frac{1}{2}\left\|x-y\right\|^{2}\right)\) on Euclidean space \(\mathbb{R}^{d}\)

This can be contrasted with the Euclidean case. Denote \(\zeta=\frac{1}{2}\left\|x-y\right\|^{2}\). We get: \[\frac{\partial}{\partial x_{i}}c(x,y)=J^{\prime}\left(\zeta\right)(x_{i}-y_{i}), \tag{83}\] and thus \[\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}c(x,y)=J^{\prime\prime}(\zeta)(x_{i}-y_{i})(x_{j}-y_{j}), \tag{84}\] and \[\frac{\partial^{2}}{\partial x_{i}^{2}}c(x,y)=J^{\prime\prime}(\zeta)(x_{i}-y_{i})^{2}+J^{\prime}(\zeta), \tag{85}\] and thus, \[\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}=J^{\prime}(\zeta)\delta_{kl}+J^{\prime\prime}(\zeta)\begin{pmatrix}(x_{1}-y_{1})^{2}&(x_{1}-y_{1})(x_{2}-y_{2})&(x_{1}-y_{1})(x_{3}-y_{3})\\ (x_{1}-y_{1})(x_{2}-y_{2})&(x_{2}-y_{2})^{2}&(x_{2}-y_{2})(x_{3}-y_{3})\\ (x_{1}-y_{1})(x_{3}-y_{3})&(x_{2}-y_{2})(x_{3}-y_{3})&(x_{3}-y_{3})^{2}\end{pmatrix}. \tag{86}\] Since \(x-y=-\frac{p}{J^{\prime}(\zeta)}\), we get: \[\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}=J^{\prime}(\zeta)\delta_{kl}-\frac{d}{d\zeta}\left(\frac{1}{J^{\prime}(\zeta)}\right)p_{k}p_{l}. \tag{87}\] Since we can relate \(p\) and \(\zeta\) via the equation \(\left\|p\right\|=J^{\prime}(\zeta)\sqrt{2\zeta}\), we may rewrite Equation (87) as: \[\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}=f_{1}(p)\delta_{kl}+f_{2}(p)p_{k}p_{l}, \tag{88}\] where \(f_{1}(p)=J^{\prime}(\zeta)\) and \(f_{2}(p)=\frac{J^{\prime\prime}(\zeta)}{J^{\prime}(\zeta)^{2}}\). Thus, we can compute: \[D_{p_{i},p_{j}}\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}=D_{p_{i}p_{j}}f_{1}(p)\delta_{kl}+D_{p_{i}p_{j}}f_{2}(p)p_{k}p_{l}+\\ D_{p_{i}}f_{2}(p)(\delta_{jk}p_{l}+\delta_{jl}p_{k})+D_{p_{j}}f_{2}(p)(\delta_{ik}p_{l}+\delta_{il}p_{k})+f_{2}(p)(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}). \tag{89}\] Thus, we compute \[\sum_{i,j,k,l}D_{p_{i},p_{j}}\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}\xi_{i}\xi_{j}\eta_{k}\eta_{l}=\left\langle D_{pp}f_{1}(p)\xi,\xi\right\rangle\left|\eta\right|^{2}+\left\langle D_{pp}f_{2}(p)\xi,\xi\right\rangle(p\cdot\eta)^{2}+\\ 4(D_{p}f_{2}(p)\cdot\xi)(p\cdot\eta)(\xi\cdot\eta)+2f_{2}(p)(\xi\cdot\eta)^{2}. \tag{90}\]
Since \(\xi\perp\eta\), we get: \[\sum_{i,j,k,l}D_{p_{i},p_{j}}\left(\nabla^{2}_{xx}c(x,y)|_{y=T}\right)_{kl}\xi_{i}\xi_{j}\eta_{k}\eta_{l}=\left\langle D_{pp}f_{1}(p)\xi,\xi\right\rangle\left|\eta\right|^{2}+\left\langle D_{pp}f_{2}(p)\xi,\xi\right\rangle(p\cdot\eta)^{2}. \tag{91}\] As in the case of the sphere in Section 3.5.1, in order for condition As to hold, it is necessary for two properties to hold: (1) that \(f_{1}\) is strictly concave in \(\left\|p\right\|\) and (2) that \(f_{1}+f_{2}\) is strictly concave in \(\left\|p\right\|\). For property Aw to hold, we weaken strict concavity to simply concavity. We have \[f_{1}(p)=J^{\prime}(\zeta)=\frac{\left\|p\right\|}{\beta(\left\|p\right\|)}, \tag{92}\] and \[f_{2}(p)=\frac{J^{\prime\prime}(\zeta)}{J^{\prime}(\zeta)^{2}}=\frac{1}{\left\|p\right\|}\left(\frac{1}{\left\|p\right\|\beta^{\prime}(\left\|p\right\|)}-\frac{1}{\beta(\left\|p\right\|)}\right). \tag{93}\] As an example, for the squared cost, \(J(\zeta)=\zeta=\frac{1}{2}\left\|p\right\|^{2}\), we have \(J^{\prime}(\zeta)=1\), which is concave in \(\left\|p\right\|\). Also, \(J^{\prime\prime}(\zeta)=0\), so we see that we satisfy the cost-sectional curvature condition Aw, but As does not hold.

**Remark 15**.: _Some explicit computations for more general Riemannian manifolds have been done in, for example, [3], where there is an interesting example in which, for the squared geodesic cost function \(c(x,y)=\frac{1}{2}d_{M}(x,y)^{2}\), the cost-sectional curvature is not positive for an ellipsoid of revolution._

## 4. Regularity and Solvability

### Ma-Trudinger-Wang Conditions for Defective Cost Functions

It was proved in [4] that weak solutions exist to the lens refractor problems I & II, provided that, most importantly, the conditions in Equations (16) and (19) were met, respectively. In [7], it was proved that smooth solutions exist for the lens refractor problem II, with an argument using supporting ellipsoids and hyperboloids. Here, instead, we are inspired to take the route taken in [9] and prove a regularity result for the lens refractor problem II, but, more generally, for any defective cost function that satisfies the MTW conditions. We begin by stating the MTW conditions, formulated originally in [10], but we focus on the Riemannian generalization as stated in [9]. Given a compact domain \(D\subset\mathbb{S}^{2}\times\mathbb{S}^{2}\), denote by \(\pi_{1}:\mathbb{S}^{2}\times\mathbb{S}^{2}\mapsto\mathbb{S}^{2}\) the projection \(\pi_{1}(x,y)=x\) and its inverse \(\pi_{1}^{-1}(x)=\{x\}\times\mathbb{S}^{2}\). For any \(x\in\pi_{1}(D)\), we denote by \(D_{x}\) the set \(D\cap\pi_{1}^{-1}(x)\). Then, we introduce the following conditions:

**Hypothesis 16**.:

1. (**A0**) _The cost function_ \(c\) _belongs to_ \(C^{4}(D)\)_._
2. (**A1**) _For all_ \(x\in\pi_{1}(D)\)_, the map_ \(y\to-\nabla_{x}c(x,y)\) _is injective on_ \(D_{x}\)_._
3. (**A2**) _The cost function_ \(c\) _satisfies_ \(\det D_{xy}^{2}c\neq 0\) _for all_ \((x,y)\) _in_ \(D\)_._
4. (**Aw**) _The cost-sectional curvature is non-negative on_ \(D\)_. That is, for all_ \((x,y)\in D\)_, for all_ \(\xi,\eta\in T_{x}\mathbb{S}^{2}\)_,_ \(\xi\perp\eta\)_,_ \[\mathfrak{G}_{c}(x,y)(\xi,\eta)\geq 0. \tag{94}\]
5. (**As**) _The cost-sectional curvature is uniformly positive on_ \(D\)_.
_That is, for all_ \((x,y)\in D\)_, for all_ \(\xi,\eta\in T_{x}\mathbb{S}^{2}\)_,_ \(\xi\perp\eta\)_,_ \[\mathfrak{G}_{c}(x,y)(\xi,\eta)\geq C_{0}\left|\xi\right|^{2}\left|\eta\right|^{2}. \tag{95}\]

Denoting \(z^{*}=\min\{z\in(0,\pi):G^{\prime\prime}(z)=0\text{ or }G^{\prime}(z)=\infty\}\), we now choose the subdomain \(D\) as follows. For each \(x\in\mathbb{S}^{2}\), we denote the corresponding geodesic ball \(B_{x}(z^{*})\). Then, for \(0<\gamma<z^{*}\), let \[D_{\gamma}=\cup_{x}\left\{x\right\}\times B_{x}(\gamma). \tag{96}\] Now that we have defined \(D_{\gamma}\), the computations of Section 3 allow us to verify the MTW conditions. Assuming that \(F\) is \(C^{4}[\cos\gamma,1]\) and \(G\) is \(C^{4}[0,\gamma]\), \(c\) satisfies **A0** on \(D_{\gamma}\). As long as \(c\) satisfies the hypotheses in Theorem 3, then **A1** is satisfied on \(D_{\gamma}\). By the computation in Equation (57), if we have \(F^{\prime}(\zeta)\neq 0\) and \(G^{\prime\prime}(z)\neq 0,\infty\), then \(c\) satisfies **A2** on \(D_{\gamma}\). As shown in Section 3.5, if \(f_{4}(p)\) is strictly concave and \(f_{4}(p)+f_{2}(p)\) is strictly concave for \(\|p\|<p^{*}\), then **Aw** is satisfied for \(c\) on \(D_{\gamma}\). The condition **As** can then be checked by taking two derivatives of \(f_{2}(p)\) and \(f_{4}(p)\) with respect to \(\|p\|\).

**Remark 17**.: _Based on Remarks 4, 8, 11, and the work in Section 3.5.2, the MTW conditions can be checked for a cost function on Euclidean space in an analogous subdomain \(D_{\gamma}\subset\mathbb{R}^{d}\times\mathbb{R}^{d}\)._

#### 4.1.1. Example of Computations for Cost Functions in Theorem 4.1 of Loeper [9]: Power Costs on the Unit Sphere \(c(x,y)=\frac{1}{s}d_{\mathbb{S}^{2}}(x,y)^{s}\)

As remarked in Section 1, for the researcher working on applications, an important result of the work in this paper is to allow for the MTW conditions to be checked easily for a wide class of cost functions, not just defective cost functions. Note that if \(z^{*}\geq\pi\), then our cost functions naturally fit into Theorem 4.1 of Loeper [9], which covers the cost functions that can be checked with the preexisting regularity theory of Loeper. The benefit is that in Section 3 we have identified simple conditions on the cost function which allow the hypotheses of Theorem 4.1 of Loeper (including the MTW conditions) to be verified. What this means is that our computations in this manuscript allow researchers in applied fields to verify the MTW conditions much more easily for a wide class of functions. As an illustration of this, we verify the MTW conditions for the power cost functions on the sphere: \(c(x,y)=\frac{1}{s}d_{\mathbb{S}^{2}}(x,y)^{s}=\frac{1}{s}\arccos(x\cdot y)^{s}\). First, we check that the power cost functions satisfy some preliminary conditions from Theorem 4.1 in Loeper [9]; that is, \(G(z)\) is smooth and strictly increasing with \(G^{\prime}(0)=0\). We immediately verify that \(G(z)=\frac{1}{s}z^{s}\) is smooth and strictly increasing for \(s>0\) and \(G^{\prime}(z)=z^{s-1}\), so if \(s>1\) we have that \(G^{\prime}(0)=0\). From now on assume that \(s>1\). Now, we verify the MTW conditions on \(D=\mathbb{S}^{2}\times\mathbb{S}^{2}\setminus\)antidiag using the formulas from this manuscript. Clearly, **A0** is satisfied on \(D\). In order for **A1** to be satisfied, we check the hypotheses of Theorem 3. We need \(F^{\prime}(\zeta)\neq 0\) for \(\zeta\in(-1,1)\) and \(G^{\prime\prime}(z)\) strictly positive or negative on \((0,\pi)\).
Now, \(F(\zeta)=\frac{1}{s}\arccos(\zeta)^{s}\). Therefore, \(F^{\prime}(\zeta)=-\arccos(\zeta)^{s-1}/\sqrt{1-\zeta^{2}}\). Thus, we have \(F^{\prime}(\zeta)\neq 0\) for \(\zeta<1\) and \(s>1\). Since \(G^{\prime\prime}(z)=(s-1)z^{s-2}\), we have that \(G^{\prime\prime}(z)\) is strictly positive on \((0,\pi)\) for \(s>1\). Therefore, condition **A1** is satisfied for \(s>1\). The condition **A2** is, by Equation (57), satisfied for \(s>1\). In order to verify condition **As**, we compute the term \(R_{1}(\|p\|)\) from Equation (21). This is simple, because Equation (27) shows that \(\|p\|=G^{\prime}(z)=z^{s-1}\), and therefore \(z=\|p\|^{1/(s-1)}\). Thus, we get \(R_{1}(\|p\|)=\cos\left(\|p\|^{\frac{1}{s-1}}\right)\). We must check that \(f_{4}(p)\) and \(f_{2}(p)+f_{4}(p)\) are strictly concave as functions of \(\|p\|\), as shown in Section 3.5. We have \[f_{4}(p)=-\zeta F^{\prime}(\zeta)=\frac{\zeta\arccos(\zeta)^{s-1}}{\sqrt{1-\zeta^{2}}}, \tag{97}\] where \(\zeta=x\cdot y\). Thus, \[f_{4}(p)=\frac{\|p\|\cos\left(\|p\|^{\frac{1}{s-1}}\right)}{\sqrt{1-\cos^{2}\left(\|p\|^{\frac{1}{s-1}}\right)}},\quad\|p\|\in[0,\pi^{s-1}], \tag{98}\] and \[f_{2}(p)=F^{\prime\prime}(\zeta)/(F^{\prime}(\zeta))^{2}=\|p\|^{-\frac{s}{s-1}}\left((s-1)-\|p\|^{\frac{1}{s-1}}\,\frac{\cos\left(\|p\|^{\frac{1}{s-1}}\right)}{\sqrt{1-\cos^{2}\left(\|p\|^{\frac{1}{s-1}}\right)}}\right),\quad\|p\|\in[0,\pi^{s-1}] \tag{99}\] \[=(s-1)\arccos(\zeta)^{-s}-\arccos(\zeta)^{2-2s}f_{4}(p) \tag{100}\] \[=\frac{s-1}{\|p\|^{\frac{s}{s-1}}}-\frac{1}{\|p\|^{2}}f_{4}(p), \tag{101}\] and therefore, \[f_{2}(p)+f_{4}(p)=\frac{s-1}{\|p\|^{\frac{s}{s-1}}}+\frac{\|p\|^{2}-1}{\|p\|^{2}}f_{4}(p). \tag{102}\] This can be used to confirm, for example, that condition **As** does not hold for \(s>2\), since for small values of \(\|p\|\), we have: \[f_{2}(p)+f_{4}(p)=(s-2)\,\|p\|^{\frac{-s}{s-1}}+o\left(\|p\|^{\frac{-s}{s-1}}\right), \tag{103}\] so the cancellation of the highest order term only happens when \(s=2\). The fact that the power costs for \(s\neq 2\) do not satisfy the **As** condition agrees with the result for Euclidean power costs in [13].
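The asymptotic cancellation in Equation (103) can also be checked numerically. The following Python sketch evaluates \(f_{2}+f_{4}\) from Equations (98) and (102) for a sample exponent \(s>2\) (an illustrative choice) and compares it against the leading term \((s-2)\|p\|^{-s/(s-1)}\).

```python
import numpy as np

s = 3.0                                    # any s > 1 with s != 2; s = 3 for illustration
p = np.logspace(-4, -1, 7)                 # small ||p||

t   = p**(1.0/(s - 1.0))                   # t = beta(||p||) = ||p||^{1/(s-1)}
f4  = p*np.cos(t)/np.sin(t)                # Eq. (98)
f24 = (s - 1.0)/p**(s/(s - 1.0)) + (p**2 - 1.0)/p**2 * f4   # Eq. (102)

lead = (s - 2.0)*p**(-s/(s - 1.0))         # leading term in Eq. (103)
print(f24/lead)                            # ratio -> 1 as ||p|| -> 0
```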
### Conditions on Source and Target Mass Density Functions that Constrain the Mapping

Now that we have verified the MTW conditions on a subdomain \(D_{\gamma}\), we need to make sure that our source and target densities \(f\) and \(g\) are such that they do not require mass to move too far, as was the case in the example in Section 2.2. The work in [9] showed that certain technical conditions on the source and target mass distributions restricted mass from moving near the cut locus on the sphere. Therefore, in that paper, the MTW conditions were shown to be satisfied on the subdomain \(D\subset\mathbb{S}^{2}\times\mathbb{S}^{2}\), where the domain \(D\) was chosen to be \(D=\mathbb{S}^{2}\times\mathbb{S}^{2}\setminus\) antidiag, where the set antidiag is defined as \((x,-x)\in\mathbb{S}^{2}\times\mathbb{S}^{2}\). In this section, we show that stricter conditions can be found on the source and target mass distributions that restrict the Optimal Transport mapping to satisfy \(d_{\mathbb{S}^{2}}(x,T)\leq\gamma\) for any desired \(\gamma<\pi\), where the Optimal Transport mapping \(T\) arises from certain defective cost functions, such as the cost function arising in the lens refractor problem. This then allows us to use the regularity framework in [9] to build a regularity theory for a large class of defective cost functions.

Define \(G(\phi)(x)=\exp_{x}(\nabla\phi(x))\), which is the Optimal Transport mapping arising from the squared geodesic cost function. Also define \(T(\phi)(x)=\exp_{x}\left(\frac{\nabla\phi(x)}{\|\nabla\phi(x)\|}\beta\left(\|\nabla\phi(x)\|\right)\right)\), where \(\beta\) is the function in Theorem 3 arising from a defective cost function. As before, let \(z^{*}\) denote the smallest value for which \(G^{\prime\prime}(z)=0\) or \(G^{\prime}(z)=\infty\). First, we begin with a result due to Delanoe and Loeper [2].

**Theorem 18** (Delanoe and Loeper).: _Let \(\phi:\mathbb{S}^{2}\to\mathbb{R}\) be a \(C^{3}\) function such that \(G(\phi)\) is a diffeomorphism and let \(\rho:\mathbb{S}^{2}\to\mathbb{R}\) be the positive \(C^{1}\) function defined by:_ \[G(\phi)_{\#}\text{dVol}=\rho\text{dVol}, \tag{104}\] _where dVol is the standard \(2\)-form on the unit sphere \(\mathbb{S}^{2}\) induced by the standard Euclidean metric on \(\mathbb{R}^{3}\). Then,_ \[\max_{\mathbb{S}^{2}}|d\phi|\leq C\max_{\mathbb{S}^{2}}\left|d[\rho^{-1}]\right|. \tag{105}\]

We continue by defining a \(c\)-convex function.

**Definition 19**.: _A function \(\phi:\mathbb{S}^{2}\to\mathbb{R}\) is \(c\)-convex if at each point \(x\in\mathbb{S}^{2}\) there exists a point \(y\in\mathbb{S}^{2}\) and a value \(\phi^{c}(y)\) such that:_ \[-\phi^{c}(y)-c(x,y)=\phi(x), \tag{106}\] \[-\phi^{c}(y)-c(x^{\prime},y)\leq\phi(x^{\prime}),\quad\forall x^{\prime}\in\mathbb{S}^{2}. \tag{107}\]

**Theorem 20**.: _Let \(\phi:\mathbb{S}^{2}\to\mathbb{R}\) be a \(C^{3}\), \(c\)-convex function, for the defective cost function \(c\), such that \(G(\phi)\) and \(T(\phi)\) are diffeomorphisms and the Jacobian \(\left|\nabla T\circ G^{-1}\right|\) has a bounded derivative, and let \(\tilde{\rho}:\mathbb{S}^{2}\to\mathbb{R}\) be the positive \(C^{1}\) function defined by:_ \[T(\phi)_{\#}\text{dVol}=\tilde{\rho}\text{dVol}. \tag{108}\] _Then, there exists a constant \(\tilde{C}>0\) such that:_ \[\max_{\mathbb{S}^{2}}\left|d\phi\right|\leq\tilde{C}\left(\max\tilde{\rho}+\max\left|d\tilde{\rho}\right|\right). \tag{109}\]

Proof.: Note: if \(\rho=\tilde{\rho}=1\), then \(\phi\) is \(c\)-convex, \(G(\phi)\) and \(T(\phi)\) are diffeomorphisms and \(\left|\nabla T\circ G^{-1}\right|\) has a bounded derivative. From Theorem 18, the \(c\)-convex function is \(C^{3}\) and \(G(\phi)\) is a diffeomorphism. Therefore, \(\phi\) satisfies the bound: \[\max_{\mathbb{S}^{2}}\left|d\phi\right|\leq C\max_{\mathbb{S}^{2}}\left|d[\rho^{-1}]\right|, \tag{110}\] where \(\rho\) is defined by: \[G(\phi)_{\#}\text{dVol}=\rho\text{dVol}. \tag{111}\] Since \(T(\phi)\) is a diffeomorphism, it satisfies \(0<|\nabla T(\phi)(x)|<\infty\). The function \(\tilde{\rho}\) then satisfies \(\tilde{\rho}=(T\circ G^{-1})_{\#}\rho\) and likewise \(\rho=(G\circ T^{-1})_{\#}\tilde{\rho}\). We denote \(S=T\circ G^{-1}\), so that \(S(x)=\exp_{x}\left(\frac{\nabla\phi(x)}{\|\nabla\phi(x)\|}\left(\beta\left(\|\nabla\phi(x)\|\right)-\|\nabla\phi(x)\|\right)\right)\). We have then, that: \[\tilde{\rho}=\rho(S^{-1}(x))\left|\nabla S^{-1}(x)\right|. \tag{112}\] Since \(S\) is a diffeomorphism, being a composition of diffeomorphisms, we have \(c\leq\left|\nabla S^{-1}(x)\right|\leq C\), and since \(G\) is a diffeomorphism, we have \(0<m\leq\rho\leq M\). Therefore, \[\tilde{\rho}\geq cm. \tag{113}\] Then, since \[\rho=\tilde{\rho}(S(x))\left|\nabla S(x)\right|, \tag{114}\] we get: \[\rho\geq C\min\tilde{\rho}\geq cCm, \tag{115}\] and therefore, \[\frac{1}{\rho}\leq\frac{1}{cCm}. \tag{116}\]
Now, since \(|\nabla S|\) has bounded derivatives, there exists a \(\mu>0\) such that: \[|d\rho|=\left|d\left[S^{-1}_{\#}\tilde{\rho}\right]\right|=\left|d\left[\tilde{\rho}(S(x))\left|\nabla S(x)\right|\right]\right| \tag{117}\] \[=\left|d\tilde{\rho}(S(x))\nabla S(x)+\tilde{\rho}(S(x))\,d\left|\nabla S(x)\right|\right|\leq\left|C\,d\tilde{\rho}(S(x))+\mu\,\tilde{\rho}(S(x))\right|. \tag{118}\] Therefore, we get: \[|d\phi|\leq C\left|d[\rho^{-1}]\right|\leq\tilde{C}\left(\max\tilde{\rho}+\max|d\tilde{\rho}|\right). \tag{119}\]

This result shows that, provided we control the ratio \(g/f\) and the derivative of \(g/f\), the derivative of the potential function \(\phi\) can be controlled as well. In turn, since for a defective cost function \(\beta(\|p\|)<z^{*}\) as long as \(\|p\|<p^{*}\), this means that the Optimal Transport mapping will not transport mass a distance greater than the \(z^{*}\) from Definition 6. At this point, we can adapt the results of Loeper in [9] to the case of defective cost functions to get the following result:

**Theorem 21**.: _Let \(\mu\), \(\nu\) be two \(C^{\infty}\) probability measures that are strictly bounded away from zero and \(u\) a \(c\)-convex potential function, where \(c\) is a defective cost function, such that, defining \(T(u)(x)=\text{exp}_{x}\left(\frac{\nabla u(x)}{\|\nabla u(x)\|}\beta(\|\nabla u(x)\|)\right)\), we have \(T(u)_{\#}\mu=\nu\), and also that we have the bound \(|du|<p^{*}\), where \(p^{*}\) is defined in Definition 6. Additionally, let \(c\) satisfy assumption_ **As** _on \(D_{\gamma}\). Then \(u\in C^{\infty}(\mathbb{S}^{2})\)._

Proof.: The proof follows the line of reasoning in Section 5 of Loeper [9]. First, we have shown by the estimate in Theorem 20 that there exist potential functions whose derivatives can be bounded by \(g/f\) and the derivative of \(g/f\). Assuming that \(\mathbf{As}\) is satisfied, we use a \(C^{2}\) estimate established in [10] on \(u\). Then, we use the method of continuity to construct smooth solutions for any smooth positive densities \(\mu\), \(\nu\) satisfying the hypotheses of Theorem 20.

**Remark 22**.: _An analogous \(C^{1,\alpha}\) result like in Loeper [9] is difficult to obtain because \(g/f\) and \(d(g/f)\) must be bounded in order to satisfy Theorem 20._

**Remark 23**.: _One interesting consequence of Theorem 21 is that we can use it to devise a way to solve examples such as Section 2.2 by using a series of \(N\) lenses \(\left\{L_{i}\right\}_{i=1,\ldots,N}\), whereas it is impossible to solve using a single lens._

## 5. Conclusion

In examining the far-field lens refractor problem and its solvability condition, we found that we could answer many theoretical and computational questions by formulating the cost functions as particular cases of a much more general class of cost functions which we have chosen to call defective cost functions on the unit sphere. Using these definitions, we examined when the MTW conditions held for such cost functions and derived formulas for verifying the cost-sectional curvature for defective cost functions. We discussed how to extend these ideas to Euclidean space and general Riemannian manifolds. We examined a sufficient condition on the source and target densities \(f\) and \(g\), respectively, that ensured the mapping \(T\) satisfied the solvability bound, which was achieved by bounding the ratio \(g/f\) and the derivative of \(g/f\) in the sup-norm.
Using this, we could employ the regularity framework of Loeper in order to establish \(C^{\infty}\) smoothness for the potential function for defective cost functions satisfying the strictly positive cost-sectional curvature condition.

**Acknowledgements**: I would like to especially thank Brittany Hamfeldt for introducing me to the lens refractor problem and working alongside me on some of the initial exploratory computations. I would also like to thank Rene Cabrera for listening to and discussing the ideas as they developed.
2310.12513
Real space iterative reconstruction for vector tomography (RESIRE-V)
Tomography has had an important impact on the physical, biological, and medical sciences. To date, most tomographic applications have been focused on 3D scalar reconstructions. However, in some crucial applications, vector tomography is required to reconstruct 3D vector fields such as the electric and magnetic fields. Over the years, several vector tomography methods have been developed. Here, we present the mathematical foundation and algorithmic implementation of REal Space Iterative REconstruction for Vector tomography, termed RESIRE-V. RESIRE-V uses multiple tilt series of projections and iterates between the projections and a 3D reconstruction. Each iteration consists of a forward step using the Radon transform and a backward step using its transpose, then updates the object via gradient descent. Incorporating with a 3D support constraint, the algorithm iteratively minimizes an error metric, defined as the difference between the measured and calculated projections. The algorithm can also be used to refine the tilt angles and further improve the 3D reconstruction. To validate RESIRE-V, we first apply it to a simulated data set of the 3D magnetization vector field, consisting of two orthogonal tilt series, each with a missing wedge. Our quantitative analysis shows that the three components of the reconstructed magnetization vector field agree well with the ground-truth counterparts. We then use RESIRE-V to reconstruct the 3D magnetization vector field of a ferromagnetic meta-lattice consisting of three tilt series. Our 3D vector reconstruction reveals the existence of topological magnetic defects with positive and negative charges. We expect that RESIRE-V can be incorporated into different imaging modalities as a general vector tomography method.
Minh Pham, Xingyuan Lu, Arjun Rana, Stanley Osher, Jianwei Miao
2023-10-19T06:24:36Z
http://arxiv.org/abs/2310.12513v1
# Real space iterative reconstruction for vector tomography (RESIRE-V)

###### Abstract

Tomography has had an important impact on the physical, biological, and medical sciences. To date, most tomographic applications have been focused on 3D scalar reconstructions. However, in some crucial applications, vector tomography is required to reconstruct 3D vector fields such as the electric and magnetic fields. Over the years, several vector tomography methods have been developed. Here, we present the mathematical foundation and algorithmic implementation of REal Space Iterative REconstruction for Vector tomography, termed RESIRE-V. RESIRE-V uses multiple tilt series of projections and iterates between the projections and a 3D reconstruction. Each iteration consists of a forward step using the Radon transform and a backward step using its transpose, then updates the object via gradient descent. Incorporating a 3D support constraint, the algorithm iteratively minimizes an error metric, defined as the difference between the measured and calculated projections. The algorithm can also be used to refine the tilt angles and further improve the 3D reconstruction. To validate RESIRE-V, we first apply it to a simulated data set of the 3D magnetization vector field, consisting of two orthogonal tilt series, each with a missing wedge. Our quantitative analysis shows that the three components of the reconstructed magnetization vector field agree well with the ground-truth counterparts. We then use RESIRE-V to reconstruct the 3D magnetization vector field of a ferromagnetic meta-lattice consisting of three tilt series. Our 3D vector reconstruction reveals the existence of topological magnetic defects with positive and negative charges. We expect that RESIRE-V can be incorporated into different imaging modalities as a general vector tomography method. To make the algorithm accessible to a broad user community, we have made our RESIRE-V MATLAB source codes and the data freely available at [https://github.com/minhpham0309/RESIRE-V](https://github.com/minhpham0309/RESIRE-V).

## Introduction

Tomography has had a radical impact on diverse fields ranging from medical diagnosis [1] to 3D structure determination of proteins [2], crystal defects [3, 4] and amorphous materials [5, 6], at the atomic resolution. Despite its very diverse applications, the central problem in tomography remains the same, that is, how to accurately reconstruct the 3D structure of an object from a number of projections with noise and incomplete data. The conventional reconstruction methods include filtered back projection (FBP) [1, 2], algebraic reconstruction technique (ART) [7], simultaneous algebraic reconstruction technique (SART) [8], and simultaneous iterative reconstruction technique (SIRT) [9, 10], which remain popular in tomographic applications. Recently, more advanced iterative algorithms have been developed for tomography, including equally sloped tomography (EST) [11, 12], nonuniform fast Fourier transform (NUFFT) [13], generalized Fourier iterative reconstruction (GENFIRE) [14, 15] and real space iterative reconstruction (RESIRE) [5, 16]. In particular, RESIRE, which uses the Radon transform as the forward projection and the Radon transpose as the back projection, is not only superior to other existing tomographic algorithms [16], but also has been used to determine the 3D atomic structure of amorphous materials [5, 6] and the chemical order and disorder in medium/high entropy alloys [17].
Despite all these applications, these methods only deal with scalar tomography, where each voxel in a 3D reconstruction has a magnitude but no direction. However, in some important applications, vector tomography is required, where each voxel has a magnitude and a direction, such as the electric and magnetic field. Over the years, several vector tomography reconstruction methods have been developed, including vector electron tomography with Lorentz transmission electron microscopy and holography [18, 19, 20, 21, 22, 23, 24, 25], and soft and hard x-ray vector tomography [26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. In particular, the combination of ptychography, a powerful coherent diffractive imaging method [36, 37], and vector tomography can in principle achieve the highest spatial resolution, which is only limited by the wavelength and the diffraction signal [31, 27, 34]. Very recently, we have merged soft-x-ray magnetic circular dichroism and ptychography with vector tomography to image the 3D topological magnetic monopoles and their interaction in a ferromagnetic meta-lattice with a spatial resolution of 10 nm [34]. Using RESIRE [16], our vector tomography algorithm can accurately reconstruct the 3D magnetization vector field from multiple tilt series, each with a limited number of experimental projections. Furthermore, due to experimental error, the measured tilt angles may not always coincide with the true orientations of the projections. To tackle this problem, we implement an iterative angular refinement method to reduce the tilt angle error [16], enabling us to obtain more accurate vector tomographic reconstructions. Here, we provide the mathematical foundation and implementation of our vector tomography algorithm, termed RESIRE-V. Both numerical simulations and experimental data have been used to demonstrate the effectiveness of this vector tomography algorithm.

## Methods

We begin with some setup and conventions. First, we employ Euler angles to describe the orientation of a rigid body with respect to a fixed coordinate system. For example, the orientation representation ZYX used intensively in our research fits well with vector tomography experiments: samples are rotated about the Z-axis (in-plane rotation) before a tilt series (rotation about the Y-axis) is acquired. The last rotation about the X-axis is helpful in angular refinement. We use the notation \(Z_{\phi}Y_{\theta}X_{\psi}\) to represent Euler angle rotations: the first rotation is about the Z-axis by an angle \(\phi\), followed by a rotation about the Y-axis by an angle \(\theta\), and ends with a rotation about the X-axis by an angle \(\psi\). The corresponding rotation matrix \(R_{Z_{\phi}Y_{\theta}X_{\psi}}=R_{\phi}^{Z}R_{\theta}^{Y}R_{\psi}^{X}\) is defined to be the product of three single-axis rotation matrices about the Z, Y, and X axes by angles \(\phi\), \(\theta\) and \(\psi\), respectively: \[R_{\phi}^{Z}:=\begin{bmatrix}\cos\phi&-\sin\phi&0\\ \sin\phi&\cos\phi&0\\ 0&0&1\end{bmatrix},\quad R_{\theta}^{Y}:=\begin{bmatrix}\cos\theta&0&\sin\theta\\ 0&1&0\\ -\sin\theta&0&\cos\theta\end{bmatrix},\quad R_{\psi}^{X}:=\begin{bmatrix}1&0&0\\ 0&\cos\psi&-\sin\psi\\ 0&\sin\psi&\cos\psi\end{bmatrix}\] For short notation, we write \(R_{\theta}\) instead of \(R_{Z_{\phi}Y_{\theta}X_{\psi}}\), where \(\theta=\{\phi,\theta,\psi\}\) (when no single axis is specified). In perfect experimental conditions where there is no X-axis rotation, \(\psi\) is zero.
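A minimal sketch of this convention in NumPy (the function names are ours, not from the RESIRE-V codes) builds the \(Z_{\phi}Y_{\theta}X_{\psi}\) rotation and checks that its adjoint equals its transpose and inverse:

```python
import numpy as np

def Rz(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def Ry(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])

def Rx(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[1., 0., 0.], [0., c, -s], [0., s, c]])

def R_zyx(phi, theta, psi):
    # ZYX Euler rotation: about Z by phi, then Y by theta, then X by psi.
    return Rz(phi) @ Ry(theta) @ Rx(psi)

# With psi = 0 (no X-axis rotation), the rotation reduces to Z_phi Y_theta.
R = R_zyx(np.deg2rad(30), np.deg2rad(45), 0.0)
assert np.allclose(R @ R.T, np.eye(3))   # orthogonality: R^dagger = R^T = R^{-1}
```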
Otherwise, \(\psi\) can be non-zero, and we use angular refinement to determine \(\psi\). This completes our conventions, and we move to the formulation.

### Formulation

For an x-ray beam propagating along the z direction (standard unit vector \(\vec{e}_{z}=[0,\ 0,\ 1]^{T}\)), only the z component of the magnetization contributes to the 2D signals. The contribution takes either positive or negative values depending on the left or right circular polarization. In the case of rotation, we need the inner product \(\left\langle R_{\theta}\,\mathbf{M}(R_{\theta}^{\dagger}\vec{r}),\ \vec{e}_{z}\right\rangle\) to account for the contribution. Here, \(\mathbf{M}=[M_{x},\ M_{y},\ M_{z}]\) is the magnetization vector field, which is a function of the Cartesian coordinate vector \(\vec{r}=(x,\ y,\ z)\), and \(R_{\theta}^{\dagger}\) is the adjoint, and also inverse and transpose, of \(R_{\theta}\). Adding the non-magnetic term \(O\) and taking the integral along the z axis (projection), we obtain the 2D signal: \[\int_{z}c\left\langle R_{\theta}\ \mathbf{M}(R_{\theta}^{\dagger}\vec{r}),\vec{e}_{z}\right\rangle+O(R_{\theta}^{\dagger}\vec{r})\,dz=P_{\theta}, \tag{1}\] where \(c\) is a constant that relates the XMCD signal to the magnetization and the pixel size. We can temporarily let \(\mathbf{M}\) absorb \(c\) in the derivation for simplicity and then rescale \(\mathbf{M}\) after the reconstruction. We then write this equation using the change of variable \(\vec{r}\gets R_{\theta}^{\dagger}\vec{r}\): \[\int_{L_{\theta}}\left\langle\mathbf{M}(x,y,z),R_{\theta}^{\dagger}\vec{e}_{z}\right\rangle+O(x,y,z)\,dz=P_{\theta}(x,y) \tag{2}\] Rotating the sample by some Euler angles \(\theta\) and taking the integral along the z-axis is equivalent to taking the line integral along the opposite rotation direction (passive rotation). To solve this equation numerically, we need to discretize the equation. Replacing the line integral with a projection operator and expanding the inner product, we represent the equation algebraically: \[\Pi_{\theta}\left(\alpha_{\theta}M_{x}+\beta_{\theta}M_{y}+\gamma_{\theta}\,M_{z}+O\right)=P_{\theta}, \tag{3}\] where \(\Pi_{\theta}\) is the projection operator, and \(P_{\theta}\) is the corresponding projection with respect to Euler angles \(\theta=(\phi,\ \theta,\ \psi)\). In this notation, we drop the spatial variables \((x,y,z)\) for simplicity. Let \(\vec{n}_{\theta}=[\alpha_{\theta},\ \beta_{\theta},\ \gamma_{\theta}]\) be the last column of \(R_{\theta}^{\dagger}\). Specifically, if we use the orientation representation \(Z_{\phi}Y_{\theta}\), then the normal vector is given by \(\vec{n}_{\theta}=[\alpha_{\theta},\ \beta_{\theta},\ \gamma_{\theta}]=[\sin\theta\,\cos\phi,\ \sin\theta\,\sin\phi,\ \cos\theta]\). One can verify that the second magnetization component does not contribute to the measured projections when \(\phi=0\). This implies that other types of rotation are required for successful vector tomography reconstructions. Since \(\Pi_{\theta}\) is linear, we can apply the commutative property and distribute the linear operator to each magnetization component: \[\alpha_{\theta}\,\Pi_{\theta}\left(M_{x}\right)+\beta_{\theta}\,\Pi_{\theta}\left(M_{y}\right)+\gamma_{\theta}\,\Pi_{\theta}\left(M_{z}\right)+\Pi_{\theta}\left(O\right)=P_{\theta} \tag{4}\] Eqn. 4 shows that the three 3D magnetization components and the non-magnetic structure are coupled via a linear constraint. So far, we have formulated vector tomography in the noise-free case.
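The structure of Eqn. 4 can be illustrated with a toy stand-in projector. The sketch below uses a plain sum along the z axis in place of the rotated Radon transform (an assumption made purely for brevity) and verifies that \(M_{y}\) drops out when \(\phi=0\):

```python
import numpy as np

def n_vec(phi, theta):
    # Normal vector for the Z_phi Y_theta representation (text below Eq. 3).
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])

def forward(Pi, Mx, My, Mz, O, phi, theta):
    # Eq. (4): P = alpha*Pi(Mx) + beta*Pi(My) + gamma*Pi(Mz) + Pi(O).
    a, b, g = n_vec(phi, theta)
    return a*Pi(Mx) + b*Pi(My) + g*Pi(Mz) + Pi(O)

# Stand-in projector for an untilted view: sum over the z axis of the volume.
Pi = lambda V: V.sum(axis=2)

rng = np.random.default_rng(0)
Mx, My, Mz, O = (rng.standard_normal((8, 8, 8)) for _ in range(4))
# With phi = 0, beta = 0, so My makes no contribution to the projection.
P0 = forward(Pi, Mx, My, Mz, O, 0.0, np.deg2rad(30))
P1 = forward(Pi, Mx, 2*My, Mz, O, 0.0, np.deg2rad(30))
assert np.allclose(P0, P1)
```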
In the presence of noise, and assuming that the noise is Gaussian with mean 0 and variance \(\sigma^{2}\), we add the noise term \(\mathcal{N}(0,\sigma^{2})\) to the left-hand side of the equation: \[\alpha_{\theta}\,\Pi_{\theta}\left(M_{x}\right)+\beta_{\theta}\,\Pi_{\theta}\left(M_{y}\right)+\gamma_{\theta}\,\Pi_{\theta}\left(M_{z}\right)+\Pi_{\theta}\left(O\right)+\mathcal{N}(0,\sigma^{2})=P_{\theta} \tag{5}\] We denote by \(P_{\theta}^{+}\) and \(P_{\theta}^{-}\) the random variables, with the same variance \(\sigma^{2}\), that represent the left and right polarized projections for Euler angles \(\theta\), respectively. Letting \(b_{\theta}^{-}=\frac{1}{2}\left(P_{\theta}^{+}-P_{\theta}^{-}\right)\) be a random variable as described, we obtain a simpler linear equation: \[\alpha_{\theta}\,\Pi_{\theta}\left(M_{x}\right)+\beta_{\theta}\,\Pi_{\theta}\left(M_{y}\right)+\gamma_{\theta}\,\Pi_{\theta}\left(M_{z}\right)+\mathcal{N}(0,\frac{\sigma^{2}}{2})=b_{\theta}^{-} \tag{6}\] Note that taking the average of two independent random variables with the same mean and variance results in a new random variable where the mean stays the same, but the variance is reduced by half [38]. We can use maximum likelihood estimation to recover the three-dimensional magnetization from corrupted 2D signals. Specifically, for Gaussian noise, the negative log-likelihood function is, up to constants, the sum of the squared errors (or squared \(l_{2}\) distances) between the desired and measured signals: \[\min_{\mathbf{M}}\,\varepsilon(\mathbf{M})=\frac{1}{2}\sum_{\theta}\|\alpha_{\theta}\,\Pi_{\theta}\left(M_{x}\right)+\beta_{\theta}\,\Pi_{\theta}\left(M_{y}\right)+\gamma_{\theta}\,\Pi_{\theta}\left(M_{z}\right)-b_{\theta}^{-}\|^{2} \tag{7}\] We can always write the minimization problem in the form \(\varepsilon(\mathbf{M})=\frac{1}{2}\sum_{\theta}\|\Pi_{\theta}\left(\alpha_{\theta}M_{x}+\beta_{\theta}\,M_{y}+\gamma_{\theta}\,M_{z}\right)-b_{\theta}^{-}\|^{2}\) thanks to the linearity of the projection operator. For efficient implementation, the latter form is preferred over Eqn. 7. The maximum likelihood function will appear different for other types of noise; however, the well-known least-squares form can still handle other circumstances because of its simplicity and effectiveness. Eqn. 7 is our final form of the vector tomography formulation, and the remaining task is designing a numerical scheme to solve this minimization.

## RESIRE-V algorithm

We develop our algorithm based on the real space iterative technique. Noticing that the projection operator is linear, one can construct a matrix representation for each \(\Pi_{\theta}\). We assume that the projections have size \(n\times n\), and the sample gets reconstructed with thickness \(n\). In that case, each projection matrix \(\Pi_{\theta}\) has \(O(n^{3})\) non-zero elements. In the overdetermined case, Eqn. 7 can be solved using the normal equations. Otherwise, in the underdetermined case, we need to add a regularizer to prevent overfitting. When adding a damping term as a regularizer, we obtain an overdetermined system again and can solve it using the normal equations as before. In either case, storing the projection matrices is tremendously expensive since their size grows cubically with the projection size. Here, to save memory, we do not store the projection matrices but instead compute the forward projections at every iteration.
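The variance reduction in Eqn. 6 is easy to verify by simulation; in the sketch below, the numerical values of the magnetic and non-magnetic contributions are arbitrary illustrative choices:

```python
import numpy as np

rng   = np.random.default_rng(1)
sigma = 0.5
m     = 1.7                      # magnetic contribution to a projection pixel
o     = 3.2                      # non-magnetic contribution
N     = 200_000

# Left/right polarized measurements: the magnetic term flips sign (Eq. 5).
P_plus  = o + m + rng.normal(0, sigma, N)
P_minus = o - m + rng.normal(0, sigma, N)

b = 0.5*(P_plus - P_minus)       # Eq. (6): the non-magnetic term cancels
print(b.mean())                  # ~ m
print(b.var())                   # ~ sigma^2 / 2
```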
This procedure increases the number of computations; however, GPU parallel computing can reduce the computational time significantly. Our gradient descent algorithm incorporates two steps: forward projection and back projection. For the first step, we construct our 3D Radon transform from the 2D Radon transform (Fig. 1), which can be found elsewhere [39, 40]. The algorithm first divides the pixels of a 3D image into four sub-pixels and projects each sub-pixel individually. Specifically, at a tilt angle, we compute the corresponding coordinate of each pixel and project it onto the XY plane. The value of each sub-pixel is distributed proportionally to the four nearest neighbors, according to the distance between the projected location and the pixel centers. Supposing the pixel projection lands exactly on the center of a bin, that bin receives the entire value of the sub-pixel. If the pixel projection hits the border between four bins, the value is split evenly among these four bins. Next, we establish the transpose of the Radon transform for the back-projection step. This process is similar to the forward projection but in reverse order. According to the distance between the projected location and the pixel centers, the four nearest neighbors to a projection sub-pixel proportionally contribute their values to the sub-pixel. If the pixel projection hits the border between four bins, the pixel takes a quarter of the value of each of these four bins. Having specified the forward and back projections, we can now take the gradient of the error metric \(\varepsilon(\mathbf{M})\) in Eqn. 7 with respect to each magnetization component: \[\frac{\partial\varepsilon}{\partial M_{x}}=\sum_{\theta}\alpha_{\theta}\,\Pi_{\theta}^{T}\left(\alpha_{\theta}\,\Pi_{\theta}\left(M_{x}\right)+\beta_{\theta}\,\Pi_{\theta}\left(M_{y}\right)+\gamma_{\theta}\,\Pi_{\theta}\left(M_{z}\right)-b_{\theta}^{-}\right)=\sum_{\theta}\alpha_{\theta}\,\Pi_{\theta}^{T}\Big{(}\Pi_{\theta}\left(\alpha_{\theta}M_{x}+\beta_{\theta}\,M_{y}+\gamma_{\theta}\,M_{z}\right)-b_{\theta}^{-}\Big{)} \tag{8}\] where \(\Pi_{\theta}^{T}\) is the transpose operator of the Radon transform for Euler angles \(\theta\). As mentioned above, the second form of the gradient is used for the C++/CUDA implementation. Next, we show that the gradient is L-Lipschitz, so that the algorithm converges to the global minimum with an appropriate step size. Specifically, we want to find an \(L\) such that the following inequality holds: \[\big{\|}\nabla\varepsilon(\mathbf{M}_{1})-\nabla\varepsilon(\mathbf{M}_{2})\big{\|}\leq L\big{\|}\mathbf{M}_{1}-\mathbf{M}_{2}\big{\|}\quad\forall\,\mathbf{M}_{1},\,\mathbf{M}_{2} \tag{9}\] The Lipschitz constant is calculated as \(L=\sqrt{3}\,nN_{z}\), where \(n\) and \(N_{z}\) are the number of projections and the thickness in pixels of the reconstruction, respectively. Hence, we can choose the step size to be \(1/L\) for the convergence guarantee. Details of the proof can be found in the Supplementary, step-size analysis. The algorithm is finalized and described step by step in Algorithm 1 and Fig. 2. For efficient implementation, the gradient w.r.t. each component is accumulated as \(\frac{\partial\varepsilon}{\partial M_{x}}=\sum_{i}\frac{\partial\varepsilon_{i}}{\partial M_{x}}\). In addition, the step size is generalized to \(\frac{t}{\sqrt{3}\,nN_{z}}\), where \(t\approx 1\) is the normalized step size. By our analysis, \(t\) should be less than or equal to 1 to guarantee convergence.
The analysis uses triangle inequalities and takes into account the worst-case scenario. In practice, where conditions are usually more favorable, the algorithm can converge with values of \(t\) slightly larger than 1. This completes the step-size analysis; we now discuss conditions for vector tomography reconstruction.

### Analysis: conditions for vector tomography reconstruction

Scalar tomography, a well-posed problem, only requires one dataset (a single tilt axis) for reconstruction [42], assuming the number of measurements is sufficient. The Fourier slice theorem shows that a 2D image needs \(n\) sampling points in the Fourier domain for a unique reconstruction. This requirement implies that \(n^{2}\) projections corresponding to these sampling points are sufficient for the scalar tomography reconstruction. In practice, the actual number of measurements is much smaller than this theoretical value. For a 3D vector field, all three magnetization components need to be recovered, raising the question of the particular requirements on the measured data. In his discussion in the 1980s, Norton showed that the reconstruction of a divergence-free 2D vector field appeared to be unique [43]. Prince gave a more generalized discussion of the reconstruction of arbitrary vector fields in the 1990s. He demonstrated that, for reconstructing an arbitrary \(n\)-dimensional vector field, \(n\) tomographic projection datasets in which the probe is sensitive to \(n\) different directions of the vector field must be acquired [44, 45]. The idea of using more than one tilt rotation axis has been used successfully in scalar tomography to reduce missing wedge artifacts [46, 47]. That idea is also believed to be a key to solving the vector tomography problem. In our research, we use the Fourier slice theorem to derive specific experimental conditions for the reconstruction of arbitrary vector fields. The theorem states that the 2D Fourier transform of a 2D projection equals a 2D slice through the origin of the 3D Fourier transform of an object. The 2D slice is defined by the corresponding rotation angle. In the noise-free case, we apply the Fourier transform to both sides of Eqn. 6: \[\alpha_{\theta}\,\mathcal{F}\big{[}\Pi_{\theta}(M_{x})\big{]}+\beta_{\theta}\,\mathcal{F}\big{[}\Pi_{\theta}(M_{y})\big{]}+\gamma_{\theta}\,\mathcal{F}\big{[}\Pi_{\theta}(M_{z})\big{]}=\mathcal{F}\big{[}b_{\theta}^{-}\big{]} \tag{10}\] Applying the Fourier slice theorem, we have a linear constraint involving the Fourier transforms \(\hat{m}_{x}\), \(\hat{m}_{y}\) and \(\hat{m}_{z}\) of the three magnetization components \(M_{x}\), \(M_{y}\) and \(M_{z}\). This constraint applies to every Fourier point \(\vec{\xi}\) on a 2D Fourier slice through the origin. \[\alpha_{\theta}\,\hat{m}_{x}(\vec{\xi})+\beta_{\theta}\,\hat{m}_{y}(\vec{\xi})+\gamma_{\theta}\,\hat{m}_{z}(\vec{\xi})=\hat{b}_{\theta}^{-}(\vec{\xi})\quad\text{where}\quad\langle\vec{\xi},\vec{n}_{\theta}\rangle=0,\,\text{and}\,\,\vec{n}_{\theta}=[\alpha_{\theta},\,\beta_{\theta},\,\gamma_{\theta}] \tag{11}\]

Figure 1: Illustration of the Radon transform in the 2D case from Matlab [41]: The algorithm first divides image pixels into four sub-pixels and projects them onto a 2D plane separately. The value of each sub-pixel is distributed proportionally to the two nearest neighbors, according to the distance between the projected location and the pixel centers. The transpose of the Radon transform follows the same idea in reverse order.
According to the distance between the projected location and the pixel centers, the two nearest neighbors to a projection sub-pixel proportionally contribute their values to the sub-pixel.

**Input**: a set of \(n\) projections with left and right polarization \(\{P_{\theta_{i}}^{+}\}_{i=1}^{n}\) and \(\{P_{\theta_{i}}^{-}\}_{i=1}^{n}\) and their corresponding tilt angles \(\{\theta_{i}\}_{i=1}^{n}\), the total number of iterations \(K\), the step size \(t\approx 1\)

**Preprocessing**: 1. Compute the mean images from the left and right polarization projections: \(b_{\theta_{i}}^{+}=\frac{1}{2}(P_{\theta_{i}}^{+}+P_{\theta_{i}}^{-})\). 2. Reconstruct the non-magnetic signal \(O\) from the mean images and use a threshold to obtain its support. 3. Compute the difference images from the left and right polarization projections: \(b_{\theta_{i}}^{-}=\frac{1}{2}(P_{\theta_{i}}^{+}-P_{\theta_{i}}^{-})\)

**Initialize**: \(\mathbf{M}^{0}\).

**for**\(k=0,\dots,K-1\)**do**

**for**\(i=1,\dots,n\)**do**

Compute "forward projections" \(\Pi_{\theta_{i}}(\alpha_{\theta_{i}}M_{x}^{k}+\beta_{\theta_{i}}M_{y}^{k}+\gamma_{\theta_{i}}M_{z}^{k})\) using the Radon transform.

Compute the residual

\[Re_{\theta_{i}}(\mathbf{M}^{k}):=\Pi_{\theta_{i}}(\alpha_{\theta_{i}}M_{x}^{k}+\beta_{\theta_{i}}M_{y}^{k}+\gamma_{\theta_{i}}M_{z}^{k})-b_{\theta_{i}}^{-}\]

Compute the "back projection" for each projection difference

\[\frac{\partial\epsilon_{i}}{\partial M_{x}}(\mathbf{M}^{k}):=\alpha_{\theta_{i}}\Pi_{\theta_{i}}^{T}\,Re_{\theta_{i}}(\mathbf{M}^{k}),\quad\frac{\partial\epsilon_{i}}{\partial M_{y}}(\mathbf{M}^{k}):=\beta_{\theta_{i}}\Pi_{\theta_{i}}^{T}\,Re_{\theta_{i}}(\mathbf{M}^{k}),\quad\frac{\partial\epsilon_{i}}{\partial M_{z}}(\mathbf{M}^{k}):=\gamma_{\theta_{i}}\Pi_{\theta_{i}}^{T}\,Re_{\theta_{i}}(\mathbf{M}^{k})\]

**end for**

**Update \(\mathbf{M}^{k+1}\)**:

\[M_{x}^{k+1}=M_{x}^{k}-\frac{t}{\sqrt{3}nN_{z}}\frac{\partial\epsilon}{\partial M_{x}}(\mathbf{M}^{k})\quad\text{where}\quad\frac{\partial\epsilon}{\partial M_{x}}(\mathbf{M}^{k}):=\sum_{i=1}^{n}\frac{\partial\epsilon_{i}}{\partial M_{x}}(\mathbf{M}^{k})\]

\[M_{y}^{k+1}=M_{y}^{k}-\frac{t}{\sqrt{3}nN_{z}}\frac{\partial\epsilon}{\partial M_{y}}(\mathbf{M}^{k})\quad\text{where}\quad\frac{\partial\epsilon}{\partial M_{y}}(\mathbf{M}^{k}):=\sum_{i=1}^{n}\frac{\partial\epsilon_{i}}{\partial M_{y}}(\mathbf{M}^{k})\]

\[M_{z}^{k+1}=M_{z}^{k}-\frac{t}{\sqrt{3}nN_{z}}\frac{\partial\epsilon}{\partial M_{z}}(\mathbf{M}^{k})\quad\text{where}\quad\frac{\partial\epsilon}{\partial M_{z}}(\mathbf{M}^{k}):=\sum_{i=1}^{n}\frac{\partial\epsilon_{i}}{\partial M_{z}}(\mathbf{M}^{k})\]

Apply the support constraint and other regularizers if applicable.

**end for**

**Output**: \(\mathbf{M}^{K}\)

**Algorithm 1** RESIRE-V

Figure 2: RESIRE-V diagram: Inputs are the differences between the left and right polarization projections and the support from the scalar reconstruction. The algorithm uses a for loop to refine the magnetization vector field \(\mathbf{M}\). At each iteration, it calculates the forward projections and computes their differences with the measured ones. The residuals (or differences) are back-projected to yield gradients. The algorithm uses these gradients to update the magnetization and applies the support constraint. The step size \(\frac{t}{\sqrt{3}nN_{z}}\) is replaced by \(s\) for simplicity.
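Algorithm 1 can be rendered in a few dozen lines of NumPy. The following is a minimal sketch under simplifying assumptions — a cubic \(n^{3}\) volume, Euler angles given as \((\phi,\theta)\) pairs with \(\psi=0\), and axis/sign conventions that may need adjusting to a particular experimental geometry — and is not the authors' C++/CUDA implementation:

```python
import numpy as np

def rotation(phi_deg, theta_deg):
    # Z_phi Y_theta convention: R = Rz(phi) @ Ry(theta); last column is n_theta.
    p, t = np.deg2rad(phi_deg), np.deg2rad(theta_deg)
    Rz = np.array([[np.cos(p), -np.sin(p), 0.0],
                   [np.sin(p),  np.cos(p), 0.0],
                   [0.0,        0.0,       1.0]])
    Ry = np.array([[ np.cos(t), 0.0, np.sin(t)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(t), 0.0, np.cos(t)]])
    return Rz @ Ry

def _detector_xy(n, phi, theta):
    # Lab-frame (x, y) landing position of every voxel of the rotated sample.
    R, c = rotation(phi, theta), (n - 1) / 2.0
    r = np.indices((n, n, n)).reshape(3, -1).T - c
    return (r @ R.T)[:, :2] + c

def _bilinear(xy, n):
    # Bilinear-splat weights over the 4 nearest detector bins (cf. Fig. 1).
    x0, y0 = np.floor(xy[:, 0]).astype(int), np.floor(xy[:, 1]).astype(int)
    fx, fy = xy[:, 0] - x0, xy[:, 1] - y0
    for dx, dy, w in ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                      (0, 1, (1 - fx) * fy),       (1, 1, fx * fy)):
        xi, yi = x0 + dx, y0 + dy
        ok = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
        yield xi[ok], yi[ok], w[ok], ok

def project(vol, phi, theta):
    # Forward operator Pi_theta: splat every voxel onto the detector grid.
    n = vol.shape[0]
    proj, v = np.zeros((n, n)), vol.ravel()
    for xi, yi, w, ok in _bilinear(_detector_xy(n, phi, theta), n):
        np.add.at(proj, (xi, yi), w * v[ok])
    return proj

def backproject(img, phi, theta):
    # Adjoint Pi_theta^T: gather detector values back with the same weights.
    n = img.shape[0]
    out = np.zeros(n ** 3)
    for xi, yi, w, ok in _bilinear(_detector_xy(n, phi, theta), n):
        out[ok] += w * img[xi, yi]
    return out.reshape(n, n, n)

def resire_v(b_minus, angles, n, K=100, t=1.0, support=None):
    # Algorithm 1: gradient descent on Eqn. 7, step size t / (sqrt(3) n N_z).
    M = np.zeros((3, n, n, n))                    # (Mx, My, Mz)
    step = t / (np.sqrt(3) * len(angles) * n)     # here N_z = n
    for _ in range(K):
        grad = np.zeros_like(M)
        for (phi, theta), b in zip(angles, b_minus):
            a = rotation(phi, theta)[:, 2]        # (alpha, beta, gamma)
            res = project(np.tensordot(a, M, axes=1), phi, theta) - b
            grad += a[:, None, None, None] * backproject(res, phi, theta)
        M -= step * grad
        if support is not None:                   # optional support constraint
            M *= support[None]
    return M
```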
The extra constraint \(\langle\vec{\xi},\vec{n}_{\theta}\rangle=0\) expresses the fact that a point belongs to a plane through the origin if and only if the inner product between \(\vec{\xi}\) and the normal vector of the plane is zero. If sufficient measurements are provided, one can sample all Fourier points in the frequency domain and extend Eqn. 11 to every point \(\vec{\xi}\) of the 3D Fourier domain. In order to separate \(\hat{m}_{x}(\vec{\xi})\), \(\hat{m}_{y}(\vec{\xi})\) and \(\hat{m}_{z}(\vec{\xi})\) for a given 3D frequency point \(\vec{\xi}\), we need to find three 2D Fourier slices whose normal vectors \(\vec{n}_{\theta}\) form a linearly independent system in \(\mathbb{R}^{3}\) and that go through the origin and contain \(\vec{\xi}\). This is impossible, since the set of normal vectors \(\vec{n}_{\theta}\) that satisfies the constraint \(\langle\vec{\xi},\vec{n}_{\theta}\rangle=0\) lies in a linear subspace of dimension two. We give an example for in-plane rotations, where the orientation is given by \(Z_{\phi}Y_{\theta}\). Recalling that the normal vector corresponding to an in-plane rotation has the form \(\vec{n}_{\theta}=(\sin\theta\,\cos\phi,\,\sin\theta\,\sin\phi,\,\cos\theta)\), we can find infinitely many 2D slices that contain the point \(\vec{\xi}=(1,\,1,\,1)\). For example, consider three projections with the corresponding Euler angles \((\phi_{1},\,\theta_{1})=(0^{o},\,-45^{o})\), \((\phi_{2},\,\theta_{2})=(120^{o},\,-69.90^{o})\) and \((\phi_{3},\,\theta_{3})=(-120^{o},\,36.21^{o})\): \[\left\{\begin{array}{l}\sin\theta_{1}\,\cos\phi_{1}\,\hat{m}_{x}(\vec{\xi})+\sin\theta_{1}\,\sin\phi_{1}\,\hat{m}_{y}(\vec{\xi})+\cos\theta_{1}\,\hat{m}_{z}(\vec{\xi})=\hat{b}^{-}_{\theta_{1}}(\vec{\xi})\\ \sin\theta_{2}\,\cos\phi_{2}\,\hat{m}_{x}(\vec{\xi})+\sin\theta_{2}\,\sin\phi_{2}\,\hat{m}_{y}(\vec{\xi})+\cos\theta_{2}\,\hat{m}_{z}(\vec{\xi})=\hat{b}^{-}_{\theta_{2}}(\vec{\xi})\\ \sin\theta_{3}\,\cos\phi_{3}\,\hat{m}_{x}(\vec{\xi})+\sin\theta_{3}\,\sin\phi_{3}\,\hat{m}_{y}(\vec{\xi})+\cos\theta_{3}\,\hat{m}_{z}(\vec{\xi})=\hat{b}^{-}_{\theta_{3}}(\vec{\xi})\end{array}\right. \tag{12}\] One can check that the corresponding normal vectors \((-1/\sqrt{2},\,0,\,1/\sqrt{2})\), (0.4695, -0.8133, 0.3437), and (-0.2953, -0.5116, 0.8069) are linearly dependent, with rank two. Consequently, Eqn. 12 does not have a unique solution. This verifies that in-plane rotations are not sufficient for the reconstruction of the magnetization \(\mathbf{M}\). This analysis differs from Norton's [43] and Phatak's [18] theoretical developments, which analyze the reconstruction of the magnetic vector field instead. In that case, the authors can find a linearly independent system of three equations to separate the frequency signals of the magnetic vector field \(\mathbf{B}\). While the first two constraints are obtained from rotations, the last constraint comes from Gauss's law for magnetism, \(\nabla\cdot\mathbf{B}=0\) (since \(\mathbf{B}\) is divergence-free). The magnetization vector field is not divergence-free, but it has another important property: the magnetization can only exist inside a magnetic material. Hence, one can utilize a support (defined as the 3D boundary of the magnetic material) as the necessary and complementary constraint for the completeness of a magnetization reconstruction algorithm.
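The rank deficiency in the worked example above is easy to verify numerically; in this short sketch the rank tolerance absorbs the rounding of the quoted angles:

```python
import numpy as np

def n_vec(phi_deg, theta_deg):
    p, t = np.deg2rad(phi_deg), np.deg2rad(theta_deg)
    return [np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)]

# The three projections quoted above; each normal vector is orthogonal to
# xi = (1, 1, 1), so the 3x3 system of Eqn. 12 cannot have full rank.
A = np.array([n_vec(0, -45), n_vec(120, -69.90), n_vec(-120, 36.21)])
print(np.round(A, 4))                      # matches the vectors in the text
print(np.round(A @ np.ones(3), 4))         # ~0: all rows lie in one plane
print(np.linalg.matrix_rank(A, tol=1e-3))  # -> 2: no unique solution
```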
Furthermore, in the micromagnetic case with no external dynamics at the boundary, we can add the boundary condition that the gradient of the magnetization is parallel to the surface [48, 49, 50], i.e., \(\frac{\partial\mathbf{M}}{\partial\mathbf{n}}=0\). In practice, since the support and boundaries are difficult to compute exactly, one should not enforce the constraint rigidly but rather relax it into a regularizer. We add this regularizer to the minimization (7): \[\min_{\mathbf{M}}\,\,\varepsilon(\mathbf{M})=\frac{1}{2}\sum_{\theta}\left\|\alpha_{\theta}\,\Pi_{\theta}(M_{x})+\beta_{\theta}\,\Pi_{\theta}(M_{y})+\gamma_{\theta}\,\Pi_{\theta}(M_{z})-b^{-}_{\theta}\right\|^{2}+\frac{\lambda}{2}\|\nabla\mathbf{M}\cdot\mathbf{n}_{\partial\Omega}\|^{2}_{\partial\Omega} \tag{13}\] For the regularizer, \(\mathbf{n}_{\partial\Omega}=(n_{1},n_{2},n_{3})\) is the normal vector to the boundary surface \(\partial\Omega\) of the magnetic sample, and \(\lambda\) is the regularization parameter. The regularizer term \(\|\nabla\mathbf{M}\cdot\mathbf{n}_{\partial\Omega}\|^{2}_{\partial\Omega}\) acts only on the boundary and should not affect the magnetization within the magnetic structure. Expanding, we can write the regularizer explicitly as \(\|\nabla\mathbf{M}\cdot\mathbf{n}_{\partial\Omega}\|^{2}_{\partial\Omega}=\|n_{1}\frac{\partial M_{x}}{\partial x}+n_{2}\frac{\partial M_{y}}{\partial y}+n_{3}\frac{\partial M_{z}}{\partial z}\|^{2}_{\partial\Omega}\). \(\lambda\) is tunable and should be small for a non-exact support. We can even drop this regularizer (set \(\lambda=0\)) when the support cannot be computed accurately. In contrast, we can choose a large \(\lambda\) for a stronger effect if the exact support is given. The final gradient is computed with the extra term as below: \[\frac{\partial\varepsilon}{\partial M_{x}}=\sum_{\theta}\alpha_{\theta}\,\Pi_{\theta}^{T}\Big{(}\Pi_{\theta}\big{(}\alpha_{\theta}M_{x}+\beta_{\theta}\,M_{y}+\gamma_{\theta}\,M_{z}\big{)}-b^{-}_{\theta}\Big{)}+\lambda\,\,n_{1}\,\,\frac{\partial^{T}}{\partial x}\Big{(}n_{1}\frac{\partial M_{x}}{\partial x}+n_{2}\frac{\partial M_{y}}{\partial y}+n_{3}\frac{\partial M_{z}}{\partial z}\Big{)}_{\partial\Omega} \tag{14}\] Next, we discuss the robustness of the reconstruction of each magnetization component \(M_{x}\), \(M_{y}\), and \(M_{z}\) with respect to the in-plane rotation angles \(\phi\). The linear constraint for in-plane rotations in Eqn. 12 reveals that the x and y components are coupled through the factors \(\sin\theta\,\cos\phi\) and \(\sin\theta\,\sin\phi\), while the factor for z is simply \(\cos\theta\). The x and y parts are therefore coupled to a greater degree than the z component. As a result, the z component decouples more easily and yields a higher-quality reconstruction than the other two. Assuming two in-plane rotations \(\phi_{1}\) and \(\phi_{2}\) are chosen, the two angles should be spaced evenly over half of the unit circle to improve the robustness of the reconstruction; \(\phi_{1}=0^{o}\) and \(\phi_{2}=90^{o}\) is a simple option. Side rotations could improve the robustness of the x and y components, but that approach is experimentally infeasible. The summary of our analysis is shown below: 1. In-plane rotations are necessary but not sufficient to decouple the Fourier coefficients of the three magnetization components. 2.
Other constraints, such as support and boundary constraints, and regularizers should be invoked, when possible, for highly accurate reconstruction. 3. The z component is reconstructed with higher quality than the x and y components in in-plane rotation systems. With the help of a support constraint, we will show that highly accurate vector tomography reconstructions can be obtained numerically with in-plane rotations.

## Vector tomography reconstruction of simulated data

In this simulation, the sample is a meta-lattice with a size of \(100\times 100\times 100\) pixels. The signals of the magnetization make up around \(1.65\%\) of the total signal. Two tilt series from two in-plane rotations, \(\phi=0^{o}\) and \(90^{o}\), are examined. For each tilt series, \(45\) projections of each left and right polarization \(P_{\theta}^{+}\) and \(P_{\theta}^{-}\) are generated in the range of \([-66^{o},66^{o}]\) with an increment of \(3\) degrees. In total, we generate \(180\) projections of size \(100\times 100\) pixels. To make the simulation realistic, we add Poisson noise to the projections by selecting a flux of \(4e8\) photons. This flux yields an SNR of \(200\) and less than \(1\%\) noise. Reconstructing the non-magnetic part is not the focus of this research. However, we assume that the support of the non-magnetic part is given, since it plays an essential role in the reconstruction of the magnetization \(\mathbf{M}\). Next, we take the left and right projection difference \(b_{\theta}^{-}=\frac{1}{2}(P_{\theta}^{+}-P_{\theta}^{-})\). Since the magnetic part only makes up a fraction of the total signal, its SNR is much smaller than that of the non-magnetic part. The SNR of the projection difference is approximately \(1.65\%\times 200=3.3\), which is quite small. The high noise level in the projection differences makes the reconstruction of the magnetization \(\mathbf{M}\) less robust than the scalar one. Assuming the noise level in the non-magnetic part stays the same, the robustness of the reconstruction will decline as the magnetization signals decrease relative to the non-magnetic signals. We now use our algorithm to reconstruct the three magnetization components \(M_{x}\), \(M_{y}\), and \(M_{z}\). The model and result are shown in Fig. 3. The support constraint is enforced; that is, the magnetization field only appears within the magnetic material. In addition, since we use two tilt series at \(\phi=0^{o}\) and \(\phi=90^{o}\), the missing wedge artifact does not significantly affect the reconstruction. Fig. 3d-f show the \(M_{x}\), \(M_{y}\), and \(M_{z}\) components in the central slice along the z axis, which are in good agreement with the model (Fig. 3a-c); the quality of the reconstruction is comparable in all directions. To quantify the vector tomography reconstruction, we calculate the Fourier shell correlation of the three components between the model and the reconstruction (Fig. 3g). The large correlation coefficients indicate the good quality of the vector tomography reconstruction. Additionally, we observe that the reconstructed \(M_{z}\) has higher quality than \(M_{x}\) and \(M_{y}\), which is consistent with our analysis. Fig. 3h-i show the 3D magnetization vector field of the reconstruction and the central slice along the z axis, respectively, which agree with the model (Fig. 3l-m). We also plot two topological defects, one with a positive topological charge (Fig. 3j) and the other with a negative charge (Fig. 3k), both of which are in accordance with the model (Fig. 3n-o).
All these analyses confirm that RESIRE-V can reconstruct a high-quality 3D vector field from multiple tilt series, each with a limited number of projections and a missing wedge.

## Vector tomography reconstruction of experimental data of a ferromagnetic meta-lattice

The ferromagnetic meta-lattice consists of \(60\) nm silica nanoparticles forming a face-centred cubic structure infiltrated with nickel. The vector tomography experiment was conducted at the COSMIC beamline at the Advanced Light Source, Lawrence Berkeley National Lab. Circularly polarized x-rays of left and right helicity were used to achieve differential magnetic contrast based on x-ray magnetic circular dichroism [26, 51]. The x-ray energy was set to \(856\) eV, slightly above the nickel \(L_{3}\) edge. Three in-plane rotation angles (\(0^{o}\), \(120^{o}\) and \(240^{o}\)) were chosen, and the tilt range was from \(-62^{o}\) to \(+61^{o}\) for each in-plane rotation angle. At each tilt angle, 2D diffraction patterns were reconstructed using the regularized ptychographic iterative engine, producing two projections with left and right polarization at each tilt angle (Fig. 4a-b). The scalar tomography reconstruction was performed from the three tilt series by summing each pair of oppositely polarized projections, from which a 3D support was obtained to separate the magnetic material from the non-magnetic region. For the vector tomography reconstruction, the difference of the left- and right-circularly polarized projections was calculated (Fig. 4c), producing magnetic-contrast projections for three independent tilt series. Using RESIRE-V with the support, we reconstructed the 3D magnetization vector field. Fig. 4d shows the 3D vector field of the magnified square region marked with dotted lines in Fig. 4a-c, where the colors represent the different directions of the vectors. A thinner slice of the magnified region and a topological defect with a positive charge are shown in Fig. 4e-f, respectively. A more detailed analysis of the 3D magnetization vector field and the topological defects in the ferromagnetic meta-lattice can be found elsewhere [34].

Figure 3: Vector tomography reconstruction of simulated data. (**a-f**) Three magnetic components at the central slice in the z direction of the model (**a-c**) and vector tomography reconstruction (**d-f**) from the simulated data, where the normalized cross correlations are 94.1%, 93.8% and 99.1% for \(M_{x}\), \(M_{y}\) and \(M_{z}\), respectively. (**g**) Fourier shell correlation of the three magnetic components, also confirming that the z component has higher quality than the x and y components. (**h-k**) 3D magnetization vector field of the model, including the overall vector field (**h**), the central slice along the z direction (**i**), and two topological defects with positive charge (**j**) and negative charge (**k**), where the colors represent the different directions of the vectors. (**l-o**) Reconstructed 3D magnetization vector field, including the overall vector field (**l**), the central slice along the z direction (**m**), and two topological defects with positive charge (**n**) and negative charge (**o**), which are in good agreement with (**h-k**).

## Conclusion

We present the mathematical formulation and implementation of RESIRE-V, an iterative algorithm for the 3D reconstruction of vector fields.
RESIRE-V requires the acquisition of multiple tilt series of projections, and the algorithm iterates between these projections and a 3D structure using a forward and a backward step. The forward and backward steps consist of the Radon transform and its transpose (a linear transformation), respectively. Our analysis indicates that incorporating a 3D support to separate the magnetic region from the non-magnetic region can help RESIRE-V achieve accurate and robust reconstruction of the 3D vector field. To validate RESIRE-V, we perform a numerical simulation of the 3D magnetization vector field in a meta-lattice. Using only two tilt series and a support, we reconstruct the 3D vector field with high accuracy. We also observe that the reconstructed z component has higher quality than the x and y components, which is consistent with our mathematical analysis. Finally, we apply RESIRE-V to an experimental data set of a ferromagnetic meta-lattice, consisting of three tilt series with different in-plane rotation angles. Each tilt series has two sets of projections with left and right polarization. By using a support constraint, we reconstruct the 3D magnetization vector field inside the ferromagnetic meta-lattice, showing topological defects with positive and negative charges. We expect that RESIRE-V can become a general vector tomography method for the 3D reconstruction of a wide range of vector fields.

## Supplementary

### Step-size analysis

After the gradient is established, we need to perform a step-size analysis, exhibiting and proving a step size that works. The iterations are then guaranteed to converge to a solution as desired. To proceed, we approximate the Lipschitz constant \(L\) of the gradient \(\nabla\varepsilon\), i.e., find \(L\) such that the following inequality holds: \[\left\|\nabla\varepsilon(\mathbf{M}_{1})-\nabla\varepsilon(\mathbf{M}_{2})\right\|\leq L\big{\|}\mathbf{M}_{1}-\mathbf{M}_{2}\big{\|}\quad\forall\,\mathbf{M}_{1},\,\mathbf{M}_{2} \tag{15}\] The step size can then be chosen as \(1/L\) to guarantee convergence. First, let us decompose the error metric \(\varepsilon(\mathbf{M})\) into a sum of \(n\) sub-error metrics, corresponding to the \(n\) projections, i.e., \(\varepsilon(\mathbf{M})=\sum_{\theta}\varepsilon_{\theta}(\mathbf{M})\) where \[\varepsilon_{\theta}(\mathbf{M})=\frac{1}{2}\left\|\alpha_{\theta}\,\Pi_{\theta}(M_{x})+\beta_{\theta}\,\Pi_{\theta}(M_{y})+\gamma_{\theta}\,\Pi_{\theta}(M_{z})-b_{\theta}^{-}\right\|^{2} \tag{16}\] It suffices to find the Lipschitz constant \(L_{\theta}\) of the sub-error metric \(\varepsilon_{\theta}(\mathbf{M})\); the Lipschitz constant \(L\) of the sum \(\varepsilon(\mathbf{M})\) then follows via algebraic manipulation and triangle inequalities once \(L_{\theta}\) is obtained.

Figure 4: 3D reconstruction of the magnetization vector field in a ferromagnetic meta-lattice. (**a-b**) Representative projections with left (**a**) and right (**b**) polarization at \(0^{o}\). (**c**) The magnetic contrast projection at \(0^{o}\), which is the difference of the left and right polarization projections. (**d**) 3D magnetization vector field in the square with dashed lines in (**a-c**), where the colors represent the different directions of the vectors. (**e**) A thin layer of (**d**). (**f**) A representative topological defect with positive charge. Scale bar, 200 nm.

Let \(N_{x}\times N_{y}\times N_{z}\) be the size of each reconstructed magnetic component \(M_{x}\), \(M_{y}\) and \(M_{z}\).
We further assume that \(M_{x}\), \(M_{y}\), \(M_{z}\) and \(\mathbf{M}\) are vectorized into 1D vectors and that \(\mathbf{M}\) stacks \(M_{x}\), \(M_{y}\) and \(M_{z}\) together in this respective order. The purpose of this assumption is matrix analysis only. Hence we can decompose \(\left\|\nabla\varepsilon_{\theta}(\mathbf{M})\right\|^{2}\) into a sum as follows: \[\left\|\nabla\varepsilon_{\theta}(\mathbf{M})\right\|^{2}=\left\|\frac{\partial\varepsilon_{\theta}}{\partial M_{x}}(\mathbf{M})\right\|^{2}+\left\|\frac{\partial\varepsilon_{\theta}}{\partial M_{y}}(\mathbf{M})\right\|^{2}+\left\|\frac{\partial\varepsilon_{\theta}}{\partial M_{z}}(\mathbf{M})\right\|^{2} \tag{17}\] To obtain an upper bound on \(\left\|\frac{\partial\varepsilon_{\theta}}{\partial M_{x}}(\mathbf{M})\right\|^{2}\) (applied to a difference \(\mathbf{M}=\mathbf{M}_{1}-\mathbf{M}_{2}\), for which the constant term \(b_{\theta}^{-}\) cancels), we use a sequence of triangle (Cauchy-Schwarz) inequalities: \[\begin{split}\left\|\frac{\partial\varepsilon_{\theta}}{\partial M_{x}}(\mathbf{M})\right\|^{2}&\leq\left\|\alpha_{\theta}\left(\alpha_{\theta}\Pi_{\theta}^{T}\Pi_{\theta}(M_{x})+\beta_{\theta}\Pi_{\theta}^{T}\Pi_{\theta}(M_{y})+\gamma_{\theta}\Pi_{\theta}^{T}\Pi_{\theta}(M_{z})\right)\right\|^{2}\\ &\leq 3\,\alpha_{\theta}^{2}\left\|\Pi_{\theta}^{T}\Pi_{\theta}\right\|^{2}\left(\alpha_{\theta}^{2}\left\|M_{x}\right\|^{2}+\beta_{\theta}^{2}\left\|M_{y}\right\|^{2}+\gamma_{\theta}^{2}\left\|M_{z}\right\|^{2}\right)\\ &\leq 3\,\alpha_{\theta}^{2}\,N_{z}^{2}\max\{\alpha_{\theta}^{2},\beta_{\theta}^{2},\gamma_{\theta}^{2}\}\left\|\mathbf{M}\right\|^{2}\end{split} \tag{18}\] Here, we use the result \(\left\|\Pi_{\theta}^{T}\Pi_{\theta}\right\|\leq N_{z}\) from elsewhere [16]. The Lipschitz constant \(L_{\theta}\) is obtained by summing all three inequalities and taking the square root: \[\left\|\nabla\varepsilon_{\theta}(\mathbf{M}_{1})-\nabla\varepsilon_{\theta}(\mathbf{M}_{2})\right\|\leq\sqrt{3}\,N_{z}\,\sqrt{\alpha_{\theta}^{2}+\beta_{\theta}^{2}+\gamma_{\theta}^{2}}\,\max\{\left|\alpha_{\theta}\right|,\left|\beta_{\theta}\right|,\left|\gamma_{\theta}\right|\}\left\|\mathbf{M}_{1}-\mathbf{M}_{2}\right\| \tag{19}\] Using the fact that \(\alpha_{\theta}^{2}+\beta_{\theta}^{2}+\gamma_{\theta}^{2}=1\), we can further simplify \(L_{\theta}=\sqrt{3}\,N_{z}\,\max\{\left|\alpha_{\theta}\right|,\left|\beta_{\theta}\right|,\left|\gamma_{\theta}\right|\}\). Since \(\left|\alpha_{\theta}\right|,\left|\beta_{\theta}\right|,\left|\gamma_{\theta}\right|\leq 1\), we have \(L_{\theta}\leq\sqrt{3}\,N_{z}\). Summing over the \(n\) projections, we finally obtain the approximation of the Lipschitz constant \(L=\sqrt{3}\,n\,N_{z}\), and the proof is done. One can choose the step size \(t=1/L\) to maximize the convergence speed. In practice, \(t\) can be slightly larger than \(1/L\) and the algorithm still converges.
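As a numerical sanity check of the bound \(\|\Pi_{\theta}^{T}\Pi_{\theta}\|\leq N_{z}\) used above, one can estimate the operator norm by power iteration; this sketch assumes the `project`/`backproject` functions from the earlier NumPy sketch are in scope:

```python
import numpy as np

def op_norm(n, phi, theta, iters=50, seed=0):
    # Power iteration for the largest eigenvalue of Pi^T Pi; the estimate
    # should not exceed N_z (= n for a cubic volume), as claimed above.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal((n, n, n))
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = backproject(project(v, phi, theta), phi, theta)
        lam = np.linalg.norm(w)
        v = w / lam
    return lam

n = 32
print(op_norm(n, 0.0, 30.0), "<=", n)
```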
2303.04249
Where We Are and What We're Looking At: Query Based Worldwide Image Geo-localization Using Hierarchies and Scenes
Determining the exact latitude and longitude that a photo was taken is a useful and widely applicable task, yet it remains exceptionally difficult despite the accelerated progress of other computer vision tasks. Most previous approaches have opted to learn a single representation of query images, which are then classified at different levels of geographic granularity. These approaches fail to exploit the different visual cues that give context to different hierarchies, such as the country, state, and city level. To this end, we introduce an end-to-end transformer-based architecture that exploits the relationship between different geographic levels (which we refer to as hierarchies) and the corresponding visual scene information in an image through hierarchical cross-attention. We achieve this by learning a query for each geographic hierarchy and scene type. Furthermore, we learn a separate representation for different environmental scenes, as different scenes in the same location are often defined by completely different visual features. We achieve state of the art street level accuracy on 4 standard geo-localization datasets : Im2GPS, Im2GPS3k, YFCC4k, and YFCC26k, as well as qualitatively demonstrate how our method learns different representations for different visual hierarchies and scenes, which has not been demonstrated in the previous methods. These previous testing datasets mostly consist of iconic landmarks or images taken from social media, which makes them either a memorization task, or biased towards certain places. To address this issue we introduce a much harder testing dataset, Google-World-Streets-15k, comprised of images taken from Google Streetview covering the whole planet and present state of the art results. Our code will be made available in the camera-ready version.
Brandon Clark, Alec Kerrigan, Parth Parag Kulkarni, Vicente Vivanco Cepeda, Mubarak Shah
2023-03-07T21:47:58Z
http://arxiv.org/abs/2303.04249v1
# Where We Are and What We're Looking At: Query Based Worldwide Image Geo-localization Using Hierarchies and Scenes

###### Abstract

Determining the exact latitude and longitude that a photo was taken is a useful and widely applicable task, yet it remains exceptionally difficult despite the accelerated progress of other computer vision tasks. Most previous approaches have opted to learn a single representation of query images, which are then classified at different levels of geographic granularity. These approaches fail to exploit the different visual cues that give context to different hierarchies, such as the country, state, and city level. To this end, we introduce an end-to-end transformer-based architecture that exploits the relationship between different geographic levels (which we refer to as hierarchies) and the corresponding visual scene information in an image through hierarchical cross-attention. We achieve this by learning a query for each geographic hierarchy and scene type. Furthermore, we learn a separate representation for different environmental scenes, as different scenes in the same location are often defined by completely different visual features. We achieve state of the art street level accuracy on 4 standard geo-localization datasets: Im2GPS, Im2GPS3k, YFCC4k, and YFCC26k, as well as qualitatively demonstrate how our method learns different representations for different visual hierarchies and scenes, which has not been demonstrated in the previous methods. These previous testing datasets mostly consist of iconic landmarks or images taken from social media, which makes them either a memorization task, or biased towards certain places. To address this issue we introduce a much harder testing dataset, Google-World-Streets-15k, comprised of images taken from Google Streetview covering the whole planet and present state of the art results. Our code will be made available in the camera-ready version.

## 1 Introduction

Image geo-localization is the task of determining the GPS coordinates of where a photo was taken as precisely as possible. For certain locations, this may be an easy task, as most cities will have noticeable buildings, landmarks, or statues that give away their location. For instance, given an image of the Eiffel Tower one could easily assume it was taken somewhere in Paris. Noticing some of the finer features, like the size of the tower in the image and other buildings that might be visible, a prediction within a few meters could be fairly easy. However, given an image from a small town outside of Paris, it may be very hard to predict its location. Certain trees or a building's architecture may indicate the image is in France, but localizing finer than that can pose a serious challenge. Adding in different times of day, varying weather conditions, and different views of the same location makes this problem even more complex, as two images from the same location could look wildly different. Many works have explored solutions to this problem, with nearly all focusing on the retrieval task, where query images are matched against a gallery of geo-tagged images to retrieve matching geo-tagged images [16, 19, 20, 23, 27, 28]. There are two variations of the retrieval approach to this problem: same-view and cross-view. In same-view, both the query and gallery images are taken at ground level. However, in cross-view, the query images are ground level while the gallery images are from an aerial view, either by satellite or drone.
This creates a challenging task, as images from the exact same location look very different from one another. Regardless of same-view or cross-view, the evaluation of the retrieval task is expensive, as features need to be extracted and compared for every possible match with geo-tagged gallery images, making global scale geo-localization costly if not infeasible. If, instead, the problem is approached as a classification task, it's possible to localize on the global scale given enough training data [9, 13, 14, 18, 24, 25]. These approaches segment the Earth into Google's S2 cells1, which are assigned GPS locations and serve as classes, speeding up evaluation. Most previous classification-based visual geo-localization approaches use the same strategy as any other classification task: using an image backbone (either a Convolutional Neural Network or a Vision Transformer [2]), they learn a set of image features and output a probability distribution for each possible location (or class) using an MLP. In more recent works [13, 14], using multiple sets of classes that represent different global scales, as well as utilizing information about the scene characteristics of the image, has been shown to improve results. These approaches produce one feature vector for an image and presume that it is good enough to localize at every geographic level. However, that is not how a human would reason about finding out their location. If a person had no idea where they were, they would likely search for visual cues for a broad location (country, state) before considering finer areas. Thus, a human would look for a different set of features for each geographic level they want to predict.

Footnote 1: [https://code.google.com/archive/p/s2-geometry-library/](https://code.google.com/archive/p/s2-geometry-library/)

In this paper, we introduce a novel approach toward world-wide visual geo-localization inspired by human experts. Typically, humans do not evaluate the entirety of a scene and reason about its features, but rather identify important objects, markers, or landmarks and match them to a cache of knowledge about various known locations. In our approach, we emulate this by using a set of learned latent arrays called "hierarchy queries" that learn a different set of features for each geographic hierarchy. These queries also learn to extract features relative to specific scene types (e.g. forests, sports fields, industrial, etc.). We do this so that our queries can focus more specifically on features relevant to their assigned scene as well as the features related to their assigned hierarchy. This is done via a Transformer Decoder that cross-attends our hierarchy and scene queries with image features that are extracted from a backbone. We also implement a "hierarchy dependent decoder" that ensures our model learns the specifics of each individual hierarchy. To do this, our "hierarchy dependent decoder" separates the queries according to their assigned hierarchy and has independent weights for the Self-Attention and Feed-Forward stages that are specific to each hierarchy. We also note that the existing testing datasets contain implicit biases which make them unfit to truly measure a model's geo-location accuracy. For instance, the Im2GPS [5, 24] datasets contain many images of iconic landmarks, which only tests whether a model has seen and memorized the locations of those landmarks. Also, the YFCC [21, 24] testing sets are composed entirely of images posted online that contained geo-tags in their metadata.
This creates a bias towards locations that are commonly visited and posted online, like tourist sites. Previous work has found this introduces significant geographical and often racial biases into the datasets [8] which we demonstrate in Figure 4. To this end, we introduce a challenging new testing dataset called Google-World-Streets-15k, which is more evenly distributed across the Earth and consists of real-world images from Google Streetview. Figure 1: A visualization of all 7 hierarchies used. The \(t_{max}\) value is set to 25000, 10000, 5000, 2000, 1000, 750, and 500 respectively for hierarchies 1 to 7, while the \(t_{min}\) value is set at 50 for every hierarchy. This generates 684, 1744, 3298, 7202, 12893, 16150, and 21673 classes for hierarchies 1 to 7 respectively. Figure 2: Example images from 28 different countries in the Google-World-Streets-15k dataset The contributions of our paper include: (1) The first Transformer Decoder for worldwide image geo-localization. (2) The first model to produce multiple sets of features for an input image, and the first model capable of extracting scene-specific information without needing a separate network for every scene. (3) A new testing dataset that reduces landmark bias and reduces biases created by social media. (4) A significant improvement over previous SOTA methods on all datasets. (5) A qualitative analysis of the features our model learns for every hierarchy and scene query. ## 2 Related Works ### Retrieval Based Image Geo-Localization The retrieval method for geo-localization attempts to match a query image to target image(s) from a reference database (gallery). Most methods train by using separate models for the ground and aerial views, bringing the features of paired images together in a shared space. Many different approaches have been proposed to overcome the domain gap, with some methods implementing GANs [4] that map images from one view to the other [16], others use a polar transform that makes use of the prior geometric knowledge to alter aerial views to look like ground views [19, 20], and a few even combine the two techniques in an attempt to have the images appear even more similar [23]. Most methods assume that the ground and aerial images are perfectly aligned spatially. However, this is not always the case. In circumstances where orientation and spatial alignment aren't perfect, the issue can be accounted for ahead of time or even predicted [20]. VIGOR [28] creates a dataset where the spatial location of a query image could be located anywhere within the view of its matching aerial image. Zhu [27] strays from the previous methods by using a non-uniform crop that selects the most useful patches of aerial images and ignores others. ### Image Geo-Localization as Classification By segmenting the Earth's surface into distinct classes and assigning a GPS coordinate to each class, a model is allowed to predict a class directly instead of comparing features to a database. Treating geo-localization this way was first introduced by Weyand et al. [25]. In their paper, they introduce a technique to generate classes that utilizes Google's S2 library and a set of training GPS coordinates to partition the Earth into cells, which are treated as classes. Vo [24] was the first to introduce using multiple different partitions of varying granularity. In contrast, CPlaNet [18] develops a technique that uses combinatorial partitioning. 
This approach uses multiple different coarse partitions, encodes each of them as a graph, and then refines the graph by merging nodes. More details on class generation will be discussed in Section 3.1. Up until Individual Scene Networks (ISNs) [13], no information other than the image itself was used at training time. The insight behind ISNs was that different image contexts require different features to be learned in order to accurately localize the image. They make use of this by having three separate networks for indoor, natural, and urban images, respectively. This way each network can learn the important features for each scene and more accurately predict locations. [13] also introduced the use of hierarchical classes. While previous papers had utilized multiple geographic partitions, they observed that these partitions could be connected through a hierarchical structure. To make use of this, they proposed a new evaluation technique that combines the predictions of multiple partitions, similar to YOLO9000 [15], which helps refine the overall prediction. Kordopatis-Zilos [9] developed a method that combines classification and retrieval. Their network uses classification to get a predicted S2 cell, then retrieval within that cell to get a refined prediction. Most recently, TransLocator [14] was introduced, which learns from not only the RGB image but also the segmentation map produced by a trained segmentation network. Providing the segmentation map allows TransLocator to rely on the segmentation if there are any variations in the image, like weather or time of day, that would impact a normal RGB-based model. All of these methods fail to account for features that are specific to different geographic hierarchies and don't fully utilize scene-specific information. We solve these problems with our query-based learning approach.

## 3 Method

In our approach, we treat discrete locations as classes, obtained by dividing the planet into Schneider-2 cells at different levels of geographic granularity. The size of each cell is determined by the number of training images available in the given region, with the constraint that each cell has approximately the same number of samples. We exploit the hierarchical nature of geo-location by learning different sets of features for each geographic hierarchy and for each scene category from an input image. Finally, we classify a query image by selecting the set of visual features correlated with the most confident scene prediction. We use these sets of features to map the image to an S2 cell at each hierarchical level and combine the predictions at all levels into one refined prediction using the finest hierarchy.

### Class Generation

With global geo-localization comes the problem of separating the Earth into classes. A naive way to do this would be to simply tessellate the Earth into the rectangles created by latitude and longitude lines. However, this approach has a few issues; for one, the surface area of each rectangle varies with the distance from the poles, likely producing large class imbalances. Instead, we utilize Schneider 2 cells using Google's S2 Library. This process initially projects the Earth's sphere onto the 6 sides of a cube, thereby resulting in an initial 6 S2 cells. To create balanced classes, we split each cell with more than \(t_{max}\) images from the training set located inside of it; a sketch of the full recursive procedure is given below.
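A minimal sketch of this recursive splitting — including the minimum-count pruning described in the next paragraph — using the `s2sphere` Python port of Google's S2 library; the package, its call names, and the brute-force counting are our assumptions rather than the authors' released code:

```python
from typing import Iterable, List, Tuple
import s2sphere as s2   # assumed Python port of Google's S2 library

def generate_classes(train_coords: Iterable[Tuple[float, float]],
                     t_min: int = 50, t_max: int = 5000) -> List[s2.CellId]:
    """Recursively split S2 cells until every kept cell holds between
    t_min and t_max training images."""
    leaves = [s2.CellId.from_lat_lng(s2.LatLng.from_degrees(lat, lng))
              for lat, lng in train_coords]          # leaf cell per image
    def count(cell):                                 # O(images) per cell; a
        return sum(cell.contains(l) for l in leaves) # real impl. sorts ids
    # the 6 top-level face cells cover the whole sphere
    stack = [s2.CellId.from_face_pos_level(f, 0, 0) for f in range(6)]
    classes = []
    while stack:
        cell = stack.pop()
        c = count(cell)
        if c > t_max and cell.level() < s2.CellId.MAX_LEVEL:
            stack.extend(cell.children())            # split over-full cells
        elif c >= t_min:
            classes.append(cell)                     # keep as a class
        # cells below t_min are discarded entirely
    return classes
```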
We also ignore any cells that have fewer than \(t_{min}\) images, to ensure we don't have classes with an insignificant number of training instances. The cells are split recursively using this criterion until all cells fall between \(t_{min}\) and \(t_{max}\). This creates a set of balanced classes that cover the entire Earth. These classes and hierarchies are visualized in Figure 1, where we can see the increasing specificity of our hierarchies. We begin with 684 classes at our coarsest hierarchy and increase that to 21673 at our finest. During evaluation, we define the predicted location as the mean of the locations of all training images inside a predicted class.

### Model

One problem faced in geo-localization is that two images in the same geographic cell can share very few visual similarities. Two images from the same location could be taken at night or during the day, in sunny or rainy weather, or simply from the same location but one image faces North while the other faces South. Additionally, some information in a scene can be relevant to one geographic hierarchy (e.g. state) but not another (e.g. country). To that end, we propose a novel decoder-based architecture designed to learn unique sets of features for each of these possible settings. We begin by defining our geographic queries as \(GQ\in\mathbb{R}^{HS\times D}\) where \(H\) is the number of geographic hierarchies, \(S\) is the number of scene labels, and \(D\) is the dimension of the features. We define each individual geographic query as \(gq_{s}^{h}\), where \(h\) and \(s\) represent the index of the hierarchy and scene, respectively. The ground truth scene labels are provided by the _Places2_ dataset [26], which has 365 total scene labels. They also provide a hierarchical structure to their scenes, with coarser hierarchies containing 3 and 16 unique scene labels. A pre-trained scene classification model is used to get the initial scene label from the set of 365, and the coarser labels are extracted using the hierarchical structure. We find that the set of 16 scenes instead of 365 gives the best results for our model; we show an ablation on this in the supplementary material.

### GeoDecoder

We have two decoders, as shown in Figure 3. Below we describe each in detail.

Figure 3: Our proposed network. We randomly initialize a set of learned queries for each hierarchy and scene. An image is first encoded by a Transformer Encoder and then decoded by two decoders. The first decoder consists of \(N\) layers as a Hierarchy Independent Decoder, followed by \(E\) layers of our Hierarchy Dependent Decoder; this decoder only performs self-attention within each hierarchy, instead of across all hierarchies, and has separate Feed-Forward Networks for each hierarchy. To determine which scene to use for prediction, the scene with the highest average confidence (denoted by the \(0^{th}\) channel) is selected and queries are fed to their corresponding classifier to geo-localize at each hierarchy. We get a final prediction by multiplying the class probabilities of the coarser hierarchies into the finer ones so that a prediction using all hierarchical information can be made.

**Hierarchy Independent Decoder** The geographic queries are passed into our GeoDecoder, whose primary function is, for each hierarchical query, to extract geographical information relevant to its individual task from the image tokens
As previously stated, our decoder performs operations on a series of learned latent arrays called _Geographic Queries_ (scene and hierarchical) in a manner inspired by the Perceiver [7] and DETR [1]. We define \(X\) as the image tokens, \(GQ^{k}\) as the geographic queries at the \(k^{th}\) layer of the decoder, and \(GQ^{0}\) as the initial learned vectors. Each layer performs self-attention on the normalized geographic queries, followed by cross-attention between the output of self-attention and the image patch encodings, where cross-attention is defined as \(CA(Q,K)=softmax(\frac{QK^{T}}{\sqrt{d_{k}}})K\). where \(Q,K\) are Query and Key respectively. Finally, we normalize the output of the cross-attention operation and feed it into an MLP to produce the output of the decoder layer. Therefore, one decoder layer is defined as \[y^{SA} =MSA(LN(GQ^{k-1}))+GQ^{k-1} \tag{1}\] \[y^{CA} =CA(LN(y^{SA},LN(X))+y^{SA},\] (2) \[GQ^{k} =FFN(LN(y^{CA}))+y^{CA} \tag{3}\] **Hierarchy Dependent Decoder** We find that a traditional transformer decoder structure for the entire GeoDecoder results in a homogeneity of all hierarchical queries. Therefore, in the final layers of the decoder, we perform self-attention only in an _intra_ hierarchical manner rather than between all hierarchical queries. Additionally, we assign each hierarchy its own fully connected network at the end of each layer rather than allowing hierarchies to share one network. We define the set of geographic queries _specifically_ for hierarchy \(h\) at layer \(k\)\(GQ^{k}_{h}\). The feed-forward network for hierarchy \(h\) is referred to as \(FFN_{h}\) \[y^{SA} = MSA(LN(GQ^{k-1}_{h}))+GQ^{k-1}_{h}, \tag{4}\] \[y^{CA} = CA(LN(y^{SA}),LN(X))+y^{SA},\] (5) \[GQ^{k}_{h} = FFN_{h}(LN(y^{CA}))+y^{CA} \tag{6}\] After each level, each \(GQ^{k}_{h}\) is concatenated to reform \(GQ\). In the ablations Table 4, we show the results of these _hierarchy dependent layers_. ### Losses As shown in Figure 3, our network is trained with two losses. The first loss is scene prediction loss, \(L_{scene}\), which is a Cross-Entropy loss between the predicated scene label \(\hat{s_{i}}\) ground truth scene labels \(s_{i}\). Our second loss is a geo-location prediction loss, \(L_{geo}\), which is a combination of Cross-Entropy losses for each hierarchy. Given an image \(X\) we define the set of location labels as \(h_{1}\), \(h_{2}\),..., \(h_{7}\), where \(h_{i}\) denotes the ground-truth class distribution in hierarchy \(i\), and the respective prediction distribution as \(\hat{h_{i}}\), we define \(L_{scene(X)}=CE(s_{i},\hat{s_{i}})\) and \(L_{geo}(X)=\sum_{i=1}^{7}CE(h_{i},\hat{h_{i}})\) and \(L(X)=L_{geo}(X)+L_{scene}(X)\). ### Inference With the output of our GeoDecoder \(GQ^{out}\) we can geo-localize the image. As our system is designed to learn different latent embeddings for different visual scenes, we must first choose which features to proceed with. For \(gq^{h}_{s}\in GQ\) we assign the confidence that the image belongs to scene \(s\) to that vector's \(0th\) element. This minimizes the need for an additional individual scene network like in [13] while allowing specific weights within the decoder's linear layers to specialize in differentiating visual scenes. Once we have \(GQ^{out}\), the queries are separated and sent to the classifier that is assigned to their hierarchy. This gives us 7 different sets of class probabilities, one for each hierarchy. 
To condense this information into one class prediction, and to exploit the hierarchical nature of our classes, we multiply the probabilities of the classes in the coarser hierarchies into those of their sub-classes in the finer hierarchies. If we define a class as \(C^{H_{i}}_{j}\), where \(i\) denotes the hierarchy and \(j\) denotes the class label within that hierarchy, we can define the probability of predicting a class \(C^{H_{7}}_{a}\) for image \(X\) as \(p(X|C^{H_{7}}_{a})=p(X|C^{H_{7}}_{a})*p(X|C^{H_{6}}_{b})*...*p(X|C^{H_{1}}_{g})\), given that \(C^{H_{7}}_{a}\) is a subclass of \(C^{H_{6}}_{b}\), \(C^{H_{6}}_{b}\) is a subclass of \(C^{H_{5}}_{c}\), and so on. We perform this for every class in our finest hierarchy so we can use the finest geographic granularity while also using the information learned for all of the hierarchies.

## 4 Google-World-Streets-15K Dataset

We propose a new testing dataset collected using Google Streetview, called Google-World-Streets-15k. As previous testing datasets contain biases towards commonly visited locations or landmarks, the goal of our dataset is to eliminate those biases and have a more even distribution across the Earth. In total, our dataset contains 14,955 images covering 193 countries. In order to collect a fair distribution of images, we utilize a database of 43 thousand cities, as well as the surface area of every country. We first sample a country with a probability proportional to its surface area compared to the Earth's total surface area. Then we select a random city within that country and a GPS coordinate within a 5 Km radius of the center of the city to sample from the Google Streetview API. This ensures that the dataset is evenly distributed according to landmass and not biased towards the countries and locations that people post online. Google Streetview also blurs out any faces found in the photos, so a model that is using people's faces to predict a location will have to rely on other features in the image to get a prediction. In Figure 4 we show a heatmap of Google-World-Streets-15k compared to heatmaps of YFCC26k and Im2GPS3k. We note that a majority of YFCC26k and Im2GPS3k images are located in North America and Europe, with very little representation in the other 4 populated continents. While Google-World-Streets-15k's densest areas are still the Northeastern US and Europe, we provide a much more even sampling of the Earth with images on all populated continents. We also note that the empty locations on our dataset's heatmap are mostly deserts, tundras, and mountain ranges.

## 5 Experiments

### Training Data

Our network is trained on the MediaEval Placing Tasks 2016 (MP-16) dataset [10]. This dataset consists of 4.72 million randomly chosen geo-tagged images from the Yahoo Flickr Creative Commons 100 Million (YFCC100M) [22] dataset. Notably, this subset is fully uncurated and contains many examples with little if any geographic information. These photos include pets, food, and random household objects. We ensure that no photographer's images appear in both the testing and training sets, to guarantee that our model learns from visual geographic signals rather than the styles of individual photographers.

### Testing Data

We test our method on five datasets: Im2GPS [5], Im2GPS3k [24], the YFCC datasets YFCC26k [21] and YFCC4k [24], and our proposed new dataset Google-World-Streets-15k described in the previous section. Im2GPS [5] and Im2GPS3k [24] contain 237 and 2997 images, respectively.
While small in size, both datasets are manually selected and contain popular sights and landmarks from around the world. We note that many of the landmarks that appear in Im2GPS appear multiple times in the MP-16 dataset, which may cause a bias towards those locations; this is accounted for in our proposed testing dataset. The YFCC datasets, YFCC26k [21] and YFCC4k [24], contain 25,600 and 4,536 images, respectively. In contrast to Im2GPS, and like our training set MP-16, these images are randomly selected and often contain very little geo-localizable information, and therefore pose a more difficult challenge than the Im2GPS datasets.

### Evaluation

During evaluation, we utilize the finest hierarchy class to get an image's predicted location. We report our accuracy at the street (1 Km), city (25 Km), region (200 Km), country (750 Km), and continent (2500 Km) scales. However, training on multiple hierarchies allows us to employ a parent-child relationship and multiply the probabilities across all hierarchies [13]. This allows the finest set of probabilities to be enhanced to include all of the learned hierarchical information. We also use TenCrop during evaluation, which is a cropping technique that returns the four corner crops, the center crop, and their flipped versions. All crops are passed through the model and their outputs are averaged to get one set of probabilities per hierarchy for each image.

## 6 Results, Discussions and Analysis

In this section, we compare the performance of our method with different baselines and conduct a detailed ablation study to demonstrate the importance of different components in our system.

Figure 4: A comparison of YFCC26k, Im2GPS3k, and our Google World Streets 15k dataset. We see that popular datasets for testing geo-localization systems are heavily concentrated in heavily populated, metropolitan areas, particularly in America and western Europe. By contrast, our dataset more evenly blankets the earth, better representing all countries on earth.

Furthermore, we visualize the interpretability of our method by showing the attention map between each query and the image patches from our encoder. Our results are presented in Table 1. On Im2GPS, our method achieves state of the art accuracy across all distances, improving by as much as 1.7% over the baseline. For Im2GPS3k, our method manages to beat the previous techniques on a majority of distances, only falling short on the 200 and 2500 kilometer accuracies. More notably, our system vastly outperforms previous geo-localization works on the far more challenging YFCC4k and YFCC26k datasets. On YFCC4k, ours achieves a score of 10.3%, an improvement of 2.2% over TransLocator. Similarly, on YFCC26k, we achieve a 1 Km accuracy of 10.1%, improving over TransLocator by 2.9%. Additionally, we compared our method to [14] on our Google-World-Streets-15k (GWS) validation dataset. As expected, the more realistic and fair nature of this dataset, in contrast to the training set MP-16, resulted in lower performance for all systems. However, we still outperform TransLocator by 0.2% on 1 Km accuracy and 0.4% on 25 Km accuracy, suggesting a stronger ability to focus on defining features of a scene rather than singular landmarks.

### Qualitative Results

We provide a number of qualitative results, outlined in Figure 5. For our attention maps, we use the attention between the image backbone features and the fine-level query (corresponding to the correct scene).
First, these results show that each hierarchy query attends to different parts of the image, as per our original hypothesis. Second, we can see that the attention for the _correct_ scene query is far more precise than for _incorrect_ scene queries, demonstrating how our system learns different features specific to each scene. We observe no further improvement beyond \(n=10\), suggesting a point of diminishing returns. Additionally, we experiment with the hierarchy-dependent layers at the end of the GeoDecoder (Table 4). Recall that these layers restrict attention operations to queries within the same hierarchy and utilize specialized feed-forward layers. For these experiments, we vary the number of hierarchy-dependent layers while assigning the remaining layers to the hierarchy-independent decoder, such that the total number of layers is always 8. **Scene Prediction** One contribution of our method is our approach toward distinguishing between different visual scenes of the same location. To show the effectiveness of our separated scene queries, we ablate on scene prediction by evaluating performance with no scene prediction, as well as using scene prediction as a secondary task as in [14], and then compare both to our scene prediction method (see Table 6). Our results show that our scene query selection method outperforms treating scenes as a secondary task by 0.6% and 0.4% on Im2GPS3k and YFCC26k, respectively. **Additional Ablations** We perform additional ablations on the number of scenes as well as the number of hierarchies in the supplementary material. ## 7 Conclusion In this work, we reformulated visual geo-localization via the learning of multiple sets of geographic features. Given an RGB image of any location on planet Earth, our system first learns a set of patch features, then uses the GeoDecoder to extract hierarchy-specific features for each possible scene, choosing the most confident scene before prediction. Our proposed method improves over other geo-localization methods on multiple benchmarks, especially on uncurated datasets most similar to real-world use cases. Figure 5: A qualitative analysis of different queries. Here we show the attention maps between every query our model produces when probed with the original Im2GPS3k image seen in the top left. Each row shows a hierarchy query for all scenes, while each column shows each scene query for all hierarchies. This specific query image is of an outdoor sports field. We observe that the most relevant scene labels were predicted as most confident and that their attention maps are more localized to specific features that would define a sports field. Looking at the less confident scenes, we see that the attention maps look at more general features or at random areas of the image. This is because those queries are trained to find features for their specific scenes. For example, the shopping and dining query will be looking for things like tables, chairs, or storefronts that aren't present in this query image, which is why we see the attention maps looking more generally at the image rather than at specific features. 
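The hierarchy aggregation referenced in the Evaluation section can be written compactly. The following Python sketch is illustrative only: the function name, the list layout of the per-hierarchy probabilities, and the parent-index arrays are our assumptions, not part of the released method.

```python
import numpy as np

def combine_hierarchies(probs, parents):
    """Combine per-hierarchy class probabilities into finest-level scores.

    probs:   list of arrays ordered from coarsest (H_1) to finest (H_7);
             probs[i][j] = p(X | C^{H_{i+1}}_j).
    parents: list of integer arrays; parents[i][j] is the index of the
             H_{i+1} parent of class j in hierarchy H_{i+2}.
    """
    scores = probs[-1].copy()            # start from the finest hierarchy
    idx = np.arange(len(scores))         # finest-level class indices
    for level in range(len(probs) - 2, -1, -1):
        idx = parents[level][idx]        # walk each class up to its parent
        scores *= probs[level][idx]      # multiply in the coarser term
    return scores
```

The argmax of the returned scores gives the predicted class in the finest hierarchy, from which the final GPS estimate is reported.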
\begin{table} \begin{tabular}{l|l|c c c c c} \hline \hline \multirow{3}{*}{**Dataset**} & \multirow{3}{*}{**Model**} & \multicolumn{5}{c}{**Distance (\% @ km)**} \\ & & **Street** & **City** & **Region** & **Country** & **Continent** \\ & & 1 km & 25 km & 200 km & 750 km & 2500 km \\ \hline \hline \multirow{4}{*}{**YFCC26k**} & ViT & 6.9 & 17.3 & 27.3 & 40.5 & 59.5 \\ & Swin & 9.6 & 22.3 & 33.6 & 48.0 & 67.5 \\ & Ours (ViT) & 8.7 & 21.4 & 31.6 & 47.8 & 66.2 \\ & Ours (Swin) & **10.1** & **23.9** & **34.1** & **49.6** & **69.0** \\ \hline \hline \end{tabular} \end{table} Table 5: **Ablation Study on Encoder Type.** We show that our method performs better than simple image encoders.
2306.16093
Retrospective: Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors
Our ISCA 2014 paper provided the first scientific and detailed characterization, analysis, and real-system demonstration of what is now popularly known as the RowHammer phenomenon (or vulnerability) in modern commodity DRAM chips, which are used as main memory in almost all modern computing systems. It experimentally demonstrated that more than 80% of all DRAM modules we tested from the three major DRAM vendors were vulnerable to the RowHammer read disturbance phenomenon: one can predictably induce bitflips (i.e., data corruption) in real DRAM modules by repeatedly accessing a DRAM row and thus causing electrical disturbance to physically nearby rows. We showed that a simple unprivileged user-level program induced RowHammer bitflips in multiple real systems and suggested that a security attack can be built using this proof-of-concept to hijack control of the system or cause other harm. To solve the RowHammer problem, our paper examined seven different approaches (including a novel probabilistic approach that has very low cost), some of which influenced or were adopted in different industrial products. Many later works from various research communities examined RowHammer, building real security attacks, proposing new defenses, further analyzing the problem at various (e.g., device/circuit, architecture, and system) levels, and exploiting RowHammer for various purposes (e.g., to reverse-engineer DRAM chips). Industry has worked to mitigate the problem, changing both memory controllers and DRAM standards/chips. Two major DRAM vendors finally wrote papers on the topic in 2023, describing their current approaches to mitigate RowHammer. Research & development on RowHammer in both academia & industry continues to be very active and fascinating. This short retrospective provides a brief analysis of our ISCA 2014 paper and its impact.
Onur Mutlu
2023-06-28T10:49:09Z
http://arxiv.org/abs/2306.16093v1
_Retrospective:_ Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors ###### Abstract Our ISCA 2014 paper [1] provided the first scientific and detailed characterization, analysis, and real-system demonstration of what is now popularly known as the RowHammer phenomenon (or vulnerability) in modern commodity DRAM chips, which are used as main memory in almost all modern computing systems. It experimentally demonstrated that more than 80% of all DRAM modules we tested from the three major DRAM vendors were vulnerable to the RowHammer read disturbance phenomenon: one can predictably induce bitflips (i.e., data corruption) in real DRAM modules by repeatedly accessing a DRAM row and thus causing electrical disturbance to physically nearby rows. We showed that a simple unprivileged user-level program induced RowHammer bitflips in multiple real systems and suggested that a security attack can be built using this proof-of-concept to hijack control of the system or cause other harm. To solve the RowHammer problem, our paper examined seven different approaches (including a novel probabilistic approach that has very low cost), some of which influenced or were adopted in different industrial products. Many later works from various research communities examined RowHammer, building real security attacks, proposing new defenses, further analyzing the problem at various (e.g., device/circuit, architecture, and system) levels, and exploiting RowHammer for various purposes (e.g., to reverse-engineer DRAM chips). Industry has worked to mitigate the problem, changing both memory controllers and DRAM standards/chips. Two major DRAM vendors finally wrote papers on the topic in 2023, describing their current approaches to mitigate RowHammer. Research & development on RowHammer in both academia & industry continues to be very active and fascinating. This short retrospective provides a brief analysis of our ISCA 2014 paper and its impact. We describe the circumstances that led to our paper, mention its influence on later works and products, describe the mindset change we believe it has helped enable in hardware security, and discuss our predictions for the future. ## I Background and Circumstances Our stumbling on the RowHammer problem and creation of its first scientific analysis happened as a result of a confluence of multiple factors. First, my group had been working on DRAM technology scaling issues since late 2010. We were very interested in failure mechanisms that appear or worsen due to aggressive technology scaling. To study such issues (e.g., data retention errors [2]), we built an FPGA-based DRAM testing infrastructure [2] between 2011-2012, which we later open-sourced as SoftMC [3, 4] and DRAM Bender [5, 6]. Second, around the same timeframe, we were investigating similar technology scaling issues in flash memory using real NAND flash chips [7, 8]. We knew read disturbance errors were significant in NAND flash memory [7, 8] and were very interested in how prevalent they were in DRAM. Third, we were collaborating with Intel (e.g., [2]) to understand and solve DRAM technology scaling problems and build our DRAM infrastructure. Three of my students and I spent the summer of 2012 at Intel to work closely with our collaborators (two are co-authors): during this time, we finalized the calibration and stabilization of our infrastructure and had significant technical discussions and experimentation on DRAM scaling problems. 
Although there was awareness of the RowHammer problem in industry in 2012 (see Footnote 1 in [1]), there was no comprehensive experimental analysis and detailed real-system demonstration of it. We believed it was critical to provide a rigorous scientific analysis using a wide variety of DRAM chips and to scientifically establish the major characteristics and prevalence of RowHammer. Hence, in the summer of 2012, we set out to use our DRAM testing infrastructure to analyze RowHammer. Our initial results showed how widespread the read disturbance problem was across the (at the time) recent DRAM chips we tested, so we studied the problem comprehensively and developed many solutions to it. The resulting paper was submitted to MICRO in May 2013 but was rejected. We strengthened the results, especially of the mitigation mechanisms and the number of tested chips, and made the analysis more comprehensive before it was accepted to ISCA 2014 (2 of the 6 reviewers still rejected it for interesting reasons). ## II Major Contribution and Influence The major contribution of our paper is the exposure and detailed analysis of a fundamental hardware failure mechanism that breaks memory isolation in real systems and thus has huge implications for system reliability, security, and safety. Our paper is a comprehensive study of a major DRAM technology scaling problem, RowHammer, including its first scientific analysis, experimental characterization, real-system demonstration, and solutions with their evaluation. To our knowledge, RowHammer is the first example of a hardware failure mechanism that creates a significant and widespread system security vulnerability [12, 13, 14, 15], as our ISCA 2014 paper suggested. Our work has had a large influence on both industry & academia. Individual follow-on works are too many to list here; we refer the reader to longer invited retrospectives we wrote [12, 13, 14]. We give major examples of influence, focusing on RowHammer's effect on the collective mindset of security research and major industry milestones related to RowHammer. _RowHammer Attacks & Mindset Shift in Hardware Security_. Our demonstration that one can easily and predictably induce bitflips in commodity DRAM chips using a real user-level program enabled a major mindset shift in hardware security. It showed that general-purpose hardware is fallible in a very widespread manner and its problems are exploitable. Tens of works (see [13, 14]) built directly on our work to exploit RowHammer bitflips to develop many attacks that compromise system integrity and confidentiality, starting from the first RowHammer exploit by Google Project Zero in 2015 [16, 17] to recent works in 2022-2023 (e.g., [18, 19]). These attacks showed increasingly sophisticated ways by which an unprivileged attacker can exploit RowHammer bitflips to circumvent memory protection and gain complete control of a system (e.g., [16, 20, 21, 22, 23, 24, 25, 26, 27, 28]), gain access to confidential data (e.g., [18, 9, 29]), or maliciously destroy the safety and accuracy of a system, e.g., an otherwise accurate machine learning inference engine (e.g., [30, 31]). The mindset enabled by RowHammer bitflips caused a renewed interest in hardware security research, enticing many researchers to deeply understand hardware's inner workings and find new vulnerabilities. Thus, hardware security issues have become mainstream discussion in top security & architecture venues, some having sessions entitled RowHammer. _RowHammer Defenses_. 
Tens of works proposed mitigations against RowHammer, some of which were inspired by the solutions we discussed in our ISCA 2014 paper. To date, the search for more efficient and low-cost RowHammer solutions continues. We refer the reader to our prior overview papers [13, 14, 32] and more recent works in 2023 (e.g., [33, 34, 35]). _RowHammer Analyses_. Our paper initiated works at both the architectural and circuit/device levels to better understand RowHammer and reverse-engineer DRAM chips, and to develop better models, defenses, and attacks (see [13, 14]). Our ISCA'20 work [36] revisited RowHammer, comprehensively analyzing 1580 DRAM chips of three different types from at least two generations, showing that RowHammer has gotten much worse with technology scaling & existing solutions are not effective at future vulnerability levels. _Industry Reaction: Attacks, Analyses, and Mitigations_. Folks developing industrial memory testing programs immediately included RowHammer tests, e.g., in memtest86 [37], citing our work. Industry needed to immediately protect RowHammer-vulnerable chips already in the field, so almost all system vendors increased refresh rates, a solution we examined in our paper and deemed costly for performance and energy, yet it was the only practical lever that could be used in the field. Apple publicly acknowledged our work in their security release [38] that announced higher refresh rates to mitigate RowHammer. Intel designed memory controllers that performed probabilistic activations (i.e., pTRR [39, 40]), similar to our PARA solution [1]. DRAM vendors modified the DRAM standard to introduce TRR (target row refresh) mechanisms [39] and claimed their new DDR4 chips to be RowHammer-free [39, 41]. This bold claim was later refuted by our TRRespass work [39] in 2020, which introduced the many-sided RowHammer attack to circumvent internal protection mechanisms added to the DRAM chips. Our later work, Uncovering TRR [41], showed that one can almost completely reverse-engineer and thus easily bypass the RowHammer mitigations employed in all tested DRAM chips, i.e., RowHammer solutions in DRAM chips are broken. The impact of our two major works in 2020 [36, 39] caused the industry to reorganize the RowHammer task group at JEDEC, which produced two white papers on mitigating RowHammer [42, 43]. Nine years after our paper, in 2023, two major DRAM vendors, SK Hynix and Samsung, finally wrote papers [44, 45] on the RowHammer problem, describing their solutions. Several of these industry solutions build on the probabilistic & access-counter-based solution approaches our ISCA 2014 paper introduced. Major Internet and cloud systems companies also took a deep interest in RowHammer as it can greatly impact their system security, dependability, and availability. Multiple works from Google, e.g., by Google Project Zero in 2015 [16, 17] and Half-Double in 2021-2022 [46], directly built on our paper to demonstrate attacks in real systems. Researchers from Microsoft have developed deeper analyses of RowHammer [47], along with new RowHammer attacks [48] and defenses (e.g., [49, 50, 51, 48]). ## III Summary and Future Outlook Since 2012-2014, the RowHammer vulnerability has become much worse due to technology scaling: without mitigation, one can now induce RowHammer bitflips with orders of magnitude fewer activations (e.g., \(\sim\)10K) and cause much higher error rates in cutting-edge DRAM chips [36, 41]. 
Sophisticated attacks are continuously developed to circumvent the mitigations employed in real DRAM chips. Fortunately, we have also come a long way in further understanding and better mitigating the RowHammer vulnerability. The industry is now (hopefully) fully aware of the importance of the problem and of avoiding bitflips. Unfortunately, an efficient and completely-secure solution has not been found yet. The solution space poses a rich area of tradeoffs in terms of security, performance, power/energy, and cost/complexity. All solutions forego some desirable properties in favor of others. As such, a critical direction for the future is to find solutions superior to what we have today. We believe system-DRAM cooperation [52, 14] will be important to enabling complete solutions. We also believe it is critical to deeply understand the properties of RowHammer under many different conditions so that we can develop effective solutions that work under all circumstances. Unfortunately, we do not yet fully understand many facets of RowHammer (see [14, 15, 53, 54, 55]). DRAM technology scaling will continue to create problems that will exacerbate the bitflips and the resulting robustness (i.e., safety/security/reliability) problems. Our ISCA 2023 paper on RowPress [55] provides the first scientific and detailed characterization, analysis, and real-system demonstration of yet another read disturbance mechanism in DRAM. What other fascinating problems will we see, and can we completely solve them efficiently? Will we ever be free of bitflips at the system and application levels?
2307.10913
Practical Active Noise Control: Restriction of Maximum Output Power
This paper presents some recent algorithms developed by the authors for real-time adaptive active noise control (AANC) systems. These algorithms address some of the common challenges faced by AANC systems, such as speaker saturation, system divergence, and disturbance rejection. Speaker saturation can introduce nonlinearity into the adaptive system and degrade the noise reduction performance. System divergence can occur when the secondary speaker units are over-amplified or when there is a disturbance other than the noise to be controlled. Disturbance rejection is important to prevent the adaptive system from adapting to unwanted signals. The paper provides guidelines for implementing and operating real-time AANC systems based on these algorithms.
Woon-Seng Gan, Dongyuan Shi, Xiaoyi Shen
2023-07-20T14:35:53Z
http://arxiv.org/abs/2307.10913v1
# Practical Active Noise Control: ###### Abstract This paper presents some recent algorithms developed by the authors for real-time adaptive active noise control (AANC) systems. These algorithms address some of the common challenges faced by AANC systems, such as speaker saturation, system divergence, and disturbance rejection. Speaker saturation can introduce nonlinearity into the adaptive system and degrade the noise reduction performance. System divergence can occur when the secondary speaker units are over-amplified or when there is a disturbance other than the noise to be controlled. Disturbance rejection is important to prevent the adaptive system from adapting to unwanted signals. The paper provides guidelines for implementing and operating real-time AANC systems based on these algorithms. **Keywords: adaptive active noise control algorithm, output saturation, 2GD-FxLMS, leaky FxLMS** ## I Introduction An active noise control (ANC) system produces anti-noise to cancel out unwanted noise [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. However, adaptive ANC (AANC) in real time [9, 11, 12, 13, 14] may fail if the audio amplifier is overdriven to saturation, causing the adaptive algorithm to diverge. Saturation distortion affects both the amplitude and phase of the secondary path in ANC [15, 16]. Some nonlinear algorithms based on neural networks or Volterra filters [17] can handle saturation distortion, but they are costly and complex to implement, and the vanishing gradient issue is a problem in real-world scenarios. Moreover, they do not solve the problem of insufficient power when the control signal exceeds the amplifier threshold. Thus, these nonlinear solutions are not practical for real applications and may only be used for partially overdriven actuators. Furthermore, a nonlinear adaptive algorithm without constraints is not a desirable solution. A better approach is to limit the amplifier's output power and keep it within a certain power budget. Some of the common algorithms for this approach are (i) clipping or rescaling algorithms [18], which either truncate the output signal above the threshold or adjust the weights of the control filter; (ii) the leaky-type filtered-reference Least Mean Square (FxLMS) algorithm [19], which adds a leak term or penalty factor to stabilize the algorithm, but requires trial and error to choose the optimal factor; and (iii) the two-gradient FxLMS (2GD-FxLMS) algorithm [20], which imposes a specific output constraint with no extra computation compared to the conventional FxLMS algorithm. Recent research has revealed new insights into constraint-FxLMS algorithms. This paper highlights the key differences and evaluates their merits and drawbacks. A typical block diagram of the ANC system with the saturation effect is shown in Fig. 1 and will be used throughout the discussion of the different constraint-FxLMS algorithms. Before we move into the definition, explanation, and implementation of the various types of constraint-FxLMS algorithms, we have defined a table of variables and their descriptions for ease of reading. ## II Clipping FxLMS The clipping algorithm mainly truncates the parts of the output signal that exceed a certain voltage level. Fig. 1: Block diagram of the ANC system with the saturation effect.
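To make the clipping constraint concrete, here is a minimal single-channel Python sketch of an FxLMS update with a saturating (clipped) output; variable names, the simplified secondary-path handling, and all parameter values are illustrative assumptions rather than the paper's implementation. The commented update line also shows the leaky variant: setting the penalty factor gamma above zero adds the leak term discussed above.

```python
import numpy as np

def clipping_fxlms(x, d, s_hat, L=64, mu=1e-3, v_max=1.0, gamma=0.0):
    """Single-channel FxLMS with a clipped control output.

    x: reference noise, d: disturbance at the error microphone,
    s_hat: estimated secondary-path impulse response, v_max: output limit.
    """
    w = np.zeros(L)                          # adaptive control filter
    x_buf = np.zeros(L)                      # reference tap-delay line
    fx = np.convolve(x, s_hat)[:len(x)]      # filtered reference x'(n)
    fx_buf = np.zeros(L)
    y_buf = np.zeros(len(s_hat))             # recent control outputs
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx[n]
        y = np.clip(w @ x_buf, -v_max, v_max)    # truncate above threshold
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e[n] = d[n] - s_hat @ y_buf              # residual at the error mic
        # gamma = 0: standard FxLMS; gamma > 0: leaky FxLMS penalty term
        w = (1.0 - mu * gamma) * w + mu * e[n] * fx_buf
    return w, e

# illustrative run on a synthetic tonal noise
t = np.arange(8000) / 8000.0
x = np.sin(2 * np.pi * 200 * t)
d = 0.8 * np.sin(2 * np.pi * 200 * t + 0.3)      # correlated disturbance
w, e = clipping_fxlms(x, d, s_hat=np.array([0.9, 0.1]))
```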
2304.09912
Exploring a CNN Model for Earthquake Magnitude Estimation using HR-GNSS data
High-rate Global Navigation Satellite System (HR-GNSS) data can be highly useful for earthquake analysis as it provides continuous high-rate measurements of ground motion. This data can be used to estimate the magnitude, to assess the potential of an earthquake for generating tsunamis, and to analyze diverse parameters related to the seismic source. Particularly, in this work, we present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series. The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios. We explored the potential of the model for global application and compared its performance using both synthetic and real data from different seismogenic regions. The performance of our model at this stage was satisfactory in estimating earthquake magnitude from synthetic data. Comparable results were observed in tests using synthetic data from a different tectonic region than the training data. Furthermore, the model was tested using real data from different regions and magnitudes, resulting in good accuracy, provided that the data from a particular group of stations had similar epicentral distance constraints to those used during the model training. The robustness of the DL model can be improved to work independently of the window size of the time series and the number of stations, enabling faster estimation by the model using only near-field data. Overall, this study provides insights for the development of future DL approaches for earthquake magnitude estimation with HR-GNSS data, emphasizing the importance of proper handling and careful data selection for further model improvements.
Claudia Quinteros Cartaya, Jonas Koehler, Wei Li, Johannes Faber, Nishtha Srivastava
2023-04-19T18:22:46Z
http://arxiv.org/abs/2304.09912v1
# Exploring a CNN Model for Earthquake Magnitude Estimation using HR-GNSS data ###### Abstract High-rate Global Navigation Satellite System (HR-GNSS) data can be highly useful for earthquake analysis as it provides continuous high-rate measurements of ground motion. This data can be used to estimate the magnitude, to assess the potential of an earthquake for generating tsunamis, and to analyze diverse parameters related to the seismic source. Particularly, in this work, we present the first results of a deep learning model based on a convolutional neural network for earthquake magnitude estimation, using HR-GNSS displacement time series. The influence of different dataset configurations, such as station numbers, epicentral distances, signal duration, and earthquake size, was analyzed to determine how the model can be adapted to various scenarios. We explored the potential of the model for global application and compared its performance using both synthetic and real data from different seismogenic regions. The performance of our model at this stage was satisfactory in estimating earthquake magnitude from synthetic data, with \(0.07\leq\mathrm{RMS}\leq 0.11\). Comparable results were observed in tests using synthetic data from a different tectonic region than the training data, with \(\mathrm{RMS}\leq 0.15\). Furthermore, the model was tested using real data from different regions and magnitudes, resulting in good accuracy, in the best cases with \(0.09\leq\mathrm{RMS}\leq 0.33\), provided that the data from a particular group of stations had similar epicentral distance constraints to those used during the model training. The robustness of the DL model can be improved to work independently of the window size of the time series and the number of stations, enabling faster estimation by the model using only near-field data. Overall, this study provides insights for the development of future DL approaches for earthquake magnitude estimation with HR-GNSS data, emphasizing the importance of proper handling and careful data selection for further model improvements. **Keywords:** Earthquake magnitude, geodetic data, deep learning. ## 1 Introduction The high-rate Global Navigation Satellite System (HR-GNSS) can provide high-frequency and high-precision position measurements that facilitate the detection of very small ground displacements caused by large earthquakes. Unlike inertial sensors, HR-GNSS instruments can record the signal of large earthquakes near the source without saturation, providing valuable information on both dynamic (far-field) and static (near-field) displacements (Bock et al., 2000; Ge et al., 2000; Kouba, 2003; Larson, 2009). Furthermore, earthquake magnitude estimation from GNSS waveforms has been made possible through empirical relationships between the peak ground displacement and the seismic moment (Crowell et al., 2013; Melgar et al., 2015; Goldberg et al., 2021). In the last decades, researchers have explored incorporating GNSS data to improve the accuracy of earthquake magnitude estimation compared to using seismic data alone from seismometers and accelerometers (Bock et al., 2011; Wang et al., 2013). As a result, the number of HR-GNSS networks for earthquake monitoring, continuously recording data for near-real-time analysis, has increased. The 2004 Sumatra-Andaman earthquake with a magnitude of Mw 9.1 was a significant event that motivated the implementation and improvement of early warning systems in potential seismogenic regions using HR-GNSS sensors (Blewitt et al., 2006; Satake, 2014). 
Subsequent great earthquakes such as the Mw 8.8 Maule (2010), Mw 9.0 Tohoku (2011), and Mw 8.4 Illapel (2015) events, among others, have demonstrated the importance of fast and reliable assessment of seismic sources, leading to the development of diverse algorithms for HR-GNSS data analysis that enable proper early warning of earthquakes and tsunamis (e.g., Crowell et al., 2009; 2016; 2018; Allen & Ziv, 2011; Fang et al., 2014; Grapenthin et al., 2014; Minson et al., 2014; Kawamoto et al., 2016; Ruhl et al., 2017; 2019; Psimoulis et al., 2018). Moreover, seismologists aim to develop complementary tools that outperform traditional analysis methods through deep learning (DL) approaches, which have proven to have great capacity in big-data processing and feature extraction for fast and robust results. DL methods have been widely introduced to deal with various seismological tasks, such as earthquake detection, phase picking, seismic source assessment, and denoising of seismic signals (e.g., Chakraborty et al., 2022; 2022b; Jiao & Alavi, 2020; Kuang et al., 2021; Li, 2022a; Li, 2022b; Mousavi & Beroza, 2022; Perol et al., 2018; van den Ende et al., 2020). However, training DL algorithms with HR-GNSS data for seismic analysis is one of the most recent challenges still in development. For instance, Lin et al. (2021) showed a first demonstration of seismic source pattern analysis and magnitude estimation through a DL algorithm and HR-GNSS data, which focused on the seismic activity in the Chile subduction zone by using peak ground displacement time series. On the other hand, Dittmann et al. (2022) introduced a DL algorithm for earthquake detection through velocity time series obtained from HR-GNSS by the time-differenced carrier phase. In this work, we present a preliminary DL model based on a convolutional neural network for magnitude estimation from HR-GNSS data. Unlike previous algorithms, our model is trained solely on displacement time series, and we evaluate the possibility of extending its application to a global scale. We tested the performance of our model using both synthetic and real data from different seismogenic regions. We also analyzed the influence of different dataset configurations, such as epicentral distances, signal duration, and earthquake size, to determine how the model can be adapted to different scenarios. ## 2 Architecture We propose a deep learning model using a sequential Convolutional Neural Network (CNN) for a regression problem (Le Cun, 1989; Schmidhuber, 2015; Geron, 2019; Goodfellow et al., 2016). The CNN architecture consists of six 2D-convolutional layers, three max-pooling layers (Scherer et al., 2010; Zhou & Chellappa, 1988), and three fully-connected or dense layers (Le Cun et al., 1998). The input to the DL model comprises displacement time series for each earthquake with a 1 Hz sampling rate. These time series are stored in a tensor with dimensions \(N_{s}\times N_{t}\times 3\), where \(N_{s}\) represents the number of stations, \(N_{t}\) represents the number of samples in the time series, and the three channels correspond to the U, N, and E components, referring to the up, north, and east directions of the sensor at each GNSS station (as shown in Figure 1). The architecture of our model is summarized in Figure 2. For each convolutional layer, we used different numbers of filters with kernel size (1, 3) and stride (1, 1). Zero-padding was used only in the first and the last convolutional layers. A pool size of (1, 2) was used in each max-pooling layer. 
Thus, we downsample the data while keeping the extraction of features in the time series separated by station up to the dense layers. We chose the rectified linear unit (ReLU) as the activation function for the output of every convolutional and dense layer (Nair & Hinton, 2010). The last tensor that results from the convolutional layers is transformed through a flattening layer into a one-dimensional vector (Krizhevsky et al., 2017) to serve as the input to the dense layers. Figure 1: The input data for the HR-GNSS displacement time series are stored in a tensor whose shape depends on the number of earthquakes (\(N_{E}\)), the number of stations (\(N_{s}\)), the number of samples in the time series (\(N_{t}\)), and 3 channels (U: up, N: north, and E: east directions). The amplitudes in the time series represent displacements in meters, and the sampling rate is 1 Hz, with every sample representing one second in time. Then, the three dense layers consist of 128, 32, and 1 neuron, respectively. The weights of the kernels for the first two dense layers are initialized using a normal distribution and constrained by max-norm regularization with a maximum norm value of 3 (Geron, 2019). Since we do not adopt any normalization for the values of the labels in the training (magnitudes), we obtain a target variable in the output layer whose value is equivalent to the earthquake moment magnitude Mw (Hanks & Kanamori, 1979). ## 3 Data In optimal cases, HR-GNSS stations located within about 5 km of the epicenter can detect displacements caused by moderate earthquakes with a magnitude of around Mw 5 (e.g., Mendoza et al., 2013). However, to increase the likelihood of having sufficient HR-GNSS recordings to observe earthquake signals, we focused our analysis on Mw \(>\) 6 earthquakes. Nonetheless, the number of large earthquakes available for analysis using HR-GNSS instruments may not be enough to form a representative dataset for training a deep learning (DL) model. Therefore, we utilized synthetic HR-GNSS signals from a database previously generated by Lin et al. (2020), containing a large volume of data from 36,800 earthquakes ranging in magnitude from 6.6 to 9.6. These earthquakes are associated with rupture scenarios specifically modeled for the Chile subduction zone (Figure 3). We utilized the synthetic HR-GNSS data from the Chile region for training, validation, and testing of our DL model. Furthermore, we evaluated the performance of our model by testing it with synthetic signals of an Mw 8.7 Cascadia earthquake (Melgar et al., 2016) and real data (Melgar & Ruhl, 2018) from six large earthquakes from diverse regions (Figure 4). Figure 2: Sequential Convolutional Neural Network architecture proposed in this work for earthquake magnitude estimation using displacement time series in three components (3 channels) from a specific number of HR-GNSS stations, \(N_{s}\), and number of samples, \(N_{t}\), in the time domain. **Figure 3.** Chile Subduction Zone. The synthetic data represent the displacements hypothetically recorded by the HR-GNSS stations shown as black triangles on the map (Baez et al., 2018), corresponding to the earthquakes whose hypocenters are shown as dots colored by depth. 
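To make the layer arrangement concrete, below is a minimal Keras sketch of the described network. The kernel size (1, 3), stride (1, 1), pool size (1, 2), zero-padding restricted to the first and last convolutional layers, ReLU activations, the 128/32/1 dense head with normal initialization and max-norm 3, and the MSE/Adam objective follow the text (the optimizer settings are from the learning process described in the next section); the filter counts, the exact placement of the three pooling layers, and the function name are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, constraints

def build_model(n_stations=3, n_samples=181,
                filters=(16, 16, 32, 32, 64, 64)):
    """Sketch of the described CNN; filter counts are assumed."""
    inp = tf.keras.Input(shape=(n_stations, n_samples, 3))  # U, N, E channels
    x = layers.Conv2D(filters[0], (1, 3), strides=(1, 1),
                      padding="same", activation="relu")(inp)   # zero-padded
    x = layers.Conv2D(filters[1], (1, 3), activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)
    x = layers.Conv2D(filters[2], (1, 3), activation="relu")(x)
    x = layers.Conv2D(filters[3], (1, 3), activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)
    x = layers.Conv2D(filters[4], (1, 3), activation="relu")(x)
    x = layers.Conv2D(filters[5], (1, 3), strides=(1, 1),
                      padding="same", activation="relu")(x)     # zero-padded
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)
    x = layers.Flatten()(x)
    for units in (128, 32):            # normal init, max-norm 3
        x = layers.Dense(units, activation="relu",
                         kernel_initializer="random_normal",
                         kernel_constraint=constraints.MaxNorm(3))(x)
    out = layers.Dense(1)(x)           # magnitude Mw, labels not normalized
    return tf.keras.Model(inp, out)

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="mse")
```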
## 4 Training ### Data preparation Since we explore the influence of using data from different numbers of stations, epicentral distances, and time series lengths on the model performance, we prepared datasets corresponding to three cases: Case I, Case II, and Case III, which involved the training of three models with the same architecture but using different shapes for the input data (Table 1); a sketch of the resulting input tensor is given after the table. The initial time in the time series, t=0, was referenced to the earthquake origin time. The displacement amplitudes have physical units of meters, and the sampling interval in the time domain is one second. For Case I and Case II, we used data from three and seven stations, respectively, located within a \(3^{\circ}\) radius from the epicenter (\(\Delta\leq 3^{\circ}\); Figure 5a), and time series that contain 180 seconds after the earthquake origin time. Then, for Case III, we sought to incorporate a greater range of maximum displacements observed in the time series, particularly those in the near and mid-field (Blewitt et al., 2006). To achieve this, we utilized data from seven different stations (Figure 5b), ensuring a balanced representation by selecting at least three and up to five stations within a radius of \(3^{\circ}\) from the epicenter. For the remaining stations farther than \(3^{\circ}\) away, we included those with epicentral distances that did not cause amplitude displacements too small to detect in the time series. The maximum distance depended on the magnitude of the event and on how the modeling was previously constrained to generate the synthetic dataset (Lin et al., 2021; Lin et al., 2020). Figure 4: Earthquakes used for testing the model. Red stars represent the epicenters of the earthquakes used with real HR-GNSS signals, and the green star is the epicenter of the Cascadia earthquake used with synthetic signals. In this last case, Case III, every time series contains the first 500 seconds after the origin time of the earthquakes. The stations were selected randomly for each case, with the caveat that we avoided having data from several stations too close to each other for a particular earthquake. We included only those cases in which at least three stations had azimuths that differed by \(40^{\circ}\) from each other. Thus, we aimed to have time series with features as different as possible, such as amplitude values, wave arrival times, duration of the earthquake signal, and so on. Lastly, we selected a total of 34,567 earthquakes and split them into different sets: 90% for the learning process (training and validation) and 10% for testing. \begin{table} \begin{tabular}{p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}} \hline **Cases** & **Number of stations per earthquake (\(\Delta\): epicentral distance)** & **Time series window size (samples)** & **Input shape** \\ \hline Case I & 3 stations, \(\Delta\leq 3^{\circ}\) & 181 & (3 x 181 x 3*) \\ \hline Case II & 7 stations, \(\Delta\leq 3^{\circ}\) & 181 & (7 x 181 x 3*) \\ \hline Case III & 7 stations: from 3 to 5 stations \(\Delta\leq 3^{\circ}\) and the rest \(\Delta>3^{\circ}\) & 501 & (7 x 501 x 3*) \\ \hline \multicolumn{4}{p{113.8pt}}{* 3 channels: components in U, E, and N directions.} \\ \end{tabular} \end{table} Table 1: Setting of the input data for the three cases. Each case corresponds to one training instance of the same architecture for the DL models, but the input data have shapes that differ between the cases. 
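As referenced above, a minimal sketch of how one training example is assembled follows; the record container and the random placeholder values are hypothetical, while the \((N_{s},N_{t},3)\) layout, the 1 Hz sampling, and the window sizes follow Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-station displacement records (1 Hz, meters); in practice
# these are the selected HR-GNSS time series described above.
stations = [{c: rng.normal(0.0, 0.01, 600) for c in ("U", "N", "E")}
            for _ in range(3)]

n_stations, n_samples = 3, 181        # Case I; Case III would use (7, 501)
X = np.zeros((n_stations, n_samples, 3))
for s, rec in enumerate(stations):
    for c, comp in enumerate(("U", "N", "E")):
        X[s, :, c] = rec[comp][:n_samples]   # t = 0 is the origin time
# Stacking one such tensor per event gives the (N_E, N_s, N_t, 3) array.
```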
### Learning process We split the synthetic dataset into a training set and a validation set, taking 90% of the earthquakes for the learning process (Figure 6). The training set was further split in an 80/20 ratio, with 80% used for training (72% of the total earthquakes in the database) and 20% used for fine-tuning the hyperparameters that control the learning through the validation process (18% of the total earthquakes in the database). Each earthquake in the training set was labeled with a target variable that corresponds to its magnitude value rounded to one decimal. The mean squared error (MSE) function was used to evaluate the losses during the training process. We optimized the model using the Adaptive Moment Estimation (Adam) method to reduce the losses (Geron, 2019; Kingma & Ba, 2015). To prevent update steps from exceeding the initial learning rate, we used a learning rate schedule with a standard decay function (decay rate \(=\) learning rate/epochs). This helped to improve the performance of the training. We set the initial learning rate to 0.01, the decay to 0.1/maximum number of epochs (see Appendix A), the maximum number of epochs to 200, and the batch size to 128. We used early stopping to reduce the possibility of overfitting, stopping the training process when the minimum validation loss was reached, with a patience of 20 epochs. Figure 5: Example of station distributions for the selection of data. The earthquake epicenter is in the center of the circle and the stations that were randomly chosen are shown as green triangles. In (a), three stations with epicentral distances \(\Delta\leq 3^{\circ}\) are shown as an example of station distributions for Case I. For Case II, we could use the same example as Case I, but choose seven stations instead. In (b), a particular example of seven stations for Case III: three stations \(\Delta\leq 3^{\circ}\), and four stations \(\Delta>3^{\circ}\). Figure 6: Histograms of earthquake distribution in the database are shown, separated by magnitude (left) and depth (right). The dataset was split into three sets: 72% for training, 18% for validation during training, and 10% for testing. The number of earthquakes is displayed on a logarithmic scale to better visualize the smaller data groups. ## 5 Testing _5.1 Chile Synthetic data_ We evaluated the performance of the three models using the 10% of the earthquakes in the database that were not used for training or validation (Figure 6). The selection criteria for the testing set were the same as those for the training set in each case. As shown in Figure 7, the magnitude estimations resulted in low errors in all three cases. The lowest root mean squared error (RMS) of 0.06 was achieved in Case III, where most of the estimated magnitudes were accurate. The error distributions by magnitude are also presented in Figures 7d, 7e, and 7f. In particular, for Case I and Case II, the majority of the estimated magnitudes in the range of \(7.0\leq\mathrm{Mw}\leq 8.3\) were accurate. For Case III, the best fit was observed for almost all magnitudes, ranging from Mw 7.0 to 9.6. In all three cases, the errors increased with increasing magnitude. This trend was more noticeable in Case I and Case II, where the RMS values increased from Mw 7.9. In contrast, the RMS values increased slightly from Mw 9.2 in Case III. For lower magnitudes (\(\mathrm{Mw}\leq 6.9\)), the magnitudes were mostly overestimated in all three cases. 
For higher magnitudes (\(\mathrm{Mw}\geq 9.5\)), the estimations were underestimated in Case I and Case II. However, due to the uneven distribution of earthquakes by magnitude in the testing set, we had very few earthquakes with \(\mathrm{Mw}\leq 6.9\) and \(\mathrm{Mw}\geq 9.5\), resulting in a higher error for those magnitudes and an RMS value that may not be as representative as those of other magnitudes. In the case of the highest magnitudes, the estimations may be affected by the windowing of the time series. For the largest earthquakes, 181 seconds in the recordings from stations nearly \(3^{\circ}\) away from the epicenter may not be long enough to contain the complete earthquake signal, resulting in increased error in Case I and Case II. Figure 8 shows a comparison of waveforms from earthquakes with different magnitudes and epicentral distances, in displacement time series of 181 and 501 seconds. For example, the synthetic waveforms of an Mw 6.7 earthquake with epicentral distances \(\Delta\leq 3^{\circ}\) are complete before 150 seconds, whereas the waveforms of an Mw 8.1 earthquake, from stations \(\Delta\leq 3^{\circ}\), just barely fit within a time window of 181 seconds. Furthermore, the waveforms of an Mw 9.0 earthquake require more than 181 seconds to fit the complete signal for epicentral distances larger than \(1.5^{\circ}\). Figure 7: Error of magnitude estimation using synthetic HR-GNSS data from Chile, for every case described in Table 1. Each circle corresponds to one bin whose color represents the percentage of tests done for each real magnitude. Plots (a), (b), and (c) are the fits of the magnitude estimations, where the RMSs are shown by the dotted lines. The results are binned on a 0.1 magnitude grid. Plots (d), (e), and (f) are the distributions of errors and RMS for each magnitude. The errors correspond to the difference between the real magnitude and the magnitude estimated by the model. The results are binned on a grid, 0.1 magnitude \(\times\) 0.1 magnitude error. The RMS values for each magnitude are shown as green diamond symbols. The testing data distribution by magnitude is represented through histograms in the background of the plots. **Figure 8**. Comparison of HR-GNSS synthetic data from earthquakes of magnitudes Mw 6.7, 8.1, and 9.0, in three components (U: up, N: north, and E: east directions). The initial time is referenced to the origin time of the earthquake. The amplitudes are normalized by their maximum absolute value and colors are related to the epicentral distance of the stations (for distances \(\Delta\leq 3^{\circ}\)). The dashed lines refer to a window size of 181 seconds. (Synthetic data provided by Lin et al., 2020.) _5.2 Comparison using Chile and Cascadia Synthetic Data_ To evaluate the performance of the models using synthetic data from regions with different tectonic regimes, we tested the models with two Mw 8.7 earthquakes: one from Chile (Lin et al., 2020) and the other from Cascadia (Melgar et al., 2016). We randomly selected an Mw 8.7 earthquake from the testing dataset of Chile. The data corresponded to displacement time series from 63 HR-GNSS stations for the Chile earthquake and 62 HR-GNSS stations for the Cascadia earthquake. For each earthquake, we randomly generated 500 distinct groups of three and seven stations, with the time series window size set according to each case. In Figure 9, the results of the models in the three cases are quite similar for both earthquakes. 
Although the results of the model in Case III are again the best in most estimations, with the least scattered errors and an RMS of 0.07 for Chile and 0.11 for Cascadia, in general, all three cases show a suitable performance of the model with a relatively low RMS. There is no discernible pattern indicating whether those groups of stations farther away from the epicenter tend to have a higher error than the group with the closest stations to the epicenter. However, as we outlined in the first testing (Section 5.1), for great magnitudes, the performance of the model is better when using time series with long window sizes. Hence, we assume that the use of sufficiently long time series in Case III could contribute to obtaining better results than in the other two cases. **Figure 9. Errors of magnitude estimations using synthetic data from an Mw 8.7 Cascadia and an Mw 8.7 Chile earthquake, for the DL models in the three cases (Table 1). The circles represent the magnitude error for each group of stations defined by 500 different random combinations. Both the color scale and circle sizes (sorted from left to right) depend on the median of the epicentral distances (\(\Delta\) Median) of each combination of stations, in each case.** _5.3 Real earthquakes_ The noise in real GNSS signals is a significant factor that could affect the accuracy of magnitude estimations, especially if the DL model used was trained with synthetic data that lacks noise. Therefore, we tested the models when faced with real HR-GNSS data from six earthquakes with different magnitudes and from different tectonic regions (Figure 4): the Maule, Iquique, and Illapel earthquakes from Chile, and the others from Tehuantepec (Mexico), Nicoya (Costa Rica), and Mentawai (Indonesia). The waveforms of these earthquakes are from the database provided by Melgar & Ruhl (2018) and consist of displacement waveforms with a signal-to-noise ratio greater than 3 dB and a minimum peak amplitude of 0.04 m (Ruhl et al., 2019). Because the number of stations in the database differs for each earthquake, we tested with different numbers of station groups for each one. Thus, we had some cases with only one group tested (Nicoya, Tehuantepec, and Mentawai), whereas for Illapel, Maule, and Iquique, we had numerous station groups, from which we randomly selected 500 cases for testing. In Figure 10, we show the results for every case and earthquake. In cases such as Iquique and Tehuantepec, the magnitude estimations reached RMS values of 0.09 and 0.1, respectively, which are comparable to the tests done using synthetic data. However, in general, we observed more scattered error values in these results from real data than from synthetic data. Our first assumption is that the noise content in real data causes an increase in the error value since our models are trained with ideal and clean data. However, we can analyze some other considerations: 1. The highest RMS value of 0.49 was obtained for the Mw 7.6 Nicoya earthquake. This occurred when all stations in the group were within an epicentral distance of less than \(1^{\circ}\), and large amplitudes, with displacements of nearly 50 cm, were observed in the first 20 seconds (Figure 10(a)). It is possible that the model overestimated the magnitude in this case because, during training, the grouped stations were rarely within such a short radius with large displacements. 2. The results for the Mw 7.7 Mentawai earthquake are acceptable for Case I and Case II, with an RMS of 0.2. 
However, in Case III, the RMS of 0.44 is associated with the same cause as for the Nicoya earthquake since, in this case, only one station is at \(\Delta>3^{\circ}\) (Figure 10(b)), and the model was trained using groups with at least two stations at \(\Delta>3^{\circ}\). 3. As mentioned above, the Mw 8.1 Iquique earthquake had the best fit in Case II, with an RMS value of 0.09. But for Case III, the RMS value was 0.33, obtained from groups of stations with epicentral distances whose median is less than \(3.5^{\circ}\). Despite having data from stations with epicentral distances of up to approximately \(8^{\circ}\), the dataset only contained three stations farther than \(3.5^{\circ}\) away (Figure 11(a)). 4. The most accurate magnitude estimations for the three largest earthquakes, Tehuantepec, Illapel, and Maule, were obtained in Case III, with RMS values of 0.1, 0.17, and 0.13, respectively. These results are consistent with the results obtained from synthetic data. Specifically, the test using data from the Mw 8.2 Tehuantepec earthquake only had seven stations, and only two of them were located within a \(3^{\circ}\) radius from the epicenter (Figure 10(c)). In Cases I and II, stations located more than \(3^{\circ}\) away were used in the models, which resulted in waveforms with smaller displacements than those used in the model training. Therefore, the magnitude estimations for these models were expected to be underestimated. However, the errors for these models were still low. 5. For the Mw 8.8 Maule earthquake, only five stations were located within a radius of \(3^{\circ}\) from the epicenter (Figure 11(c)). In Case II, stations located at \(\Delta>3^{\circ}\) were used in the testing. This could have caused the scattered errors observed in Case II and higher errors than those in Case I and Case III. Figure 10: Errors of magnitude estimations using real data of earthquakes from different regions and with different magnitudes. The plots correspond to the results in Case I, Case II, and Case III, from top to bottom, respectively. The circles indicate the magnitude error for each group of stations, which were defined by different random combinations. Both the color scale and circle sizes (sorted from left to right) depend on the median of the epicentral distances (\(\Delta\) Median) of each combination of stations, for each earthquake. **Figure 11.** HR-GNSS real data from earthquakes from Tehuantepec (Mexico), Nicoya (Costa Rica), and Mentawai (Indonesia), which were used for the model testing. The time series are in three components (U: up, N: north, and E: east directions). The initial time is referenced to the origin time of the earthquake. The color scale is related to the epicentral distance. The dashed lines refer to a window size of 181 seconds. **Figure 12.** HR-GNSS real data from the Maule, Iquique, and Illapel earthquakes (Chile), which were used for the model testing. The time series are in three components (U: up, N: north, and E: east directions). The initial time is referenced to the origin time of the earthquake. On the left side, the signals are from epicentral distances \(\Delta\leq 3^{\circ}\), and on the right, from \(\Delta>3^{\circ}\). Colors are related to the epicentral distance of the station. The dashed lines refer to a window size of 181 seconds. ## 6 Conclusions The DL architecture proposed in this work is an experimental version for earthquake magnitude estimation. 
It has been trained using synthetic displacement time series from groups of three and seven high-rate Global Navigation Satellite System (HR-GNSS) stations, and different window sizes containing 180 samples (3 minutes) and 500 samples (over 8 minutes) after the earthquake origin time. The performance of the DL model for the estimation of earthquake magnitude from synthetic data has been satisfactory. Despite being trained with synthetic data from Chile, the model has given comparable results in tests using synthetic data from Cascadia, which represents a different tectonic region. Additionally, the results of using real data from earthquakes with different magnitudes and from different regions showed good accuracy of the estimations, provided that the data from a particular group of stations have similar epicentral distance constraints to those used during the model training. The length of the time series should also be long enough to fit most of the earthquake signal within the time window, as incomplete signals could affect the estimations. While the DL models performed well using real data, regardless of the tectonic region of the earthquake, it would be advisable to evaluate their robustness by including noise in the training data and addressing accuracy problems related to the imbalanced amount of data by earthquake magnitude and the non-normalized displacement time series in the training. This approach proposes a DL model designed for specific shapes of input data (number of stations and windowing), but the architecture could be improved to work independently of the window size of the time series and the number of stations. This would enable a faster estimation by the model using only near-field data from stations within a radius of less than \(1^{\circ}\) from the epicenter, which could provide reliable magnitude estimation with less than two minutes of data after the earthquake origin time. The DL architecture proposed in this work is the result of preliminary analysis that will be helpful for the improvement of future DL approaches that could be used for earthquake magnitude estimation with HR-GNSS data. It is still necessary to introduce some changes in the DL model to adapt it for real-time magnitude estimation. Also, it is important to note that this is a numerical method, and therefore the robustness of the models and their physical sense depend on proper handling and selection of the training data set. ## Acknowledgments We would like to express our sincere thanks to J. Alejandro Gonzalez, Carlos Reinoza, and B. Crowell for the constructive comments and information about geodetic data analysis for seismology. The databases used in this work are provided at [https://doi.org/10.5281/zenodo.4008690](https://doi.org/10.5281/zenodo.4008690) and [https://doi.org/10.5281/zenodo.1434374](https://doi.org/10.5281/zenodo.1434374). This research was supported by the Federal Ministry of Education and Research of Germany (BMBF), grant SAI 01IS20059. Modeling and data processing were performed at the Frankfurt Institute for Advanced Studies, with a GPU cluster funded by BMBF for the project Seismologie und Artifizielle Intelligenz (SAI). ## Statements & Declarations _Competing Interests_ The authors have no relevant financial or non-financial interests to disclose. ### Author Contributions All authors contributed to the study conception and design. Conceptualization, data collection, programming, and analysis were performed by Claudia Quinteros Cartaya. 
Methodology contributions were made by Wei Li, Jonas Koehler, and Johannes Faber. Conceptualization and review were done by Nishtha Srivastava. The first draft of the manuscript was written by Claudia Quinteros Cartaya, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
2305.06908
CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model
Denoising diffusion probabilistic models (DDPMs) have shown promising performance for speech synthesis. However, a large number of iterative steps are required to achieve high sample quality, which restricts the inference speed. Maintaining sample quality while increasing sampling speed has become a challenging task. In this paper, we propose a "Co"nsistency "Mo"del-based "Speech" synthesis method, CoMoSpeech, which achieves speech synthesis through a single diffusion sampling step while maintaining high audio quality. The consistency constraint is applied to distill a consistency model from a well-designed diffusion-based teacher model, which ultimately yields superior performance in the distilled CoMoSpeech. Our experiments show that by generating audio recordings with a single sampling step, CoMoSpeech achieves an inference speed more than 150 times faster than real-time on a single NVIDIA A100 GPU, which is comparable to FastSpeech2, making diffusion-sampling-based speech synthesis truly practical. Meanwhile, objective and subjective evaluations on text-to-speech and singing voice synthesis show that the proposed teacher models yield the best audio quality, and the one-step-sampling-based CoMoSpeech achieves the best inference speed with better or comparable audio quality to other conventional multi-step diffusion model baselines. Audio samples are available at https://comospeech.github.io/.
Zhen Ye, Wei Xue, Xu Tan, Jie Chen, Qifeng Liu, Yike Guo
2023-05-11T15:51:46Z
http://arxiv.org/abs/2305.06908v4
# CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model

###### Abstract

Denoising diffusion probabilistic models (DDPMs) have shown promising performance for speech synthesis. However, a large number of iterative steps are required to achieve high sample quality, which restricts the inference speed. Maintaining sample quality while increasing sampling speed has become a challenging task. In this paper, we propose a **C**onsistency **M**odel-based Speech synthesis method, CoMoSpeech, which achieves speech synthesis through a single diffusion sampling step while maintaining high audio quality. The consistency constraint is applied to distill a consistency model from a well-designed diffusion-based teacher model, which ultimately yields superior performance in the distilled CoMoSpeech. Our experiments show that by generating audio recordings with a single sampling step, the CoMoSpeech achieves an inference speed more than 150 times faster than real-time on a single NVIDIA A100 GPU, which is comparable to FastSpeech2, making diffusion-sampling-based speech synthesis truly practical. Meanwhile, objective and subjective evaluations on text-to-speech and singing voice synthesis show that the proposed teacher models yield the best audio quality, and the one-step-sampling-based CoMoSpeech achieves the best inference speed with better or comparable audio quality to other conventional multi-step diffusion model baselines. Audio samples are available at [https://comospeech.github.io/](https://comospeech.github.io/).

## 1 Introduction

Speech synthesis (Tan et al., 2021) aims to produce realistic human audio and broadly includes text-to-speech (TTS) (Taylor, 2009) and singing voice synthesis (SVS) (Nishimura et al., 2016) tasks, driven by the increasing applications in human-machine interaction and entertainment. The mainstream of speech synthesis has been dominated by deep neural network (DNN)-based methods (Wang et al., 2017) (Kim et al., 2021), and typically a two-stage pipeline is adopted (Ren et al., 2019) (Lu et al., 2020), in which the acoustic model first converts the textual and other controlling information into acoustic features (e.g., mel-spectrogram), and the vocoder then transforms the acoustic features into audible waveforms. The two-stage pipeline has achieved substantial success since the acoustic features, which are expressed by frames, effectively act as a "relay" to alleviate the dimension-exploding problem of converting short texts into long audio with a high sampling frequency. The quality of the acoustic features produced by the acoustic model, typically mel-spectrograms, crucially affects the quality of the synthesized speech. Approaches widely used in industry, such as Tacotron (Wang et al., 2017), DurIAN (Yu et al., 2019), and FastSpeech (Ren et al., 2019), generally adopt convolutional neural networks (CNNs) and Transformers to predict the mel-spectrogram from the controlling factors. Diffusion model methods have attracted much attention because their potential to produce high-quality samples is well recognized. A diffusion model [Ho et al., 2020], also named a score-based model [Song et al., 2020b], is based on two processes: a diffusion process that gradually perturbs data to noise, and a reverse process that progressively converts noise back to data. A critical drawback [Song et al., 2020a] [Yang et al., 2022] of the diffusion model is that it requires many iterations for generation.
Several methods based on the diffusion model have been proposed for acoustic modeling in speech synthesis, but most of these works still suffer from slow generation speed. Grad-TTS [Popov et al., 2021] formulated a stochastic differential equation (SDE) [Anderson, 1982] to gradually transform noise into the mel-spectrogram, and a numerical ODE solver is used for solving the reverse SDE [Song et al., 2020b]. Although yielding high audio quality, the inference speed is low due to the large number of iterations (\(10\sim 1000\) steps) in the reverse process. ProDiff [Huang et al., 2022] was further developed to use progressive distillation [Salimans and Ho, 2022] to reduce the sampling steps. In Liu et al. [2022b], DiffGAN-TTS adopted an adversarially-trained model to approximate the denoising function for efficient speech synthesis. In Chen et al. [2022b], ResGrad uses the diffusion model to estimate the residual between the prediction of a pre-trained FastSpeech2 [Ren et al., 2020] and the ground truth. Apart from the normal speaking voice, recent studies also focus on voices with more complex variations in pitch, timing, and expression. For example, DiffSinger [Liu et al., 2022a] shows that a well-designed diffusion model can achieve high quality on the synthesized singing voice through one hundred steps of iteration.

From the above discussion, the objectives of speech synthesis are three-fold:

* High audio quality: The generative model should accurately express the nuances of the speaking voice, which contribute to the naturalness and expressiveness of the synthesized audio. Additionally, artefacts and distortions in the generated audio should be avoided.
* Fast inference speed: Real-time applications, including communication and interactive speech and music systems, require fast audio generation. When considering making time for other algorithms in an integrated system, simply being faster than real-time is insufficient for speech synthesis.
* Beyond speaking: Instead of only the normal speaking voice, more complex modeling of voice in terms of pitch, expression, rhythm, breath control, and timbre is required, such as for the singing voice.

Although many efforts have been made, due to the mechanism of the denoising diffusion process when performing sampling, the trade-off problem among synthesized audio quality, model capability, and inference speed still exists in TTS and is particularly pronounced in SVS. Existing methods generally seek to alleviate the slow inference problem rather than solve it fundamentally, and their speed is still not comparable to conventional methods that do not rely on diffusion models, such as FastSpeech2 [Ren et al., 2020]. Recently, by expressing the stochastic differential equation (SDE) describing the sampling process as an ordinary differential equation (ODE), and further enforcing the consistency property of the model on the ODE trajectory, the consistency model [Song et al., 2023] has been developed, yielding high-quality images with only one sampling step. However, despite such success in image synthesis, no speech synthesis model based on the consistency model is known so far. This indicates the potential of designing a consistency-model-based speech synthesis method to achieve both high-quality synthesis and fast inference speed.

In this paper, we propose a **C**onsistency **M**odel-based method for speech synthesis, namely CoMoSpeech, which achieves fast and high-quality audio generation. Our CoMoSpeech is distilled from a pre-trained teacher model.
More specifically, our teacher model leverages the SDE to smoothly transform the mel-spectrogram into the Gaussian noise distribution and learns the corresponding score function. After training, we utilize the corresponding numerical ODE solvers to construct the teacher denoiser function, which is used for further consistency distillation. Through distillation, our CoMoSpeech with the consistency property is obtained. Ultimately, high-quality audio can be produced by our CoMoSpeech with single-step sampling. We conducted experiments for both TTS and SVS, and the results show that CoMoSpeech can generate speech with one sampling step, more than 150 times faster than real-time. The audio quality evaluation also shows that CoMoSpeech achieves better or comparable audio quality to other diffusion model methods involving tens to hundreds of iterations. This makes speech synthesis based on the diffusion model truly practical for the first time.

## 2 Background of Consistency Model

Now we briefly introduce the consistency model. Suppose we have a dataset with distribution \(p_{\text{data}}(\mathbf{x})\). The diffusion model performs sampling by progressively adding Gaussian noise to diffuse the data, and then adopting a reverse denoising process to generate samples from noise. For data \(\{\mathbf{x}\}_{t=0}^{T}\) in the diffusion process, where \(p_{0}(\mathbf{x})=p_{\text{data}}(\mathbf{x})\), \(p_{T}(\mathbf{x})\) is Gaussian, and \(T\) is the time constant, the forward diffusion process can be expressed by an SDE (Song et al., 2020b) as

\[\mathrm{d}\mathbf{x}=f(\mathbf{x},t)\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}, \tag{1}\]

where \(\mathbf{w}\) is the standard Wiener process, and \(f(\cdot,\cdot)\) and \(g(\cdot)\) are the drift and diffusion coefficients, respectively. In previous work (Song et al., 2020b; Karras et al., 2022), \(f(\mathbf{x},t)\) typically acts as a time-dependent scaling, \(f(\mathbf{x},t)=f(t)\mathbf{x}\), thus

\[\mathrm{d}\mathbf{x}=f(t)\mathbf{x}\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}. \tag{2}\]

A notable property of the above SDE is that it corresponds to a probability flow ODE, which shares the marginal distributions of the SDE at each time \(t\) (Song et al., 2020b; Karras et al., 2022):

\[\mathrm{d}\mathbf{x}=\left[f(t)\mathbf{x}-\frac{1}{2}g(t)^{2}\nabla\log p_{t}(\mathbf{x})\right]\mathrm{d}t, \tag{3}\]

where \(\nabla\log p_{t}(\mathbf{x})\) is the score function of \(p_{t}(\mathbf{x})\) (Hyvärinen and Dayan, 2005). The probability flow ODE eliminates the stochastic term \(\mathbf{w}\), thus generating a deterministic sampling trajectory. As long as the score function \(\nabla\log p_{t}(\mathbf{x})\) is known, the probability flow ODE in (3) can be used for sampling. Supposing \(D(\mathbf{x}_{t},t)\) is the "denoiser" which denoises the sample \(\mathbf{x}_{t}\) at step \(t\), the score function can be obtained by minimizing the denoising error \(||D(\mathbf{x}_{t},t)-\mathbf{x}||^{2}\) (Karras et al., 2022), yielding:

\[\nabla\log p_{t}(\mathbf{x})=(D(\mathbf{x}_{t},t)-\mathbf{x}_{t})/\sigma_{t}^{2}, \tag{4}\]

where \(\sigma_{t}^{2}=\int g(t)^{2}\mathrm{d}t\). Further, probability flow ODE based sampling can be performed by first sampling from a noise distribution and then denoising to a true sample with a numerical ODE solver such as the Euler or Heun solver (Song et al., 2020b; Karras et al., 2022). However, the ODE solvers normally involve many iterations, causing slow sampling.
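To make this concrete, the following is a minimal sketch of Euler integration of the probability flow ODE under the EDM parameterization (\(f=0\), \(g(t)=\sqrt{2t}\), so \(\sigma_{t}=t\) and, by (3) and (4), \(\mathrm{d}\mathbf{x}/\mathrm{d}t=(\mathbf{x}_{t}-D(\mathbf{x}_{t},t))/t\)); the denoiser here is a placeholder for a trained network, and the time schedule is an input:

```python
import torch

@torch.no_grad()
def euler_pf_ode_sample(denoiser, x_T, t_steps):
    """Euler integration of the probability-flow ODE dx/dt = (x - D(x, t)) / t
    from t_steps[0] (maximum noise) down to t_steps[-1] (close to 0).

    denoiser: callable D(x, t) trained to predict clean data from noisy x.
    x_T:      sample drawn from the terminal noise distribution.
    t_steps:  decreasing 1-D tensor of time points, e.g. an EDM-style schedule.
    """
    x = x_T
    for t_cur, t_next in zip(t_steps[:-1], t_steps[1:]):
        d = (x - denoiser(x, t_cur)) / t_cur  # ODE drift at (x, t_cur)
        x = x + (t_next - t_cur) * d          # Euler step (t_next < t_cur)
    return x
```

Each loop iteration costs one denoiser evaluation, which is exactly why solvers with tens to hundreds of steps are slow.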
To accelerate sampling, the consistency property has been proposed (Song et al., 2023), which imposes both

\[D(\mathbf{x}_{t},t)=D(\mathbf{x}_{t^{\prime}},t^{\prime}) \tag{5}\]

for any \(t\) and \(t^{\prime}\), and

\[D(\mathbf{x}_{0},0)=\mathbf{x}_{0}. \tag{6}\]

Figure 1: An illustration of CoMoSpeech. Our CoMoSpeech distills the multi-step sampling of the teacher model into one step utilizing the consistency constraint.

In this way, a consistency model can be obtained, and one-step sampling \(D(\mathbf{x}_{T},T)\) can be achieved, since all points on the sampling trajectory of the probability flow ODE are directly linked to the original distribution \(p_{0}(\mathbf{x})\). The consistency model can be trained either in isolation or by distilling from a pre-trained diffusion-based teacher model, and the latter approach generally produces better performance. Detailed discussions can be found in (Song et al., 2023). In our work, a distillation-based consistency model for speech synthesis, called CoMoSpeech, is proposed as below.

## 3 CoMoSpeech

This section presents the proposed CoMoSpeech, a one-step speech synthesis model. The framework of the proposed method is shown in Figure 1, which has two main stages. The first stage trains a diffusion-based teacher model to produce audio conditioned on the textual inputs (for TTS and SVS) and musical score inputs (for SVS). Then, in the second stage, by enforcing the consistency property, we distill CoMoSpeech from the teacher model to finally achieve one-step inference given the conditional inputs. How to design the teacher model, perform consistency distillation, and implement training and inference will be discussed below.

### Teacher Model

As a blossoming class of generative models, diffusion models are applied in many speech synthesis systems and generate high-quality audio. However, specific criteria must be met to serve as the teacher model. First, the model needs to meet the theoretical requirement. As mentioned in Section 2, we aim to adopt the denoiser to implement one-step generation, which means this function should point to the clean data instead of the noise. In other words, following the terminology of (Huang et al., 2022), our teacher model should be generator-based rather than gradient-based. This restriction requires us to modify the state-of-the-art model Grad-TTS (Popov et al., 2021) to be our teacher model. We inherited the setting of learning from the distribution of the prior mel-spectrogram, \(\mu\), to the distribution of the ground-truth mel-spectrogram \(gt_{\text{mel}}\), as well as the main architecture. In addition, we adopt EDM (Karras et al., 2022) as our design choice for the diffusion model to enable further consistency distillation (Song et al., 2023). Specifically, we set the mel-spectrogram as \(\mathbf{x}\) in (2), with the EDM design choices of noise schedule \(\sigma(t)=t\) and scaling \(s(t)=1\). Combined with (4), our ODE can be formulated as

\[\mathrm{d}\mathbf{x}_{t}=[(\mathbf{x}_{t}-D_{\theta}(\mathbf{x}_{t},t,cond))/t]\mathrm{d}t, \tag{7}\]

where \(cond\) is the conditional input that will be introduced in the following section, and \(D_{\theta}(\mathbf{x}_{t},t,cond)\) is designed to precondition the neural network with a \(t\)-dependent skip connection as

\[D_{\theta}(\mathbf{x}_{t},t,cond)=c_{\text{skip}}\left(t\right)\mathbf{x}_{t}+c_{\text{out}}\left(t\right)F_{\theta}(\mathbf{x}_{t},t,cond), \tag{8}\]

where \(F_{\theta}\) is the network to be trained, whose architecture can be flexibly chosen.
For instance, the architectures of WaveNet (Liu et al., 2022) (Oord et al., 2016) or U-Net (Popov et al., 2021) (Ronneberger et al., 2015) can be selected to construct \(F_{\theta}\). The \(c_{\text{skip}}\left(t\right)\) and \(c_{\text{out}}\left(t\right)\) are used to modulate the skip connection and scale the magnitudes of \(F_{\theta}\), and are given by (Song et al., 2023)

\[c_{\text{skip}}(t)=\frac{\sigma_{\text{data}}^{2}}{(t-\epsilon)^{2}+\sigma_{\text{data}}^{2}},\quad c_{\text{out}}(t)=\frac{\sigma_{\text{data}}(t-\epsilon)}{\sqrt{\sigma_{\text{data}}^{2}+t^{2}}}, \tag{9}\]

where \(\sigma_{\text{data}}=0.5\) is used to balance the ratio between \(c_{\text{skip}}\) and \(c_{\text{out}}\), and \(\epsilon=0.002\) is the smallest time instant during sampling. The first reason for choosing the above formulation is that it meets (6), since \(c_{\text{skip}}\left(\epsilon\right)=1\) and \(c_{\text{out}}\left(\epsilon\right)=0\). The second reason is that both scaling factors help scale the predictions of \(F_{\theta}\) to unit variance, which avoids large variations in gradient magnitudes at different noise levels. To train \(D_{\theta}\), the loss function is formulated as

\[\mathcal{L}_{\theta}=||D_{\theta}(\mathbf{x}_{t},t,cond)-\mathbf{x}_{0}||^{2}, \tag{10}\]

which is a weighted \(\mathcal{L}_{2}\) loss between the predicted mel-spectrogram \(pred_{\text{mel}}\) and the ground-truth mel-spectrogram \(gt_{\text{mel}}\); we also re-weight the loss function for different \(t\) in the same way as EDM (Karras et al., 2022). Finally, the teacher model can be trained, and the synthesized mel-spectrogram can be sampled by Algorithm 1. During inference on the teacher model, we first sample \(\mathbf{x}_{N}\) from \(\mathcal{N}(\mu,I)\), and then iterate the numerical ODE solver for \(N\) steps. More specifically, we choose the Euler solver, and the \(N\) discrete time points are designed with the same setting as in EDM (Karras et al., 2022).

```
Algorithm 2: Sampling procedure of the proposed CoMoSpeech
Input: the denoiser function D_θ; the prior mel-spectrogram μ;
       a set of time points t_{i∈{0,...,N}}
1: Sample x_N ~ N(μ, I)
2: x ← D_θ(x_N, t_N, μ)
3: if one-step synthesis: Output x
Optional: multi-step synthesis
4: for i = N−1 to 1 do
5:    Sample z ~ N(0, I)
6:    x_i ← x + sqrt(t_i² − ε²) · z
7:    x ← D_θ(x_i, t_i, μ)
8: endfor
9: Output x
```

### Consistency Distillation

A one-step diffusion-sampling-based model is further trained from the teacher model based on consistency distillation, resulting in the proposed CoMoSpeech. We now re-examine the constraints defined in (5) and (6). We note that, given the choice of \(c_{\text{skip}}\left(t\right)\) and \(c_{\text{out}}\left(t\right)\) in (9), the denoiser \(D_{\theta}\) in the proposed teacher model already satisfies (6); therefore, the remaining training objective is to fulfill the property in (5). Inspired by (Song et al., 2023), we utilize momentum-based distillation to train the proposed CoMoSpeech.
The consistency distillation loss is defined as

\[\mathcal{L}_{\theta}=||D_{\theta}(\mathbf{x}_{i+1},t_{i+1},cond)-D_{\theta^{-}}(\hat{\mathbf{x}}_{i}^{\phi},t_{i},cond)||^{2}, \tag{11}\]

where \(\theta\) and \(\theta^{-}\) are the weights of CoMoSpeech and its target network, both initialized from the teacher model, \(\phi\) is the ODE solver of the teacher model from Section 3.1, and \(i\) is a step index uniformly sampled from the total ODE steps from \(N\) to \(1\). \(\hat{\mathbf{x}}_{i}^{\phi}\) is estimated from \(\mathbf{x}_{i+1}\) and the ODE solver \(\phi\). During training, gradient-based iterations directly optimize the weights \(\theta\) over the loss function, and \(\theta^{-}\) is recursively updated by

\[\theta^{-}\leftarrow\mathrm{stopgrad}(\alpha\theta^{-}+(1-\alpha)\theta), \tag{12}\]

where \(\alpha\) is a momentum coefficient empirically set to 0.95. After distillation, the consistency property can be exploited so that the original data point \(\mathbf{x}_{0}\) can be recovered from any point \(\mathbf{x}_{t}\) on the ODE trajectory, as shown in Figure 2. Therefore, we can directly generate the target sample from the distribution \(\mathbf{x}_{N}\sim\mathcal{N}(\mu,I)\) at step \(t_{N}\), as

\[mel_{\text{pred}}=D_{\theta}(\mathbf{x}_{N},t_{N},cond). \tag{13}\]

Therefore, one-step mel-spectrogram generation can be achieved. In addition, multi-step synthesis can be conducted by Algorithm 2 as a trade-off between audio quality and sampling speed, similar to other stochastic samplers.

### Conditional Input

A remaining problem in the framework shown in Figure 1 is how to obtain the conditional input \(cond\), which is used throughout the algorithm design. A well-designed speech synthesizer is expected to perform well not only on reading-style speech synthesis (TTS) but also on more complicated tasks, such as SVS, which additionally produces highly dynamic melodies. In producing the conditional inputs, both TTS and SVS tasks are considered to examine the proposed framework's effectiveness comprehensively. Concretely, we adopt the phoneme as the basic input for TTS and SVS. A simple lookup table is then used to embed the phoneme feature. Additionally, for the SVS task, we add a music score that specifies the notes time-aligned with the phonemes. For note feature extraction, we use the embedding method for both the categorical note pitch and the slur indicator, and rely on a linear layer for the continuous note duration. Summing all the feature sequences together, we utilize the encoder structure and variance adaptor of FastSpeech (Ren et al., 2019). Specifically, \(N\) feed-forward transformer blocks (FFT blocks) are stacked to extract the phoneme hidden sequence. A duration predictor is used to estimate the duration of each phoneme, \(d_{\text{pred}}\), with the corresponding loss function

\[\mathcal{L}_{\text{duration}}=||\log(d_{\text{pred}})-\log(d_{\text{gt}})||^{2}, \tag{14}\]

where \(d_{\text{gt}}\) denotes the ground-truth phoneme duration. Further, the length regulator expands the phoneme hidden sequence into a hidden sequence in the mel-spectrogram domain according to the phoneme durations, denoted as \(hidden_{\text{mel}}\). Then, the prior mel-spectrogram \(\mu\) is predicted from \(hidden_{\text{mel}}\) with the prior loss function

\[\mathcal{L}_{\text{prior}}=||\mu-gt_{\text{mel}}||^{2}. \tag{15}\]
As shown in the bottom-left part of Figure 1, since the expanded features belonging to the same phoneme in \(hidden_{\text{mel}}\) are repeated, the predicted \(prior_{\text{mel}}\) can only roughly approximate the time-frequency structure of \(gt_{\text{mel}}\) based on the phoneme sequence. Nevertheless, it significantly reduces the distance to \(gt_{\text{mel}}\) by sampling from \(\mathcal{N}(\mu,\mathbf{I})\) rather than \(\mathcal{N}(\mathbf{0},\mathbf{I})\).

Figure 2: An illustration of the consistency property. A function with the consistency property maps any point on the ODE trajectory to the original data.

For the neural network and conditional inputs in the denoiser, we investigated different combinations and finally selected a) the WaveNet architecture (Oord et al., 2016) with \(hidden_{\text{mel}}\) as the feature \(cond\) in (13) for SVS, and b) the U-Net architecture (Ronneberger et al., 2015) with \(\mu_{\text{mel}}\) as \(cond\) for TTS.

### Training procedure

The whole process can be summarized in two stages: the training of the teacher model and the consistency distillation. For the training of the teacher model, the loss consists of three parts: the duration loss (14), the prior loss (15), and the denoising loss (10). These three losses are summed together without any extra weighting. The objective of this stage is to build a speech synthesis system that can generate high-quality audio with multi-step synthesis and has the potential for further consistency distillation. The second stage is consistency distillation. There is only one loss function, defined by (11), which helps the model learn the consistency property. The parameters are initialized from the teacher model. During training, the parameters of the encoder are fixed, which means only the weights in the denoiser are updated. After distillation, high-quality recordings can be achieved with one-step synthesis (13).

## 4 Experiments

To evaluate the performance of the proposed CoMoSpeech, we conduct experiments on both TTS and SVS.

### Experimental Setup

#### 4.1.1 Data and Preprocessing

We adopt the public LJSpeech dataset (Ito and Johnson, 2017) for TTS, which includes around \(24\) hours of English female voice recordings sampled at \(22.05\) kHz. Similar to (Ren et al., 2019) (Chen et al., 2022b), we split the dataset into three sets: \(12,228\) samples for training, \(349\) samples (with document title LJ003) for validation, and \(523\) samples (with document titles LJ001 and LJ002) for testing. Following the common practice in (Ren et al., 2020) (Huang et al., 2022) for TTS, we extract the 80-bin mel-spectrogram with a frame size of \(1024\) and a hop size of \(256\). For the SVS task, we use the Opencpop dataset (Wang et al., 2022), containing 100 Chinese pop songs split into \(3,756\) utterances with a total duration of around \(5.2\) hours. All recordings are from a single female singer and are labeled with aligned phoneme and MIDI-pitch sequences. We follow the official train/test split (Wang et al., 2022), i.e., \(95\) songs for training and \(5\) songs for evaluation. Following the setting in (Chen et al., 2020) (Liu et al., 2022a) for SVS, the recordings are resampled at \(24\) kHz with \(16\)-bit precision, and the \(80\)-bin mel-spectrogram is extracted with a frame size of \(512\) and a hop size of \(128\).
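For reference, mel-spectrogram features of this kind can be extracted as in the following sketch (shown here with librosa for the LJSpeech setting; the exact extraction code used in the experiments is not specified in the text):

```python
import librosa
import numpy as np

def extract_mel(wav_path, sr=22050, n_fft=1024, hop_length=256, n_mels=80):
    """Compute an 80-bin log-mel spectrogram (LJSpeech/TTS setting).
    For the Opencpop/SVS setting described above, the text instead uses
    sr=24000, n_fft=512, hop_length=128."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    return np.log(np.clip(mel, a_min=1e-5, a_max=None))  # log compression
```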
#### 4.1.2 Implementation Details

For TTS, for a fair comparison, the encoder and duration predictor are exactly the same as those in Grad-TTS [Popov et al., 2021]. The encoder contains \(6\) feed-forward transformer (FFT) blocks [Ren et al., 2019], and the hidden channel is set to \(192\). The duration predictor uses two convolutional layers for prediction. Both the teacher model and CoMoSpeech are trained for \(1.7\) million iterations on a single NVIDIA A100 GPU with a batch size of \(16\). The Adam optimizer [Kingma and Ba, 2014] is adopted with a learning rate of 1e-4. For SVS, we adopt almost the same architecture as in TTS with different hyperparameters. The encoder adopts \(4\) FFT blocks, and we set the hidden channel to \(256\) in the encoder. The duration predictor consists of \(5\) convolutional layers to estimate the duration. The teacher model of SVS and CoMoSpeech are trained on a single GPU for \(250\)k steps with the AdamW [Loshchilov and Hutter, 2017] optimizer. The initial learning rate is 1e-3, and an exponential decay strategy with a decreasing factor of \(0.5\) every \(50\)k steps is adopted.

#### 4.1.3 Evaluation Metrics

We conduct both objective and subjective evaluations to measure the sample quality (MOS & FD) and the model inference speed (RTF & NFE):

* MOS (mean opinion score) [12] is used to measure the perceived quality of the synthesized audio, obtained by presenting 10 listeners with the test set and asking them to rate the quality of the synthesized audio on a scale of 1 to 5.
* FD (Fréchet distance)2 is similar to the Fréchet inception distance [10] in image generation. We use the Fréchet distance [11] in audio to measure the similarity between generated samples and target samples, utilizing the large-scale pretrained audio neural networks PANNs [13].
* RTF (real-time factor) determines how quickly the system can synthesize audio in real-time applications. It is defined as the ratio between the total time a speech system takes to synthesize a given amount of audio and the duration of that audio.
* NFE (number of function evaluations) measures the computational cost; it refers to the total number of times the denoiser function is evaluated during the generation process.

Footnote 2: [https://github.com/haoheliu/audidolm_eval](https://github.com/haoheliu/audidolm_eval)

### Performances on Text-to-Speech

We compare the above four metrics of the samples generated by the teacher model and CoMoSpeech with the following systems:

* GT, the ground-truth recordings.
* GT (Mel+HiFi-GAN), using the ground-truth mel-spectrogram to synthesize the waveform with the HiFi-GAN vocoder [13].
* FastSpeech 2 [14], synthesizing high-quality speech at a fast speed with FFT blocks and a variance adaptor.
* DiffGAN-TTS [11]3, applying an adversarially-trained model to approximate the denoising function for efficient speech synthesis. Footnote 3: [https://github.com/kconlee9420/DiffGAN-TTS](https://github.com/kconlee9420/DiffGAN-TTS)
* ProDiff [15]4, directly adopting progressive distillation [16] to TTS for fast generation speed. Footnote 4: [https://github.com/Rongjiehuang/ProDiff](https://github.com/Rongjiehuang/ProDiff)
* DiffSpeech [11]5, using an auxiliary acoustic model to generate a mel-spectrogram and injecting K steps of noise to obtain a noisy mel-spectrogram. Then, the mel-spectrogram is generated from the noisy mel-spectrogram by a DDPM iteratively.
Footnote 5: [https://github.com/MoonInTheRiver/DiffSinger/blob/master/docs/README-TTS.md](https://github.com/MoonInTheRiver/DiffSinger/blob/master/docs/README-TTS.md)

* Grad-TTS [15]6, using stochastic differential equation modelling for the mel-spectrogram and using the corresponding ODE solver for audio generation. Footnote 6: [https://github.com/huawei-noah/Speech-Backbones/tree/main/Grad-TTS](https://github.com/huawei-noah/Speech-Backbones/tree/main/Grad-TTS)

The evaluation results of TTS are shown in Table 1. For audio quality, our teacher model achieved the highest MOS, with Grad-TTS ranked second; this is because our teacher model is based on the design of Grad-TTS but adopts better design choices for the SDE. The proposed CoMoSpeech takes 3rd place among all methods, but it is substantially better than the other fast-inference methods ProDiff, DiffGAN-TTS, and FastSpeech 2. This demonstrates the effectiveness of both the consistency distillation and the teacher model selection. In addition, we also observe that our teacher model and CoMoSpeech achieve the best Fréchet distance scores among all methods, further demonstrating the proposed methods' superior performance in modeling the data distribution. Regarding inference speed, while FastSpeech 2 obviously achieves the best RTF, our CoMoSpeech also yields a very low RTF and is faster than all other baselines. Compared with the diffusion-based methods involving a large number of iterations, including DiffSpeech, Grad-TTS, and our teacher model, our method is about 50 times faster with similar or even better audio quality. In addition, our CoMoSpeech also achieves faster speed and better quality than methods for speeding up diffusion sampling, i.e., DiffGAN-TTS and ProDiff.

### Performances on Singing Voice Synthesis

To further examine the modeling capability of our methods, we compare the proposed SVS-version models, Teacher-SVS and CoMoSpeech-SVS, with several baselines on SVS:

* GT, the ground-truth recordings.
* GT (Mel+HiFi-GAN), synthesizing song samples using the HiFi-GAN (Kong et al., 2020) vocoder with ground-truth mel-spectrogram inputs.
* FFTSinger (Blaauw and Bonada, 2020), adopting FFT blocks to predict the mel-spectrogram and using the HiFi-GAN vocoder to synthesize audio.
* HiFiSinger (Chen et al., 2020), using a novel sub-frequency GAN (SF-GAN) to generate the mel-spectrogram. Since our aim is to compare the acoustic models, we replace the original vocoder with HiFi-GAN so that it is the same as in the other methods.
* DiffSinger (Liu et al., 2022)7, using a DDPM to generate the mel-spectrogram from a noisy mel-spectrogram. DiffSinger uses an auxiliary acoustic model to generate a prior mel-spectrogram and injects K steps of noise to obtain a noisy mel-spectrogram. Footnote 7: [https://github.com/MoonInTheRiver/DiffSinger/blob/master/docs/README-SVS-opencpop-cascade.md](https://github.com/MoonInTheRiver/DiffSinger/blob/master/docs/README-SVS-opencpop-cascade.md)

The results of SVS are shown in Table 2. As for audio quality, it can be seen that our CoMoSpeech and the other diffusion-model-based methods significantly surpass all non-iterative methods, including FFTSinger and HiFiSinger, on Fréchet distance and mean opinion score. Among the diffusion models, our teacher model achieves the best performance, and our student model CoMoSpeech performs close to it.
For inference speed, with one-step inference, the proposed CoMoSpeech maintains a speed similar to the non-iterative methods and significantly outperforms the other diffusion-model-based methods.

\begin{table} \begin{tabular}{l c c c c} \hline \hline METHOD & NFE & RTF (\(\downarrow\)) & FD (\(\downarrow\)) & MOS (\(\uparrow\)) \\ \hline GT & / & / & / & 4.778 \\ GT(Mel+HiFi-GAN) & / & / & 0.282 & 4.590 \\ FastSpeech 2 (Ren et al., 2020) & 1 & 0.0017 & 10.48 & 4.034 \\ DiffGAN-TTS (Liu et al., 2022) & 4 & 0.0084 & 8.310 & 3.889 \\ ProDiff (Huang et al., 2022) & 4 & 0.0097 & 3.503 & 3.374 \\ DiffSpeech (Liu et al., 2022) & 71 & 0.1030 & 2.349 & 4.103 \\ Grad-TTS (Popov et al., 2021) & 50 & 0.1694 & 1.882 & 4.487 \\ \hline Teacher & 50 & 0.1824 & **0.748** & **4.538** \\ **CoMoSpeech** & **1** & **0.0058** & 0.774 & 4.239 \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation results on LJSpeech for TTS.

\begin{table} \begin{tabular}{l c c c c} \hline \hline METHOD & NFE & RTF (\(\downarrow\)) & FD (\(\downarrow\)) & MOS (\(\uparrow\)) \\ \hline GT & / & / & / & 4.675 \\ GT(Mel+HiFi-GAN) & / & / & 0.882 & 4.588 \\ FFTSinger (Blaauw and Bonada, 2020) & 1 & 0.0032 & 7.867 & 2.769 \\ HiFiSinger (Chen et al., 2020) & 1 & 0.0034 & 6.340 & 3.156 \\ DiffSinger (Liu et al., 2022) & 60 & 0.1338 & 3.466 & 3.506 \\ \hline Teacher-SVS & 50 & 0.1282 & **3.162** & **4.050** \\ **CoMoSpeech-SVS** & **1** & **0.0048** & 3.571 & 3.794 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation Results on Opencpop for SVS.

In addition, we also compare the results between speaking voice and singing voice synthesis. Based on the two methods DiffSinger and DiffSpeech, which are basically the same, we can observe that the singing voice has a greater FD than the speaking voice, indicating that its data are more difficult to model. However, the proposed teacher model and CoMoSpeech still achieve the best performance on audio quality and inference speed, respectively. This shows the capability of CoMoSpeech for speech synthesis beyond speaking voices. In addition, we can observe that our CoMoSpeech-SVS is faster than CoMoSpeech because the denoiser function in SVS follows the WaveNet architecture, which is faster than the U-Net architecture used in TTS. This observation suggests that if a more efficient denoiser function that runs faster than the decoder in FastSpeech 2 can be designed, we can make CoMoSpeech even faster than non-iterative methods in future work.

### Ablation Studies of Consistency Distillation

In this part, we show the importance of consistency distillation. As shown in Figure 3, we visualize the differences in the results before and after consistency distillation, in other words, between the teacher model and our CoMoSpeech. At step \(t_{N}\), the denoiser function before distillation points to an over-smoothed mel-spectrogram, indicating a large distance from the ground-truth mel-spectrogram. However, we can observe that the results after distillation are significantly improved through the enrichment of many details, resulting in natural and expressive sounds.

\begin{table} \begin{tabular}{c|c c} \hline & \multicolumn{2}{c}{Fréchet Distance (\(\downarrow\))} \\ \cline{2-3} NFE & Teacher model & CoMoSpeech \\ \hline 1 & 7.526 & 0.774 \\ 2 & 4.558 & 0.762 \\ 4 & 2.477 & 0.784 \\ 10 & 1.197 & **0.725** \\ 50 & **0.748** & 0.850 \\ \hline \end{tabular} \end{table} Table 3: Comparison between CoMoSpeech and its teacher model with different sampling steps for TTS.
Figure 3: Effect of consistency distillation: compared to the teacher model's denoiser before consistency distillation, our CoMoSpeech can generate a high-quality mel-spectrogram instead of an over-smoothed one by calling the denoiser function only once.

In Table 3 and Table 4, we also conduct experiments using the Fréchet distance metric to further demonstrate the effectiveness of consistency distillation. For the teacher models of both the TTS and SVS tasks, the Fréchet distance decreases as the number of iteration steps increases. This trade-off between inference speed and sample quality has also been observed in other diffusion model methods. Surprisingly, we find that our CoMoSpeech achieves nearly the best performance in one step, and the best performance is achieved at 10 and 4 steps on TTS and SVS, respectively. However, the trade-off property seems to disappear after \(10\) steps. The issue that model performance improves over a few sampling steps and then declines slightly as the number of steps increases is called the sampling drift challenge [11][14][4][15]. We leave its exploration to future work.

## 5 Conclusions and Future Work

In this paper, we propose CoMoSpeech, a one-step acoustic model for speech synthesis based on the consistency model. With different conditional inputs, our CoMoSpeech can generate high-quality speech or singing voices by transforming a noisy mel-spectrogram into the predicted mel-spectrogram in a single step. However, there are still some limitations to our method. Since CoMoSpeech needs to be distilled from a teacher model for better performance, the pipeline for constructing a speech synthesis system becomes more complicated. Therefore, how to directly train CoMoSpeech without distillation from a teacher model is our next step to investigate. In addition, we show the capability of CoMoSpeech on the SVS task; even though CoMoSpeech achieves the best result among all the methods, there is still a significant gap between it and the ground-truth recordings.
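For reference, the preconditioning of Eqs. (8)-(9), the consistency distillation loss of Eq. (11), and the momentum update of Eq. (12) fit together as in the following minimal sketch; the tensor conventions and the teacher ODE-step signature are assumptions for illustration, not the authors' released training code:

```python
import torch

SIGMA_DATA, EPS, ALPHA = 0.5, 0.002, 0.95  # values from Eqs. (9) and (12)

def c_skip(t):
    return SIGMA_DATA**2 / ((t - EPS)**2 + SIGMA_DATA**2)

def c_out(t):
    return SIGMA_DATA * (t - EPS) / (SIGMA_DATA**2 + t**2) ** 0.5

def denoise(F, x, t, cond):
    """Preconditioned denoiser D_theta of Eq. (8)."""
    return c_skip(t) * x + c_out(t) * F(x, t, cond)

def distillation_step(F, F_ema, teacher_ode_step, x_next, t_next, t_cur, cond, opt):
    """One consistency-distillation update (Eqs. (11)-(12)).
    teacher_ode_step maps (x_{i+1}, t_{i+1}, t_i) to an estimate of x_i
    using the frozen teacher model's ODE solver (a placeholder here)."""
    with torch.no_grad():
        x_hat = teacher_ode_step(x_next, t_next, t_cur, cond)  # teacher step
        target = denoise(F_ema, x_hat, t_cur, cond)            # stop-grad target
    loss = ((denoise(F, x_next, t_next, cond) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():  # EMA update of the target network, Eq. (12)
        for p, p_ema in zip(F.parameters(), F_ema.parameters()):
            p_ema.mul_(ALPHA).add_((1.0 - ALPHA) * p)
    return loss.item()
```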
2301.02041
A Note On Square-free Commuting Probabilities of Finite Rings
It is shown that the commuting probability of a finite ring cannot be a fraction with square-free denominator, resolving a conjecture of Buckley and MacHale.
Andrew Mendelsohn
2023-01-05T12:25:06Z
http://arxiv.org/abs/2301.02041v2
# A Note On Square-free Commuting Probabilities of Finite Rings

###### Abstract

It is shown that the commuting probability of a finite ring cannot be a fraction with square-free denominator, resolving a conjecture of Buckley and MacHale.

## 1 Introduction

Let \(G\) be a finite group and \(U\) denote the uniform distribution. Let the commuting probability of \(G\), \(P_{G}\), be defined by

\[P_{G}:=Pr_{a,b\gets U(G)}(ab=ba).\]

An alternative characterisation is

\[P_{G}=\frac{1}{|G|^{2}}|\{(x,y)\in G^{2}:xy=yx\}|.\]

Joseph made the following conjectures, where \(\mathcal{G}\) is the set of all commuting probabilities of finite groups [1]:

A. All limit points of \(\mathcal{G}\) are rational.
B. The set \(\mathcal{G}\) is well ordered by \(>\).
C. The set \(\{0\}\cup\mathcal{G}\) is closed (that is, contains all its accumulation points).

In [2], Eberhard resolved conjectures A and B, and in [3] conjecture C was resolved. One can extend the definition of commuting probability to finite rings: set

\[P_{R}=\frac{1}{|R|^{2}}|\{(x,y)\in R^{2}:xy=yx\}|=\frac{1}{|R|^{2}}|\{(x,y)\in R^{2}:xy-yx=0\}|.\]

In [4], the following conjectures were made, where \(\mathcal{R}\) is the set of values of \(P_{R}\) over all finite rings \(R\):

1. \(1/n\not\in\mathcal{R}\) when \(n\in\mathbb{N}\) is square-free.
2. \(\mathcal{R}\subset\mathcal{G}\).
3. \(\mathcal{R}\) coincides with the set of values of \(P_{G}\) as \(G\) ranges over all finite nilpotent groups of class at most \(2\).
4. All limit points of \(\mathcal{R}\) are rational.
5. For each \(0<t\leq 1\), there exists \(\epsilon_{t}>0\) such that \(\mathcal{R}\cap(t-\epsilon_{t},t)=\emptyset\).
6. \(\mathcal{R}\) does not contain any of its accumulation points.

Note that conjectures 4, 5, and 6 correspond to Joseph's conjectures. Moreover, conjectures 4 and 5 would follow from the veracity of conjecture 2 or 3, since Eberhard showed that \(\mathcal{G}\) has rational limit points and is well ordered. In [5], conjecture 2 was in fact resolved, and thus so were conjectures 4 and 5. Moreover, conjecture 3 was partially resolved: the authors obtained that \(\mathcal{R}\) is a subset of the set of values of \(P_{G}\) as \(G\) ranges over all finite nilpotent groups of class at most 2. We conclude that conjectures 1, 3, and 6 are open. In this work, we resolve conjecture 1.

## 2 Preliminaries and Prior Results

Definition 1: Two finite groups \(G,H\) are called _isoclinic_ if \(G/Z(G)\cong H/Z(H)\) and \(G^{\prime}\cong H^{\prime}\), and if these isomorphisms are compatible with the commutator maps \(G/Z(G)\times G/Z(G)\to G^{\prime}\) and \(H/Z(H)\times H/Z(H)\to H^{\prime}\) (that is, the corresponding diagram commutes).

Isoclinism preserves nilpotency class and commuting probability [10]. A _stem group_ is a group of minimal order in a given isoclinism class. It is well known that if \(G\) is a stem group, then \(Z(G)\leq G^{\prime}\). For more on isoclinism, see [11].

Below we state existing results from the literature that we will need.

Lemma 1 ([5]): \(\mathcal{R}\subset\mathcal{G}_{n,2}\), where \(\mathcal{G}_{n,2}\) is the set of commuting probabilities of all finite nilpotent groups of class at most 2.

This statement is proved as follows: let \(R\) be a finite ring. We can turn \(R\oplus R\) into a nilpotent ring of class 3 by endowing it with the multiplication rule \((a,x)(b,y)=(0,ab)\). This ring can be turned into a nilpotent group \(G_{R}\) of class at most 2 by endowing it with the binary operation \(a\circ b=a+b+ab\). Both of these transformations preserve the commuting probability. Thus the values of \(\mathcal{R}\setminus\{1\}\) are a subset of the values of \(P_{G}\), running over nilpotent groups \(G\) of class equal to 2.
Note that if \(R\) has size \(n\), then the resulting group \(G_{R}\) has order \(n^{2}\), and if \(R\) is noncommutative then the resulting group is nonabelian.

Lemma 2 ([7]): \(P_{G}=\frac{1}{\left\lvert G^{\prime}\right\rvert}\left(1+\frac{\left\lvert G^{\prime}\right\rvert-1}{\left\lvert G:Z(G)\right\rvert}\right)\) if and only if \(G\) is nilpotent.

Lemma 3 ([6]): If \(G\) is a nilpotent group, then \(P_{G}\neq\frac{1}{p}\) for any prime \(p\).

## 3 Results

Theorem 1: \(\frac{1}{p}\not\in\mathcal{R}\) for any prime \(p\).

Proof.: By Lemma 1, \(\mathcal{R}\) is contained within the set of commuting probabilities of finite nilpotent groups of class at most \(2\). By Lemma 3, this latter set does not contain \(\frac{1}{p}\) for any prime \(p\).

Denote by \(\mathcal{R}_{p}\) the set of commuting probabilities of rings whose order is a power of the prime \(p\).

**Proposition 1**.: \(\frac{1}{n}\not\in\mathcal{R}_{p}\) for any prime \(p\) and any \(n\in\mathbb{N}_{>1}\).

Proof.: By Lemma 1, we need only consider commuting probabilities of finite nilpotent groups of class at most \(2\). By Lemma 2, we know the commuting probability of such a group in terms of its derived subgroup and center. Suppose that for some \(n\in\mathbb{N}_{\geq 2}\) we have

\[\frac{1}{|G^{\prime}|}\left(1+\frac{|G^{\prime}|-1}{|G:Z(G)|}\right)=\frac{1}{n}.\]

By the construction of [5] considered above, without loss of generality let \(|G|=p^{e}\) for some even positive integer \(e\). Then \(|Z(G)|=p^{f}\) and \(|G^{\prime}|=p^{g}\) with \(0<g\leq f<e\) (since \(G\) has class at most \(2\)). Then

\[\frac{1}{n} =p^{-g}\left(1+\frac{p^{g}-1}{p^{e-f}}\right)=p^{-g}+\frac{p^{g}-1}{p^{e-f+g}} \tag{1}\]
\[=\frac{p^{e-f}+p^{g}-1}{p^{e-f+g}}. \tag{2}\]

For this to hold, some power \(p^{h}\) with \(h>0\) must divide the numerator; but this cannot happen, for if it did, one would have \(p^{e-f}+p^{g}-1=kp\) for some nonzero integer \(k\), whence \(-1\equiv 0\pmod{p}\), a contradiction.

**Theorem 2**.: \(\frac{\ell}{n}\not\in\mathcal{R}\) for any square-free \(n\in\mathbb{N}_{>1}\) and \(\ell<n\) with \(\gcd(\ell,n)=1\).

Proof.: Any finite ring can be turned into a nilpotent group of class at most \(2\) such that the commuting probability of the ring equals the commuting probability of the group. The construction (outlined above) turns a commutative ring into a group of class \(1\), and a noncommutative ring into a nonabelian group of class at most \(2\), therefore of class equal to \(2\). The order of the group is the square of the order of the ring, so the Sylow subgroups of the group have order at least the square of a prime. Since the group is nilpotent, it can be written as a product of its Sylow subgroups, which are all of class at most \(2\), and the commuting probability of the group is the product of the commuting probabilities of its Sylow subgroups. Thus it remains to analyse the equation

\[\frac{\ell}{n}=\prod_{i=1}^{m}\frac{p_{i}^{e_{i}-f_{i}}+p_{i}^{g_{i}}-1}{p_{i}^{e_{i}-f_{i}+g_{i}}},\]

for \(m>1\) and the \(p_{i}\) distinct, where the \(e_{i}\), \(f_{i}\), and \(g_{i}\) are as before. Via isoclinism, we may replace \(G_{R}\) by a class-two nilpotent (stem) group \(G\) with identical commuting probability and minimal order. Thus we may assume that \(Z(G)=G^{\prime}\) (for a stem group \(Z(G)\leq G^{\prime}\), while class two gives \(G^{\prime}\leq Z(G)\); note also that isoclinism preserves nilpotency class and \(G_{R}\) has class two), and moreover that none of the Sylow subgroups are abelian.
The above equality simplifies to

\[\frac{\ell}{n}=\prod_{i=1}^{m}\frac{p_{i}^{e_{i}^{\prime}-f_{i}^{\prime}}+p_{i}^{f_{i}^{\prime}}-1}{p_{i}^{e_{i}^{\prime}}},\]

where the exponents \(e_{i}^{\prime},f_{i}^{\prime}\) correspond to the group \(G\). We now proceed by induction on the number of prime factors of \(|G_{R}|\), denoted \(m\). By Lemma 14 of [9], if \(P_{G_{R}}=\frac{\ell}{n}\) in lowest terms, the prime factors of \(n\) are precisely the prime factors of \(|G_{R}|\). If \(m=1\), it is known that \(\frac{\ell}{n}\not\in\mathcal{R}_{q}\) for any prime \(q\) and square-free \(n\) (in fact, we know this to hold for \(m\leq 69\) by Theorem 9 of [4]). Suppose the statement is true up to \(m=k-1\), and consider the case \(m=k\); suppose, for a contradiction, that the commuting probability is equal to \(\frac{\ell}{n}\) for some square-free integer \(n\) and \(\ell\leq n\) with \(\gcd(\ell,n)=1\), and without loss of generality that \(n\) has prime factors equal to the set of \(p_{i}\), \(i=1,...,k\):

\[\frac{\ell}{n}=\prod_{i=1}^{k}\frac{p_{i}^{e_{i}^{\prime}-f_{i}^{\prime}}+p_{i}^{f_{i}^{\prime}}-1}{p_{i}^{e_{i}^{\prime}}}.\]

Rearranging, we obtain

\[\frac{\ell\cdot p_{k}^{e_{k}^{\prime}}}{n\cdot(p_{k}^{e_{k}^{\prime}-f_{k}^{\prime}}+p_{k}^{f_{k}^{\prime}}-1)}=\prod_{i=1}^{k-1}\frac{p_{i}^{e_{i}^{\prime}-f_{i}^{\prime}}+p_{i}^{f_{i}^{\prime}}-1}{p_{i}^{e_{i}^{\prime}}}.\]

Writing the left-hand side in lowest terms, we have

\[\frac{\ell\cdot p_{k}^{e_{k}^{\prime\prime}}}{n^{\prime}\cdot(p_{k}^{e_{k}^{\prime}-f_{k}^{\prime}}+p_{k}^{f_{k}^{\prime}}-1)}=\prod_{i=1}^{k-1}\frac{p_{i}^{e_{i}^{\prime}-f_{i}^{\prime}}+p_{i}^{f_{i}^{\prime}}-1}{p_{i}^{e_{i}^{\prime}}},\]

where \(n^{\prime}\) is not divisible by \(p_{k}\). We have a commuting probability on the right-hand side with \(k-1\) prime factors; so by the induction hypothesis, the denominator of the left-hand side has no square factors. But we also have \(p_{k}^{e_{k}^{\prime}-f_{k}^{\prime}}+p_{k}^{f_{k}^{\prime}}-1\) in the denominator of the left-hand side, which is not divisible by \(p_{k}\) and, by Lemma 14 of [9], must have prime factors equal to the set of \(p_{i}\) for \(i=1,...,k-1\); moreover, there can be no cancellation between these factors and \(\ell\), by assumption on \(\ell\). But then for at least one index \(j\), \(n^{\prime}\cdot(p_{k}^{e_{k}^{\prime}-f_{k}^{\prime}}+p_{k}^{f_{k}^{\prime}}-1)\) has a prime factor \(p_{j}\) with multiplicity at least two, which is a contradiction.

Remark 1: Since \(\frac{1}{p}\) is an accumulation point of \(\mathcal{R}_{p}\), \(\frac{1}{n}\) is an accumulation point of \(\mathcal{R}\) for all \(n\). The above result thus means that many accumulation points of \(\mathcal{R}\) are not contained in \(\mathcal{R}\). As well as resolving the first conjecture stated at the beginning of this note, the result also makes progress on the sixth conjecture.
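As a concrete illustration of the definitions above, \(P_{R}\) can be computed by brute force for small rings. The following sketch (illustrative only, not part of the argument) does this for the noncommutative ring of \(2\times 2\) upper-triangular matrices over \(\mathbb{F}_{p}\):

```python
from itertools import product

def commuting_probability_upper_triangular(p):
    """Brute-force P_R for the ring of 2x2 upper-triangular matrices over F_p.
    Elements are triples (a, b, c) representing [[a, b], [0, c]]."""
    R = list(product(range(p), repeat=3))

    def mul(x, y):
        (a, b, c), (d, e, f) = x, y
        # [[a,b],[0,c]] * [[d,e],[0,f]] = [[ad, ae+bf], [0, cf]]  (mod p)
        return (a * d % p, (a * e + b * f) % p, c * f % p)

    commuting = sum(1 for x in R for y in R if mul(x, y) == mul(y, x))
    return commuting, len(R) ** 2

pairs, total = commuting_probability_upper_triangular(2)
print(pairs, total, pairs / total)
```

For \(p=2\) this counts \(40\) commuting pairs out of \(64\), giving \(P_{R}=5/8\); the reduced denominator \(8\) is not square-free, consistent with Theorem 2.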
2310.14162
Augmenting End-to-End Steering Angle Prediction with CAN Bus Data
In recent years, end-to-end steering prediction for autonomous vehicles has become a major area of research. The primary method for achieving end-to-end steering was to use computer vision models on a live feed of video data. However, to further increase accuracy, many companies have added data from light detection and ranging (LiDAR) and/or radar sensors through sensor fusion. However, the addition of lasers and sensors comes at a high financial cost. In this paper, I address both of these issues by increasing the accuracy of the computer vision models without the increased cost of using LiDAR and/or sensors. I achieved this by improving the accuracy of computer vision models through fusing CAN bus data, a vehicle communication protocol, with video data. CAN bus data is a rich source of information about the vehicle's state, including its speed, steering angle, and acceleration. By fusing this data with video data, the accuracy of the computer vision model's predictions can be improved. When I trained the model without CAN bus data, I obtained an RMSE of 0.02492, while the model trained with the CAN bus data achieved an RMSE of 0.01970. This finding indicates that fusing CAN bus data with video data can reduce the computer vision model's prediction error by 20%, with some models decreasing the error by 80%.
Rohan Gupta
2023-10-22T03:24:53Z
http://arxiv.org/abs/2310.14162v1
# Augmenting End-to-End Steering Angle Prediction with CAN Bus Data

###### Abstract

In recent years, end-to-end steering prediction for autonomous vehicles has become a major area of research. The primary method for achieving end-to-end steering was to use computer vision models on a live feed of video data. However, to further increase accuracy, many companies have added data from light detection and ranging (LiDAR) and/or radar sensors through sensor fusion. However, the addition of lasers and sensors comes at a high financial cost. In this paper, I address both of these issues by increasing the accuracy of the computer vision models without the increased cost of using LiDAR and/or sensors. I achieved this by improving the accuracy of computer vision models through fusing CAN bus data, a vehicle communication protocol, with video data. CAN bus data is a rich source of information about the vehicle's state, including its speed, steering angle, and acceleration. By fusing this data with video data, the accuracy of the computer vision model's predictions can be improved. The data was collected with a dashboard-mounted camera as well as a computer connected to the CAN bus wires. I then trained NVIDIA's DAVE-2 model on the video data and CAN bus data and compared this model to one trained without CAN bus data. The metric I chose to measure success in the experiment is the Root Mean Square Error (RMSE). When I trained the model without CAN bus data, I obtained an RMSE of 0.02492, while the model trained with the CAN bus data achieved an RMSE of 0.01970. This finding indicates that fusing CAN bus data with video data can reduce the computer vision model's prediction error by 20%. Furthermore, additional models were tested to determine if they also had a decrease in error when fused with CAN bus data. I found that large computer vision models performed better with the addition of CAN bus data. For example, RESNET50's error decreased by 52%, EfficientNetB7's error decreased by 70%, VGG19's error decreased by 25%, NasNetLarge's error decreased by 80%, and RESNET152V2's error decreased by 70%. These results suggest that fusing CAN bus data with video data can reduce the error rate of computer vision models, and they show that CAN bus data can improve accuracy while lowering financial costs, because CAN bus data is a relatively inexpensive source of data that can be collected using off-the-shelf hardware. I believe that training end-to-end steering prediction models with both CAN bus data and video data has the potential to improve the safety and performance of autonomous vehicles.

**Index Terms**: CAN bus, end-to-end, self-driving

## 1 Introduction

The idea of self-driving cars was first conceived in the late 1930s, but it was not until the 1980s that they began to be developed at Carnegie Mellon University. When neural networks were first introduced into autonomous vehicles, the main source of data was cameras. Over time, computers became more powerful and neural networks became larger. With more computational power, autonomous driving models were able to handle other sources of data, such as Radio Detection and Ranging (RADAR) and Light Detection and Ranging (LiDAR). The addition of sensors improved accuracy and safety; however, the sensors also increased the cost of autonomous vehicles, which led many companies to balance cost and accuracy.
Companies like Tesla aim to make self-driving cars highly safe and more affordable, as evidenced by the highly marketed Tesla Model 3 and the upcoming Tesla hatchback. In contrast, companies like Google value accuracy over price and continue to outfit their cars with the most up-to-date and expensive sensors available. The aim of this paper is to research methods that allow a balance of cost and accuracy. Controller Area Network (CAN) bus data can be added to a typical computer vision model to decrease the error rate while maintaining costs.

As cars became more popular, regulations related to the safety of cars also increased. One such regulation is the On-Board Diagnostic II (OBDII) standard, which introduced the CAN bus protocol. The CAN bus allows a car's Electronic Control Units (ECUs), which are connected to the different sensors in the car, to communicate without complex wiring through the use of high and low voltage signals. Due to the OBDII regulations, all gas-powered vehicles are required to have an OBDII port, and the majority of electric vehicles also have OBDII ports. This allows external hardware to be connected to the car in order to collect diagnostic packets of information from the CAN bus wires. As a result, the process of collecting data from a car's onboard sensors is fairly simple. The data from these sensors can then be added to the end-to-end steering angle prediction model to increase its accuracy. While sensor fusion techniques exist to combine computer vision models with other types of data, no widely used technique uses CAN bus data. The contribution this paper makes is adding CAN bus data to machine learning models to decrease error.

## 2 Related Work

Research on steering angle prediction has been divided. Researchers with access to LiDAR and RADAR are proponents of these sensors and use them for regular steering prediction. However, researchers without access to expensive sensors are proponents of end-to-end steering prediction [1]. The difference between these steering prediction models, regular steering prediction and end-to-end steering prediction, is that in the latter, a single model is trained to predict the steering angle, while in the former, many smaller models are trained on distinct parts of the data, like the weather conditions or the road lanes. This is shown in Figure 1. NVIDIA made a significant contribution to the field of end-to-end steering prediction with the publication of the DAVE-2 model [3]. DAVE-2 is a relatively simple Convolutional Neural Network (CNN) model that predicts the steering angle of a vehicle based on three camera angles: front, left, and right. NVIDIA was able to achieve low error with the model, and they also released the Udacity dataset that they trained their model on, which has allowed other researchers to optimize and develop more advanced models [4, 5]. Many researchers have written papers that try to determine the model that best predicts the steering angle. One paper compared four different models to determine which predicts the best end-to-end steering angle: Predict 0, 3D LSTM, RESNET 50, and NVIDIA's DAVE-2 model [6]. It found that RESNET 50 had the lowest loss. However, creating models and determining which provides the best accuracy is very labor-intensive. This paper therefore uses an alternate method to decrease loss: it proposes adding CAN bus data to the model, which results in the greatest decrease in loss.
Though there are no other end-to-end steering papers that fuse both CAN bus data and image data, there are other papers that fuse image data with LiDAR and/or RADAR data [7]. That paper was able to decrease the error by fusing LiDAR and image data together, compared to models trained on only image data or only LiDAR data. In recent years, there has been a development in implementing steering angle prediction models together with other modules, like course planning. Though this paper only covers the steering angle part, a paper called DiffStack is able to combine modules like steering angle prediction and course planning while still training them under the same neural network [8].

## 3 Methodology

### Dataset

Initially, the Udacity dataset provided by NVIDIA's DAVE-2 system was considered. However, as it does not host data from the Controller Area Network (CAN) bus, data was collected locally instead. This was important because the CAN bus data provides valuable information about the car's state, such as the engine speed, throttle position, and brake pedal position. The training data collected was diverse in terms of lighting, incline, and traffic conditions. This was done to ensure that the model would be able to generalize to different driving situations. Most of the data collected was on highways between San Jose and San Francisco, California. This was because these roads are representative of the types of roads that the model would be used on. In addition, other types of roads included in the data were driveways, residential streets, and two-lane roads. This was done to ensure that the model would be able to handle a variety of driving conditions.

As seen in Figure 2, a Tesla Model 3 was used to collect the data. Tesla provides a "dashcam" mode that allows a user to connect a storage device to the Tesla in order to obtain a video feed from the on-board cameras. A 2TB hard drive (HDD) was connected to the Tesla through a USB-to-HDD cable. To obtain the CAN bus data, an OBD2 adapter was connected to a group of OBD2 cables behind the center console of the car. Then, an OBDLink LX was connected to the adapter. Finally, the "Scan My Tesla" app was downloaded to capture the OBD2 data. When driving the Tesla for data collection, the hard drive recorded the video data, and a phone with the "Scan My Tesla" app collected the CAN bus data. Most of the recorded steering angles were close to zero; this is to be expected, as the car was driving on mostly straight roads. The video data was recorded at 36 frames per second, and there were four camera angles: center front, center back, right passenger area, and left driver area. This was done to capture a complete view of the car's surroundings. The CAN bus data refreshed every millisecond, and there were 248 parameters.

### Preprocessing

Due to the two different data streams, each stream had to be processed and then synchronized to the same time frame. The CAN bus data originally had 248 parameters, but most of these were unnecessary, as they were not updated or had minimal value. The CAN bus data was trimmed down to 5 parameters, as shown in Table 1. Each parameter was chosen for a different reason. For the battery parameters, it was concluded that the power, voltage, and current of the battery could plausibly correlate with the car's speed. The consumption and sleep current were also chosen because they were updated regularly, which could help give the model a richer dataset. The CAN bus data was then compressed by a factor of 25 to match the refresh rate of the video.
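A minimal sketch of this CAN bus preprocessing step is given below (using pandas; the column names follow Table 1, and the file name is a placeholder, not the author's actual script):

```python
import pandas as pd

# Columns kept from the original 248 CAN bus parameters (see Table 1).
KEEP = ["Voltage", "Current", "Power", "Steering Speed", "Speed"]

def downsample_can(csv_path, factor=25):
    """Trim the CAN bus log to the chosen parameters and average every
    `factor` consecutive rows so the rate matches the 36 fps video feed."""
    df = pd.read_csv(csv_path)[KEEP]
    return df.groupby(df.index // factor).mean()
```

Averaging within each window, rather than simply dropping rows, preserves the information in the fast-updating battery parameters while matching the video frame rate.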
For the video data, the right, left, and back camera angles were discarded and only the front camera angle was used, as there was sufficient data from one camera. The front camera data consisted of multiple 1-minute videos, so they were joined together using FFMPEG. Then each frame of the new video was turned into an image. The video and the CAN bus data were synchronized together with an error of a couple of milliseconds by finding a landmark where there was no measured movement. When looking for landmarks, frames where it was evident there was no movement were matched to entries in the CAN bus data where the speed was zero, and that point was used to sync the two streams together. An issue with the video feed was encountered, as the video data from the Tesla needed to be saved every 20 minutes. This required clicking on an icon to save, and the resulting save took 203 seconds. During this time, CAN bus data was still being captured, which caused a desynchronization between the CAN bus data and the video data.

Figure 1: _End-to-end breakdown [2]_

Figure 2: _Data Collection Process_

To combat this, each separate save was grouped into an individual group:

* Group 1 is mainly residential street driving with a bit of highway driving
* Group 2 is only highway driving
* Group 3 is half highway driving and half residential driving
* Group 4 is only highway driving
* Group 5 is mainly residential street driving with a bit of highway driving

### Model Though models are not the focus of this paper, multiple different models were used in order to provide quantitative evidence on how CAN bus data diminishes loss. The first model used was NVIDIA's DAVE-2 model, visualized in Figure 3. Though fairly simple when compared to the other models the data was trained on, the NVIDIA model is shown to have the lowest error when it comes to end-to-end steering with computer vision. More details about this model can be found in NVIDIA's paper [3]. The two RESNET models used were RESNET50 [9] and RESNET152V2 [10]. The RESNET models were originally trained on images, which allows them to achieve lower error when trained on different datasets. Each layer consists of residual blocks, allowing RESNET to maintain consistent performance compared to alternative models at the time. In addition, the model includes shortcut connections that carry information from previous layers, resulting in much better accuracy and performance. RESNET was groundbreaking in that it did not lose accuracy with an increasing number of layers. Several other image models similar to RESNET were also trained on the data, including NASNETLARGE [11], VGG [12], and EfficientNet [13]. When training the models on the CAN bus data, the data was first piped into a Multi-Layer Perceptron (MLP). The MLP is a type of neural network that is well-suited for learning from numerical data such as the CAN bus data collected from the car. The output of the MLP was then concatenated with the output of the CNN. The concatenated data was then passed through two dense layers and then a final dense layer to produce the output: the predicted steering angle. ## 4 Results and Discussion ### Metrics Two different loss functions were used: Mean Square Error (MSE) and Root Mean Square Error (RMSE). RMSE is the square root of MSE. MSE was used when training the models, while RMSE was used when comparing them.
Accuracy was not used to compare the models because, to determine accuracy, the model would have to predict the exact Y value; since the steering angle is so precise, it is difficult to predict to that level of precision. Instead, end-to-end steering prediction uses root mean square error, which determines how far the predicted value is from the actual value. Root mean square error is hence a more optimal metric for this model. \[\text{MSE}=\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_{i}-y_{i})^{2} \tag{1}\] \[\text{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_{i}-y_{i})^{2}} \tag{2}\] The Adam optimizer was used with a learning rate of 0.0001.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Voltage** & **Current** & **Power** & **Steering Speed** & **Speed** \\ \hline 400.368000 & -110.42000 & -44.208831 & 2.583333 & 35.790981 \\ \hline 400.584000 & -112.580000 & -45.097740 & 0.000000 & 35.592142 \\ \hline 400.810000 & -113.600000 & -45.532016 & 2.750000 & 35.492723 \\ \hline 401.810000 & -119.300000 & -47.935933 & 6.750000 & 34.647658 \\ \hline 401.620000 & -118.950000 & -47.772712 & 8.500000 & 34.597948 \\ \hline 401.847500 & -119.125000 & -47.870120 & 9.250000 & 34.548238 \\ \hline \end{tabular} \end{table} Table 1: _Sample Preprocessed CAN Bus Data_

Figure 3: _NVIDIA model Visualized_

Figure 4: _RESNET Model Visualized_

### Results When comparing the results of the models trained with and without the CAN bus data, as shown in Table 2, there is variance in the decrease of error. For example, RESNET152V2's validation error dropped by about 70%, while NVIDIA's validation error dropped by only 39%. The discrepancy in the decrease in error is most likely due to the structure of each model. For example, the NVIDIA model was made to work specifically on image data for steering angle prediction, while the RESNET152V2 model was made to work on a variety of datasets, which could be why it performed so well with the CAN bus data. Overall, it is still shown that supplementing existing computer vision models with CAN bus data decreases the error. The results of the study show that adding CAN bus data to the training data reduces the loss of the model. This means that the model is able to learn more accurately when it is trained with CAN bus data, likely because CAN bus data provides more information about the vehicle's behavior than the image data alone. ### Discussion During the collection process, many errors were encountered. Initially, multiple pieces of software such as Inpa had to be downloaded in order to interface with the CAN bus. Even though data could be collected from the CAN bus, the steering angle could not be accessed. This led to significant time spent troubleshooting, which eventually led to discarding the approach of gathering CAN bus data through the OBD2 port, as it was deemed that a pin in the OBD2 port was most likely broken, which would lead to some of the data not showing up when extracting the CAN bus data. As an alternative, CAN bus data was acquired by man-in-the-middling the CAN bus wires: an OBD2 adapter was used to extract the CAN bus data straight from the wires. Finally, setting up the CAN bus extraction process was time consuming; however, the benefits outweigh the costs. ## 5 Conclusions and Future Work To increase the robustness of the model in real world applications, it is important to collect data from a variety of weather conditions and seasons.
This will allow the model to learn how to perform well in different environments, such as sunny, rainy, and snowy days, as well as different seasons. The model will be able to handle different lighting conditions, road surfaces, and other factors that can affect its performance. In addition to collecting data from a variety of weather conditions and seasons, data augmentation can also improve the performance of the model. Data augmentation is a technique that artificially increases the size of the dataset by creating new data points from existing data. This can be done by applying transformations to the data, such as adding noise, cropping the images, or rotating them. It helps to prevent overfitting by providing the model with more data to learn from, and it allows the model to generalize to new situations by making it more robust to variations in the data. The performance of the model can also be improved by breaking down the CAN bus data into individual components and training each component on the same model. This will allow us to see which components of the CAN bus data are most important for predicting the steering angle. This information can be used to improve the model's accuracy and make it more robust to changes in the data.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{**Without CAN Bus Data**} & \multicolumn{2}{c|}{**With CAN Bus Data**} \\ \cline{2-5} & **Training Error** & **Validation Error** & **Training Error** & **Validation Error** \\ \hline NVIDIA & 0.01063 & 0.02492 & 0.009990 & 0.01970 \\ \hline RESNET50 & 0.03464 & 0.03464 & 0.01230 & 0.01659 \\ \hline EfficientNetB7 & 0.04 & 0.04 & 0.01584 & 0.01197 \\ \hline VGG19 & 0.02299 & 0.01940 & 0.01080 & 0.01477 \\ \hline NASNetLarge & 0.01386 & 0.08366 & 0.009690 & 0.01631 \\ \hline RESNET152V2 & 0.01589 & 0.02035 & 0.006115 & 0.006083 \\ \hline \end{tabular} \end{table} Table 2: Model RMSE with/without CAN Bus Data

Figure 5: NVIDIA model with CAN bus input

Additionally, the model can be trained unsupervised on the CAN bus data. This means that the model will not be provided with any labels for the steering angle; it will have to learn to predict the steering angle from the CAN bus data on its own. This is a more challenging task, but it can lead to a more accurate and robust model. Unsupervised learning is a valuable technique for self-driving cars because it allows the model to learn from unlabeled data. This is important because there is a lot of unlabeled data available, such as data from sensors that are not used for steering. Unsupervised learning can help the model to learn from this data and improve its performance in new situations. This would be a valuable addition to the model, as it would allow it to learn from a much larger dataset, making it more robust to noise and other factors that can affect its performance. Finally, other sources of data, such as GPS and online weather data, could be added to the model. This would allow the model to learn from a variety of sources, which would further improve its performance. GPS data can provide information about the car's location, which can be used to improve the model's accuracy in different parts of the world. For example, the model can learn to take into account incoming turns, which would help it predict future steering angles. Online weather data can provide information about the current weather conditions, which can be used to improve the model's performance in different weather conditions.
For example, the model can learn to adjust its steering angle for slippery roads or poor visibility. By adding additional data sources, the model can learn to take more factors into account when making predictions, allowing it to be more accurate in a variety of situations. Overall, there is still a lot of room for improvement, and these are just a few of the ways that the model could be enhanced. By collecting more data, using different models, and adding more sources of data, the model could be made more robust and accurate. The more data the model has to learn from, the better it will be able to make predictions. This is why it is important to collect data from a variety of sources, including different weather conditions, seasons, and driving environments.
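For reference, the two-stream architecture described in the Model section (an MLP over the CAN bus parameters, concatenated with a CNN over the front-camera frames) can be sketched as follows. This is a minimal illustration assuming Keras; the input shapes and layer widths are placeholders, not the exact configuration used in this paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# CNN branch for the front-camera frames (input size is an assumption).
img_in = layers.Input(shape=(66, 200, 3), name="image")
x = layers.Conv2D(24, 5, strides=2, activation="relu")(img_in)
x = layers.Conv2D(36, 5, strides=2, activation="relu")(x)
x = layers.Conv2D(48, 5, strides=2, activation="relu")(x)
x = layers.Flatten()(x)

# MLP branch for the five CAN bus parameters of Table 1.
can_in = layers.Input(shape=(5,), name="can_bus")
y = layers.Dense(32, activation="relu")(can_in)
y = layers.Dense(32, activation="relu")(y)

# Concatenate the two streams, then two dense layers and a final
# dense layer producing the predicted steering angle.
z = layers.Concatenate()([x, y])
z = layers.Dense(100, activation="relu")(z)
z = layers.Dense(50, activation="relu")(z)
out = layers.Dense(1, name="steering_angle")(z)

model = Model([img_in, can_in], out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
```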
2303.03526
Can observations of 511 keV line from the M31 galaxy shed light on the AGN jet composition?
Positron annihilation line at 511~keV is a known component of the gamma-ray diffuse emission. It is believed to be produced in the Galaxy, but there could be possible extragalactic contribution as well. E.g., positrons can be produced in jets of active galactic nuclei (AGN) and after that accumulate and gradually annihilate in hot gaseous halos around galaxies. In this work we test this hypothesis in application to an individual object -- the Andromeda galaxy (M31) which is close and has a supermassive black hole in its center, which powered an AGN before. We compute the growth history of the supermassive black hole in M31, relate it to the evolution of jet luminosity and estimate the positron content in its halo. We calculate the 511~keV photon flux due to positron annihilation which should be observed at Earth and find the value of around $10^{-4}$ photon cm$^{-2}$s$^{-1}$. It is very close to the observational limits ($<10^{-4}$photon cm$^{-2}$s$^{-1}$) set by the INTEGRAL/SPI in the assumption of the point source, so further observations would be able to constrain leptonic models of the jets and propagation of cosmic rays in the circumgalactic medium of large spiral galaxies.
B. A. Nizamov, M. S. Pshirkov
2023-03-06T22:24:48Z
http://arxiv.org/abs/2303.03526v1
# Can observations of 511 keV line from the M31 galaxy shed light on the AGN jet composition? ###### Abstract Positron annihilation line at 511 keV is a known component of the gamma-ray diffuse emission. It is believed to be produced in the Galaxy, but there could be possible extragalactic contribution as well. E.g., positrons can be produced in jets of active galactic nuclei (AGN) and after that accumulate and gradually annihilate in hot gaseous halos around galaxies. In this work we test this hypothesis in application to an individual object - the Andromeda galaxy (M31) which is close and has a supermassive black hole in its center, which powered an AGN before. We compute the growth history of the supermassive black hole in M31, relate it to the evolution of jet luminosity and estimate the positron content in its halo. We calculate the 511 keV photon flux due to positron annihilation which should be observed at Earth and find the value of around \(10^{-4}\) photon cm\({}^{-2}\)s\({}^{-1}\). It is very close to the observational limits (\(<10^{-4}\)photon cm\({}^{-2}\)s\({}^{-1}\)) set by the INTEGRAL/SPI in the assumption of the point source, so further observations would be able to constrain leptonic models of the jets and propagation of cosmic rays in the circumgalactic medium of large spiral galaxies. astroparticle physics - galaxies: active - galaxies: jets - gamma-rays: galaxies **B.A. Nizamov\({}^{1}\)1, M.S. Pshirkov\({}^{1,2}\)** \({}^{1}\) Sternberg Astronomical Institute, Lomonosov Moscow State University, Universitetsky pr., 13, Moscow, 119234, Russia \({}^{2}\) Lebedev Physical Institute, Pushchino Radio Astronomy Observatory Footnote 1: e-mail: [email protected] ## 1 Introduction Super-massive black holes (SMBHs) in active galactic nuclei (AGNs) can launch powerful relativistic jets during accretion phases. These jets carry energy in particles and fields from the central engine to the surrounding medium. The exact particle composition of jets is not perfectly known: a jet could consist either of ions (mainly protons) and electrons, with the numbers of protons and electrons being close, or, alternatively, it could be pair-dominated, i.e. with a large fraction of \(e^{+}e^{-}\) pairs and \(n_{e^{+}}\approx n_{e^{-}}\gg n_{p}\), where \(n_{e^{+}},n_{e^{-}},n_{p}\) are the number densities of positrons, electrons and protons respectively. Relativistic leptons - both electrons and positrons - actively participate in radiative processes in the jet via synchrotron and inverse Compton emission in magnetic fields and background photon fields respectively, thus allowing us to observe the jet in different frequency ranges from radio to gamma. Eventually, these leptons make their way into the surrounding medium - interstellar (ISM) and, later, circumgalactic (CGM). If the jets are pair-dominated then the AGNs could possibly be one of the main sources of positrons in the Universe. On much smaller scales it could also be the case in the Galaxy, where jets of microquasars, powered by accretion onto stellar-mass BHs, could produce a significant or even the major fraction of the galactic positrons [1; 2]. The pair content has been studied in the literature via modelling of the radiative properties of jets. Zdziarski et al.
[3], [4], [5] used synchrotron self-Compton spectral fits to deduce the electron flow in the jet and hard X-ray data to find the pair production rate at its base, and the two quantities appeared to correspond to each other. In the works [6] and [7] the pair content was estimated from the overall energetics of jets. A more direct observational test was suggested by Ghisellini [8], who proposed that pair-abundant jet bases should have enhanced brightness at \(\sim 1\) MeV even for misaligned sources. An interesting scenario is discussed in [9]: if positrons are produced in powerful extragalactic sources such as AGNs and escape to the intergalactic medium, they can survive there practically indefinitely until they accrete onto galaxies, e.g. the Milky Way. The authors do not estimate the production rate of positrons, but they find that a positron to electron density ratio in the Universe of \(10^{-5}\) is sufficient to explain the 511 keV diffuse emission in the Galaxy. In the present work, we propose quite a straightforward test of pair production via estimation of the 511 keV flux. More concretely, we study the ultimate fate of the positrons after they leave the jet. Observations show that Milky Way-size galaxies, i.e. with \(M_{*}\sim 10^{11}\;M_{\odot}\), are surrounded by vast tenuous halos where the density gradually decreases to \(\sim 10^{-4}\) cm\({}^{-3}\) at galactocentric radii \(r\sim(50-70)\) kpc (10). This CGM has a very complex multiphase structure, where denser and colder regions with \(T\sim 10^{5}\) K are surrounded by hotter and more diluted plasma with temperature closer to the virial value, \(T\sim 10^{6}\) K. We assume the following scenario: positrons brake and are trapped in the extended CGM of the host galaxy. The time scales of braking and thermalization with the plasma of the CGM depend on the density; after thermalization, annihilation begins to operate effectively.1 The annihilation could proceed either through the direct channel, \(e^{+}+e^{-}\longrightarrow 2\gamma,\;E_{\gamma}=511\) keV, or through the bound state, so-called positronium (Ps). Ps could form in two states, depending on the mutual orientation of the \(e^{+}\) and \(e^{-}\) spins: the singlet state, para-positronium (p-Ps), decays in \(\sim 10^{-10}\) s into 2 photons with \(E_{\gamma}=511\) keV; the triplet, ortho-positronium (o-Ps), lives longer, \(\sim 10^{-7}\) s, and decays into three gamma photons, forming a continuum. The branching ratio of p-Ps to o-Ps formation is 1:3. Annihilation time scales depend on the density and temperature of the halo gas and the initial positron energy, and appear to be extremely long (\(>t_{\rm br}\)), as we show in Sec. 2.3. Still, if the positrons are effectively retained in the halo for Gyrs (see, e.g., [12; 13; 14; 15] for the case of CR protons), we could expect the emergence of the 511 keV annihilation line as a smoking gun for pair-dominated jets. An analogous idea was previously put forward in (16), but in that work the authors assumed that the positrons are ejected into the intra-cluster medium. Its high temperature (above \(10^{7}\) K) suppresses positronium formation so that effectively all annihilations result in 511 keV line emission. In this paper we focus on the constraints that could be obtained from the observations of individual galaxies in the local Universe. Constraints from the cosmological signal, produced by the possible background from red-shifted annihilation lines from the halos of all galaxies, will be studied elsewhere. Sgr A\({}^{*}\) as a source of positrons was considered in [17; 18].
The difference is that these papers were studying present time (\(\sim\) Myr) positron production and annihilation in the galactic ISM, while we are dealing with much longer time scales (\(\sim\) Gyr) and annihilation in the much more extended halo. Footnote 1: For the realistic properties of leptons and halo, the energy losses before thermalization due to in-flight direct annihilation amount only to several % of the total energy. E.g., in [11] it is shown that, in the Galaxy, positrons injected with the Lorentz factors 20, 6, 2 lose, respectively, 11, 5.5, 1.4% of their energy before they thermalize. In our work, we will assume \(\Gamma\leq 20\) (see Section 3), therefore we will neglect the energy lost via in-flight annihilation. ## 2 511 keV Flux Estimation The radiation at 511 keV is produced by the annihilation of electron-positron pairs. In our approach, there are several assumptions which provide the physical ground for the whole calculation. We first outline them briefly and then discuss them in more detail in the following subsections. 1) The growth history of the SMBH can be derived from the luminosity function of AGNs at various redshifts via the continuity equation, if some relation between the AGN luminosity, accretion rate and accretion efficiency is assumed (Section 2.1). Due to this relation, we obtain not only the SMBH growth history, but also the AGN luminosity history. 2) Positrons are born in the jet of the AGN. We assume that \(n_{\rm pair}\sim 15\) positrons are produced per proton (Section 2.2). 3) We suppose that the host galaxy is surrounded by a halo with the density \(n\sim 10^{-4}\) cm\({}^{-3}\). For the temperature we consider two bracketing cases, i.e. \(T=10^{5}\) K and \(T=10^{6}\) K. We expect that results for the real multiphase system would lie in between them. In this halo, positrons brake due to Coulomb collisions and thermalize. Subsequently, they annihilate with the medium electrons directly or via positronium formation. We assume that the positrons are retained in the CGM for cosmological times. The final result depends on several parameters, such as the Eddington ratio, the accretion efficiency, the bulk Lorentz factor of the jet, and the halo density. We choose values which are preferred observationally or have some theoretical basis in the literature. ### SMBH growth and AGN luminosity Since the famous work of Soltan (19), there have been many attempts to relate the growth history of SMBHs with the evolution of the luminosity function of AGNs hosted by them. In our work, we follow Marconi et al. (20) (hereafter M04). For the paper to be more self-contained, we briefly repeat the ideas of this paper relevant to us. If \(N(M,t)\) is the comoving number density of SMBHs with the mass \(M\) at the cosmic time \(t\) and \(\langle\dot{M}\rangle\) is the average accretion rate of a SMBH with the mass \(M\), then the continuity equation holds: \[\frac{\partial N(M,t)}{\partial t}+\frac{\partial}{\partial M}\left[N(M,t) \langle\dot{M}(M,t)\rangle\right]=0. \tag{1}\] The AGN luminosity function can be related to the mass function as follows: \[\phi(L,t)d\log L=\delta(M,t)N(M,t)dM \tag{2}\] where \(\delta(M,t)\) is the fraction of SMBHs active at time \(t\). The accretion efficiency \(\varepsilon\) is defined as the fraction of the infalling matter rest energy which is converted into radiation, and the Eddington ratio \(\lambda\) is the ratio between the AGN luminosity and the Eddington luminosity.
With these definitions, the luminosity, mass and accretion rate are related as follows: \[L=\lambda\frac{Mc^{2}}{t_{\rm Edd}}=\varepsilon\dot{M}_{\rm acc}c^{2} \tag{3}\] where \(t_{\rm Edd}\) is the Eddington time and \(\dot{M}_{\rm acc}\) is the accretion rate. The BH growth rate then equals \(\dot{M}=(1-\varepsilon)\dot{M}_{\rm acc}\). The quantity \(\langle\dot{M}\rangle\) is the growth rate averaged over the whole population of BHs of mass \(M\), in other words, \(\langle\dot{M}\rangle=\delta(M,t)\dot{M}\). From Eqs. 2-3 follows \[N(M,t)\langle\dot{M}(M,t)\rangle=\frac{1-\varepsilon}{\varepsilon c^{2}\ln 1 0}\phi(L,t)_{L=\lambda Mc^{2}/t_{\rm Edd}}\frac{{\rm d}L}{{\rm d}M}. \tag{4}\] This can be substituted into Eq. 1 and, with \(\varepsilon\) and \(\lambda\) constant, we obtain \[\frac{\partial N(M,t)}{\partial t}=-\frac{(1-\varepsilon)\lambda^{2}c^{2}}{ \varepsilon t_{\rm Edd}^{2}\ln 10}\left[\frac{\partial\phi(L,t)}{\partial L} \right]_{L=\lambda Mc^{2}/t_{\rm Edd}}. \tag{5}\] For an initial condition, M04 assumed that at the starting redshift \(z_{s}=3\) all the SMBHs were active, i.e. \(\delta(M,t(z_{s}))=1\), which implies \[MN(M,t_{s})=[\phi(L,t_{s})]_{L=\lambda Mc^{2}/t_{\rm Edd}}. \tag{6}\] Equations 5, 6 can be solved once we have data on the AGN luminosity function from sufficiently large \(z\) up to the present. Like M04, we used a representation of \(\phi(L,t)\) by Ueda et al. (21). From Eq. 5 we get an expression for the average growth rate: \[\langle\dot{M}(M,t)\rangle=\frac{1}{t_{\rm Edd}\ln 10}\frac{(1-\varepsilon) \lambda}{\varepsilon N(M,t)}\phi(L,t)_{L=\lambda Mc^{2}/t_{\rm Edd}}. \tag{7}\] In Fig. 1 we show how \(\langle\dot{M}(M,t)\rangle\) evolves. On the right axis of this plot we also show the average bolometric luminosity which is obtained from Eq. 3 if we replace \(\dot{M}(M,t)\) with \(\langle\dot{M}(M,t)\rangle\).

Figure 1: Average growth rate of an SMBH with the initial mass \(1.56\times 10^{4}M_{\odot}\). On the right axis is shown the corresponding average AGN bolometric luminosity.

We also need to select candidates that could provide the strongest constraints. It is obvious that a stronger signal would be expected from the nearest galaxies with more massive black holes. There are three possible candidates - the Milky Way (\(M_{\rm SMBH}=4\times 10^{6}\ M_{\odot},d\sim 50\) kpc), where we use the characteristic halo radius as the source distance (22), the M31 galaxy (\(M_{\rm SMBH}\sim(1-2)\times 10^{8}\ M_{\odot},\ d\sim 750\) kpc, (23)), and the Cen A galaxy (\(M_{\rm SMBH}=5.5\times 10^{7}\ M_{\odot},\ d\sim 3.5\) Mpc, (24)). M31 seems to be the perfect candidate - the expected total flux is comparable to that from the Milky Way. However, the signal from the halo of our own galaxy is almost uniform, while from the M31 galaxy it would be much more localized, with an angular size around \(10^{\circ}\). Now we can choose a starting mass of a BH at \(z_{s}=3\) and integrate the equation \(dM=\dot{M}(M,t)dt\) to obtain the BH mass at \(z=0\). In particular, we can find by trial such a starting mass that finally arrives at the present mass of the SMBH in M31. Different estimations of this mass fall into the range \(5\times 10^{7}-1.4\times 10^{8}M_{\odot}\) (25); we adopt the value of \(10^{8}M_{\odot}\). As a check, we calculated a number of tracks for different starting BH masses and obtained a plot similar to Fig. 8 of M04; it is shown in Fig. 2.
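For illustration, the trial integration of \(dM=\langle\dot{M}\rangle dt\) can be organized as a simple shooting procedure. The sketch below is ours and substitutes a toy duty cycle for the actual \(\delta(M,t)\) derived from the Ueda et al. luminosity function, so its numbers are not the paper's.

```python
import numpy as np

T_EDD = 0.45                 # Eddington (Salpeter) time in Gyr
EPS, LAM = 0.1, 1.0          # accretion efficiency and Eddington ratio

def duty_cycle(t):
    # Toy stand-in for delta(M, t); the real one comes from phi(L, t).
    return np.exp(-t / 0.5)

def mdot_avg(M, t):
    # <Mdot> = delta * (1 - eps) * lam * M / (eps * t_Edd), cf. Eqs. (3), (7).
    return duty_cycle(t) * (1 - EPS) * LAM * M / (EPS * T_EDD)

def grow(M0, t0=0.0, t1=11.0, steps=20000):
    """Integrate dM = <Mdot> dt (times in Gyr) from a trial starting mass at z_s = 3."""
    dt = (t1 - t0) / steps
    M, t = M0, t0
    for _ in range(steps):
        M += mdot_avg(M, t) * dt
        t += dt
    return M

# Shooting: bisect (in log space) on M0 until the track ends at 1e8 Msun.
lo, hi = 1.0, 1e8
for _ in range(60):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if grow(mid) < 1e8 else (lo, mid)
print(f"starting mass ~ {lo:.2e} Msun")
```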
The track shown in bold starts at the mass of \(1.56\times 10^{4}M_{\odot}\) and ends at \(10^{8}M_{\odot}\); we use it in our calculations as a proxy of the M31 SMBH growth history. Like M04, we adopt the value of the accretion efficiency \(\varepsilon=0.1\) and the Eddington ratio \(\lambda=1\). Furthermore, we assume that the jet kinetic power is tightly connected to the accretion rate (26): \[P_{\rm j}=\eta\dot{M}_{\rm acc}c^{2},\ \eta\sim 1 \tag{8}\] The kinetic power is close to the accretion power, as also shown in (26). Note that the estimate that the factor \(\eta\) is close to unity was obtained in (26) under the assumption that there was one proton per electron in the jet. Although the authors argue for the presence of protons in the jet, the number of protons per electron is not a parameter of the model used, as stated in (27). If the actual proton load of the jet is smaller, i.e. there is more than one electron per proton, then the total jet power is also smaller (see the discussion before Eq. 9). ### Positron production The composition of AGN jets is currently a matter of investigation. In (8) it is argued that pairs can be created in the inner part of the jet due to photon-photon collisions if the luminosity is sufficient, i.e. above \(10^{44}\) erg/s at 1 MeV. The discrepancy between jet powers calculated from blazar spectral fits and from radio lobe calorimetry was discussed by Sikora in (6). He investigated non-zero pair content as a cause for the power overestimation by spectral fits and came to the conclusion that the presence of about 15 pairs per proton can reconcile the estimations by the two methods. A similar question was studied in (7). The authors compared jet powers estimated by spectral fitting, radio-core shift, radio lobes and a phenomenological estimate based on gamma-ray luminosity. They found that on average spectral fitting and the core-shift method give powers which are ten times larger than those from the radio lobe method. Like in (6), they suggest the presence of \(\sim 15\) pairs per proton as a possible explanation, which would reduce the power estimated from spectral fitting. In a recent series of papers, Zdziarski et al. investigated the composition of jets in two black hole X-ray binaries, MAXI J1820+070 (3) and Cyg X-1 (4), and in the radio galaxy 3C 120 (5). In particular, they estimated the pair production rate at the jet base of 3C 120 and it appeared to correspond quite well to the flux of synchrotron emitting electrons downstream in the jet. They also found that the kinetic power in ions greatly exceeds the maximum possible jet power if ions are as abundant as electrons. Interestingly, they arrive at similar conclusions in the case of the two X-ray binaries. One can notice that in the works dealing with jets in AGN, the estimates of \(n_{\rm pair}\) are approximately 10-20. We can adopt this value in our calculations, but we show later that the exact value does not affect our conclusions. We only suppose that positrons are produced in a certain amount compatible with observations. The total jet power is given by Eq. 8. As we already mentioned, this estimation is obtained in (26) under the assumption that there is one proton per electron in the jet. They also found that the total power of the jet appears to be dominated by protons: their power is, on average, more than an order of magnitude larger than the radiation power - the second-largest component. The total number of leptons is constrained by the observed amount of radiation.
The number of protons could be obtained from that, given that we know the ratio of the number of protons to the number of leptons. Therefore, if we allow for positrons in the jet, the number of protons is reduced accordingly. Since the jet power is dominated by protons, it will diminish proportionally. In particular, if there are \(n_{\rm pair}\) positrons per proton then the jet power is diminished by a factor \(2n_{\rm pair}\): \[P_{\rm j}=\eta\dot{M}_{\rm acc}c^{2}/2n_{\rm pair}. \tag{9}\]

Figure 2: Growth history for SMBHs of various initial masses calculated by the method of (20). The track used in the calculations is shown in bold.

As this power is supplied by protons, we can relate the jet power to the rate of the proton flux \(\dot{N}_{\rm p}\) (i.e. the number of protons traversing the jet cross-section per unit time) via the jet bulk Lorentz factor \(\Gamma\), for which we choose the benchmark value of 10: \[P_{\rm j}=\dot{N}_{\rm p}\Gamma m_{\rm p}c^{2}. \tag{10}\] The positron production rate equals \(\dot{N}_{+}=n_{\rm pair}\dot{N}_{\rm p}\). Combining this with Eqs. (9-10) we obtain \[\dot{N}_{+}=\frac{\eta\dot{M}_{\rm acc}}{2\Gamma m_{\rm p}}, \tag{11}\] and for further numerical estimates we will assume the benchmark value \(\eta=1\). One can see that \(n_{\rm pair}\) cancels out. This is natural in our setup: with larger \(n_{\rm pair}\) more pairs are produced, but the jet kinetic power estimated from the electron content decreases, and both dependencies are linear. We emphasize that the above formulae are valid if the jet power is dominated by protons. This may not be the case when \(n_{\rm pair}\gtrsim 10\). However, such a scenario is rather questionable because, as stated in (26), it would mean the jet power is less than the radiation power and the jet would stop. ### Positron braking and annihilation Positrons produced in the jet are (at least) moderately relativistic. To annihilate efficiently, they should thermalize in the ambient medium. The medium is the halo gas with a temperature of order \(10^{5}-10^{6}\) K and density \(10^{-4}-10^{-3}\) cm\({}^{-3}\). This gas is ionized, therefore positrons brake mostly due to Coulomb collisions with ions. For the energy loss rate we take Eq. (14) from (28) and estimate the braking time as the initial positron kinetic energy divided by the loss rate: \[t_{\rm br}=(\Gamma-1)mc^{2}\left\{7.7\times 10^{-9}\frac{n}{\beta}\left[\ln \left(\frac{\Gamma}{n}\right)+73.6\right]\right\}^{-1} \tag{12}\] where the expression in braces is in eV/s, \(n\) is the halo density in cm\({}^{-3}\), and \(\beta\) is the positron velocity in units of \(c\). Positrons are not expected to be monoenergetic. However, if their spectrum is not too hard, the overwhelming majority of the particles is contained in the low-energy part of the spectrum, therefore we assume that all the positrons initially have the same Lorentz factor as the jet. For the parameter range of interest, the above equation can be written as \[t_{\rm br}\approx 2.2\times(\Gamma/10)(n/10^{-4}{\rm cm}^{-3})^{-1}\quad{\rm Gyr}. \tag{13}\] The leptons could also brake more efficiently if they lose their energy through adiabatic cooling. It could be the case if they stayed attached to the expanding galactic wind rather than simply diffusing away into the outer halo. In this case the characteristic time scale could be estimated as \(t_{ad}\sim r_{halo}/v_{wind}\), and for benchmark values of \(r_{halo}=50\) kpc, \(v_{wind}=300\) km/s this time scale would be around 200 Myr.
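For quick order-of-magnitude checks, Eq. (12) is easy to evaluate numerically. The sketch below is our own convenience function (the constants follow the equation above); the benchmark call reproduces the scaling of Eq. (13).

```python
import math

def braking_time_gyr(gamma, n_cm3):
    """Positron braking time from Eq. (12), returned in Gyr.

    gamma : initial positron Lorentz factor (taken equal to the jet's)
    n_cm3 : halo density in cm^-3
    """
    mc2_ev = 511e3                      # electron rest energy, eV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    loss_rate = 7.7e-9 * (n_cm3 / beta) * (math.log(gamma / n_cm3) + 73.6)  # eV/s
    t_s = (gamma - 1.0) * mc2_ev / loss_rate
    return t_s / 3.156e16               # seconds per Gyr ~ 3.156e16

# Benchmark of Eq. (13): Gamma = 10, n = 1e-4 cm^-3 gives ~2.2 Gyr.
print(braking_time_gyr(10, 1e-4))
```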
Positrons can annihilate directly or after the formation of positronium, the rate constants of the two processes being respectively \(\langle\sigma_{\rm a}v\rangle\) and \(\langle\sigma_{\rm r}v\rangle\), where the subscripts stand for "annihilation" and "recombination". We calculate these rates according to the formulae from (29). For the halo density \(n\sim 10^{-4}\) cm\({}^{-3}\) and temperature \(T\sim 10^{6}\) K the time of annihilation appears to exceed the braking time. Again, for reasonable \(T,n\) we can roughly estimate these times as \[t_{\rm a}\approx 20\times(T/10^{6}{\rm K})^{0.5}(n/10^{-4}{\rm cm }^{-3})^{-1}\quad{\rm Gyr} \tag{14}\] \[t_{\rm r}\approx 27\times(T/10^{6}{\rm K})^{1.1}(n/10^{-4}{\rm cm }^{-3})^{-1}\quad{\rm Gyr} \tag{15}\] To obtain the total annihilation rate in the halo, we must calculate the positron content there, which is determined by the balance between the positron production and annihilation. We neglect annihilation 'in flight', i.e. before thermalization, therefore what we need is the number of _thermalized_ positrons. Their production rate at time \(t\) equals \(\dot{N}_{+}\) from Eq. 11 at time \(t-t_{\rm br}\) because newly created positrons need \(t_{\rm br}\) to brake. To sum up, we solve the following equation \[\frac{dN_{+}(t)}{dt}=\dot{N}_{+}(t-t_{\rm br})-nN_{+}(t)(\langle\sigma_{\rm a} v\rangle+\langle\sigma_{\rm r}v\rangle) \tag{16}\] with the condition that no positrons had been produced before the start time \(t_{s}\): \(\dot{N}_{+}(t<t_{s})=0\). The results are shown in Fig. 3. Note that the thermalized positron content starts to grow at \(z=1.44\). The time between \(z=3\) and \(z=1.44\) is 2.2 Gyr and equals the braking time for \(T=10^{6}\) K, \(\Gamma=10\). Finally, to calculate the resulting 511 keV photon flux, we notice that each direct annihilation produces two photons and so does one quarter of the positronium annihilations (the remaining 3/4 decay into three continuum photons). The flux at the Earth is \[F=\frac{2nN_{+}(t_{e})(\langle\sigma_{\rm a}v\rangle+\frac{1}{4}\langle\sigma_ {\rm r}v\rangle)}{4\pi d^{2}} \tag{17}\] where \(d=750\) kpc is the assumed distance to M31. For the halo parameters \(n=10^{-4}\) cm\({}^{-3}\), \(T=10^{6}\) K, we obtain \(F=2.5\times 10^{-4}\) photon cm\({}^{-2}\)s\({}^{-1}\). This is larger than what was obtained in (16) for the Virgo cluster and Cen A (\(\sim 10^{-6}\) and \(\sim 10^{-5}\) photon cm\({}^{-2}\)s\({}^{-1}\) respectively). In order to estimate possible effects of faster braking due to advection we also considered the limiting case with \(t_{br}=0\), i.e. instantaneous braking. It only slightly changed our results for \(T=10^{6}\) K, but for \(T=10^{5}\) K the resulting flux estimate decreased almost two-fold. This behavior could be expected: in the former case the annihilation speed is low and we do not depend too much on the details of the immediate history of the SMBH evolution. In the latter case this is not true: with the decreased braking time we 'probe' a more recent epoch of accretion, when the accretion rate was considerably lower. ## 3 Discussion The calculation of the SMBH growth history depends on several parameters and assumptions. First of all, we postulated the values of the accretion efficiency \(\varepsilon=0.1\) and the Eddington ratio \(\lambda=1\). The accretion efficiency is theoretically bound between 0.054 for a non-rotating black hole and 0.42 for a maximally rotating black hole.
In a number of studies of SMBH growth, including M04, this parameter was found in the range 0.05-0.4 (see (30) and references therein). Discussion of the \(\lambda\) parameter in (30) shows that it can be roughly estimated as 0.01-0.1. In (20), Marconi et al. compared the current mass function of the black holes which have hosted AGNs (they call them relic black holes) with the mass function of the SMBHs observed in the local Universe (they call them local black holes). In the calculations they adopted the values \(\varepsilon=0.1\) and \(\lambda=1\) because they provided a relic SMBH mass function fairly close to that of the local SMBHs. One might think that the value of \(\lambda\) is overestimated. However, one should bear in mind that the mass growth is assumed to take place when the AGN is active, i.e. radiates at the luminosity given by Eq. 3. The luminosity function is related to the SMBH mass function via Eq. 2, where there is a dependence on the AGN duty cycle coming from the \(\delta(M,t)\) function. As Marconi et al. argue, the fact that the relic and local SMBH mass functions agree at \(\lambda=1\) simply means that the growth effectively takes place when the AGN luminosity is close to the Eddington limit. When calculating the SMBH growth history, we used the _average_ growth rate \(\langle\dot{M}\rangle\). Of course, for an individual object we do not have the precise growth history. However, our analysis shows that, if the annihilation time is long (e.g. as in the case \(T=10^{6}\) K) then the positrons survive for a long time as well, hence the most important parameter is the final BH mass, because it determines the total mass accreted, the total energy radiated and the total positron population created. Even if the annihilation time is relatively short, our conclusions still hold if the main mass growth was at the same epoch, i.e. \(z\approx 1.2\) or later. Another assumption vital for our reasoning is the existence of a hot gaseous halo around M31. Such hot halos are predicted by theories of galaxy formation (31), and they have now been found in a number of massive spirals (32), (33). The discovery in X-rays of a hot halo around M31 was also reported (34), and there is evidence from radio and UV data as well. In the work (35) the authors observed high velocity clouds near M31, and two of these clouds demonstrated a 'head-tail' structure which can be attributed to the interaction with the ambient medium. Observations with the Hubble Space Telescope in UV reveal a massive halo around M31 which includes hot components with \(T\sim 10^{5}-10^{6}\) K (36). To investigate how the result depends on the parameters which are loosely constrained, we perform the calculations for several combinations of them. We try the halo temperatures \(T=10^{5},10^{6}\) K and the jet bulk Lorentz factors \(\Gamma=5,10,20\). The resulting 511 keV fluxes are shown in Table 1. For \(T=10^{6}\) K both annihilation times exceed the Hubble time, therefore the flux is mostly affected by the total number of produced pairs. With the jet power fixed, a larger \(\Gamma\) implies smaller proton and pair production rates, hence a smaller resulting flux.

Figure 3: Evolution of the content of _thermalized_ positrons in the halo of M31. Note that the \(z\) range is different from that in Figs. 1 and 2.

However, for \(T=10^{5}\) K the positronium annihilation time is only 2 Gyr, while the braking times \(t_{\rm br}\) for \(\Gamma=10\) and 20 are respectively 2.2
and 4.4 Gyr, and an interesting interplay emerges: a larger \(\Gamma\) leads to longer braking times, meaning that more positrons survive since the epoch of the AGN luminosity peak. On the other hand, a larger \(\Gamma\) leads to lower rates of pair production (see Eq. 11). Consequently, the resulting photon flux depends on the bulk Lorentz factor non-monotonically, which is evident from the second line of the table. Finally, in Fig. 4 we show the 511 keV photon production rate in the case \(\Gamma=10\) for the two values of the halo temperature, \(10^{6}\) and \(10^{5}\) K. One can see that, as long as new pairs are created, the \(T=10^{5}\) K medium produces more photons due to the shorter annihilation time. When the accretion starts to decline and pair production drops, fast annihilation exhausts the positron population more quickly. As a result, the present time photon production rate is larger when \(T=10^{6}\) K. The estimated flux can be compared to the observations. Siegert et al. (2) investigated the positron annihilation line in the Milky Way with INTEGRAL/SPI. Along with its diffuse emission they tried to model several point sources, such as Sgr A*, Crab, and Cyg X-1. They also modeled M31 and found an upper limit of \(10^{-4}\) photon cm\({}^{-2}\)s\({}^{-1}\) at the significance level of \(2\sigma\). The M31 galaxy was modeled as a _point_ source. In our setup, the emission originates from the halo, and even with a source size of 30 kpc, its angular size is \(2^{\circ}\). So with the flux of \(\sim 10^{-4}\) cm\({}^{-2}\)s\({}^{-1}\) it is tentatively close to the current limits and can potentially be detected with INTEGRAL or future MeV missions like e-ASTROGAM [37]. ## 4 Conclusions AGN jets are a viable source of positrons in the Universe. If positrons are produced in AGN jets and are trapped in galactic gaseous halos, they can survive for a substantial amount of time due to considerable braking times and long (for certain medium parameters) annihilation times. We calculated the positron production rate, their braking and annihilation times, and finally estimated the 511 keV photon flux at the present time due to the presumable past activity of the M31 nucleus, the source with the highest expected signal. To do so, we calculated the growth history of the SMBH in M31 following [20] and linked it to the AGN luminosity and positron production rate. We found that for a reasonable parameter combination, the present 511 keV photon flux at Earth can be as high as a few times \(10^{-4}\) cm\({}^{-2}\)s\({}^{-1}\) and can potentially be observed in the near future. ## Acknowledgements The authors thank Prof. Konstantin Postnov for fruitful discussions and the anonymous referees whose comments helped to improve the paper. The work of the authors was supported by the Ministry of Science and Higher Education of Russian Federation under the contract 075-15-2020-778 in the framework of the Large Scientific Projects program within the national project "Science". This research has made use of NASA's Astrophysics Data System.
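For completeness, the balance equation (16) and the flux numerator of Eq. (17) can be integrated with a simple explicit scheme. The sketch below is ours; the rate constants are back-solved from the scalings (14)-(15) and the injection history is a crude placeholder, so the output is illustrative only.

```python
GYR = 3.156e16                 # seconds per Gyr
n = 1e-4                       # halo density, cm^-3
t_br = 2.2 * GYR               # braking time for Gamma = 10, Eq. (13)
sv_a = 1.0 / (20.0 * GYR * n)  # <sigma_a v> implied by t_a ~ 20 Gyr, Eq. (14)
sv_r = 1.0 / (27.0 * GYR * n)  # <sigma_r v> implied by t_r ~ 27 Gyr, Eq. (15)

def injection(t):
    # Crude placeholder for Ndot_+(t) of Eq. (11): constant for 2 Gyr, then off.
    if t < 0:                  # no positrons produced before the start time
        return 0.0
    return 1e43 if t < 2 * GYR else 0.0

# Explicit Euler integration of Eq. (16) from z_s = 3 to the present (~11 Gyr).
t, dt, N = 0.0, 0.01 * GYR, 0.0
while t < 11 * GYR:
    N += dt * (injection(t - t_br) - n * N * (sv_a + sv_r))
    t += dt

# 511 keV photon production rate: the numerator of Eq. (17).
rate_511 = 2 * n * N * (sv_a + 0.25 * sv_r)
print(f"N+ = {N:.3e}, photon rate = {rate_511:.3e} s^-1")
```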
2306.06280
Matrices for finite group representations that respect Galois automorphisms
We are given a finite group $H$, an automorphism $\tau$ of $H$ of order $r$, a Galois extension $L/K$ of fields of characteristic zero with cyclic Galois group $\langle\sigma\rangle$ of order $r$, and an absolutely irreducible representation $\rho\colon H\to\operatorname{\sf GL}(n,L)$ such that the action of $\tau$ on the character of $\rho$ is the same as the action of $\sigma$. Then the following are equivalent. $\bullet$ $\rho$ is equivalent to a representation $\rho'\colon H\to\operatorname{\sf GL}(n,L)$ such that the action of $\sigma$ on the entries of the matrices corresponds to the action of $\tau$ on $H$, and $\bullet$ the induced representation $\operatorname{\sf ind}_{H,H\rtimes\langle\tau\rangle}(\rho)$ has Schur index one; that is, it is similar to a representation over $K$. As examples, we discuss a three dimensional irreducible representation of $A_5$ over $\mathbb{Q}[\sqrt5]$ and a four dimensional irreducible representation of the double cover of $A_7$ over $\mathbb{Q}[\sqrt{-7}]$.
David J. Benson
2023-06-09T22:11:10Z
http://arxiv.org/abs/2306.06280v1
# Matrices for finite group representations that respect Galois automorphisms ###### Abstract. We are given a finite group \(H\), an automorphism \(\tau\) of \(H\) of order \(r\), a Galois extension \(L/K\) of fields of characteristic zero with cyclic Galois group \(\langle\sigma\rangle\) of order \(r\), and an absolutely irreducible representation \(\rho\colon H\to\mathsf{GL}(n,L)\) such that the action of \(\tau\) on the character of \(\rho\) is the same as the action of \(\sigma\). Then the following are equivalent. \(\bullet\)\(\rho\) is equivalent to a representation \(\rho^{\prime}\colon H\to\mathsf{GL}(n,L)\) such that the action of \(\sigma\) on the entries of the matrices corresponds to the action of \(\tau\) on \(H\), and \(\bullet\) the induced representation \(\mathsf{ind}_{H,H\rtimes\langle\tau\rangle}(\rho)\) has Schur index one; that is, it is similar to a representation over \(K\). As examples, we discuss a three dimensional irreducible representation of \(A_{5}\) over \(\mathbb{Q}[\sqrt{5}]\) and a four dimensional irreducible representation of the double cover of \(A_{7}\) over \(\mathbb{Q}[\sqrt{-7}]\). Key words and phrases: Brauer group, characters of finite groups, Galois automorphism, Hilbert's Theorem 90, Schur index. 2020 Mathematics Subject Classification: 20C15 ## 1. Introduction This paper begins with the following question, suggested to the author by Richard Parker. The alternating group \(A_{5}\) has a three dimensional representation over the field \(\mathbb{Q}[\sqrt{5}]\) which induces up to the symmetric group \(S_{5}\) to give a six dimensional irreducible that can be written over \(\mathbb{Q}\). Given an involution in \(S_{5}\) that is not in \(A_{5}\), is it possible to write down a \(3\times 3\) matrix representation of \(A_{5}\) such that the Galois automorphism of \(\mathbb{Q}[\sqrt{5}]\) acts on matrices in the same way as the involution acts on \(A_{5}\) by conjugation? More generally, we are given a finite group \(H\), an automorphism \(\tau\) of order \(r\), a Galois extension \(L/K\) of fields of characteristic zero with cyclic Galois group \(\mathsf{Gal}(L/K)=\langle\sigma\rangle\) of order \([L:K]=r\), and an absolutely irreducible representation \(\rho\colon H\to\mathsf{GL}(n,L)\). We assume that the action of \(\tau\) on the character of the representation \(\rho\) is the same as the action of \(\sigma\). Then the question is whether it is possible to conjugate to a representation \(\rho^{\prime}\colon H\to\mathsf{GL}(n,L)\) with the property that the Galois automorphism \(\sigma\) acts on matrices in the same way as \(\tau\) acts on \(H\). In other words, we are asking whether the following diagram can be made to commute. We answer this using the invariant \(\lambda(\rho)\) in the relative Brauer group \[\mathsf{Br}(L/K)=H^{2}(\langle\sigma\rangle,L^{\times})\cong K^{\times}/N_{L/K }(L^{\times})\] that defines the division algebra associated to the representation obtained by inducing to the semidirect product. **Theorem 1.1**.: _Let \(\rho\colon H\to\mathsf{GL}(n,L)\) be as above. Then there is an invariant \(\lambda(\rho)\in K^{\times}/N_{L/K}(L^{\times})\) such that the following are equivalent._ 1. \(\lambda(\rho)=1\)_,_ 2. _There is a conjugate_ \(\rho^{\prime}\) _of_ \(\rho\) _making the diagram above commute,_ 3.
_If_ \(G\) _is the semidirect product_ \(H\rtimes\langle\tau\rangle\) _then the induced representation_ \(\mathsf{ind}_{H,G}(\rho)\) _has Schur index equal to one; in other words, it can be written over_ \(K\)_._ _More generally, the order of \(\lambda(\rho)\) in \(K^{\times}/N_{L/K}(L^{\times})\) is equal to the Schur index of the induced representation, and the associated division algebra is the one determined by \(\lambda(\rho)\)._ The equivalence of (1) and (2) is proved in Section 3. The equivalence of (1) and (3) is more standard, see for example Turull [4], and is proved in Section 4. Combining these gives the more interesting statement of the equivalence of (2) and (3). We end with some examples. In the case of the three dimensional representations of \(A_{5}\), we have \(\lambda(\rho)=1\), and we write down explicit matrices for \(\rho^{\prime}\), though they're not very pleasant. In the case of the four dimensional irreducible representations of \(2A_{7}\), we have \(\lambda(\rho)=-2\), which is not a norm from \(\mathbb{Q}[\sqrt{-7}]\), and the division ring associated to the induced representation is the quaternion algebra with symbol \((-2,-7)_{\mathbb{Q}}\). **Acknowledgement.** I would like to thank Richard Parker for suggesting this problem and the examples, and Alexandre Turull for some helpful comments. ## 2. The matrix \(X\) Consider the composite \(\sigma\circ\rho\circ\tau^{-1}\): \[H\xrightarrow{\tau^{-1}}H\xrightarrow{\rho}\mathsf{GL}(n,L)\xrightarrow{ \sigma}\mathsf{GL}(n,L).\] This representation is equivalent to \(\rho\), and so there exists a matrix \(X\), well defined up to scalars in \(L^{\times}\), such that conjugation by \(X\) takes \(\rho\) to \(\sigma\circ\rho\circ\tau^{-1}\). Write \(c_{X}\) for conjugation by \(X\), so that \(c_{X}(A)=XAX^{-1}\). Then we have \[\sigma\circ\rho\circ\tau^{-1}=c_{X}\circ\rho. \tag{2.1}\] By abuse of notation, we shall also write \(\sigma\) for the automorphism of \(\mathsf{GL}(n,L)\) given by applying \(\sigma\) to each of its entries. Then \(c_{\sigma(X)}(\sigma(A))=\sigma(X)\sigma(A)\sigma(X)^{-1}=\sigma(XAX^{-1})\), so we have \[c_{\sigma(X)}\circ\sigma=\sigma\circ c_{X}.\] So equation (2.1) gives \[\sigma^{2}\circ\rho\circ\tau^{-2} =\sigma\circ c_{X}\circ\rho\circ\tau^{-1}\] \[=c_{\sigma(X)}\circ\sigma\circ\rho\circ\tau^{-1}\] \[=c_{\sigma(X)}\circ c_{X}\circ\rho\] \[=c_{\sigma(X).X}\circ\rho.\] Continuing this way, for any \(i>0\) we have \[\sigma^{i}\circ\rho\circ\tau^{-i}=c_{\sigma^{i-1}(X)\cdots\sigma(X).X}\circ\rho.\] Taking \(i=r\), we have \(\sigma^{r}=1\) and \(\tau^{r}=1\), so \[\rho=c_{\sigma^{r-1}(X)\cdots\sigma(X).X}\circ\rho. \tag{2.2}\] **Definition 2.3**.: If \(A\) is an \(n\times n\) matrix over \(L\), we define the norm of \(A\) to be \[N_{L/K}(A)=\sigma^{r-1}(A)\cdots\sigma(A).A\] as an \(n\times n\) matrix over \(K\). Equation (2.2) now reads \[\rho=c_{N_{L/K}(X)}\circ\rho.\] By Schur's lemma, it follows that the matrix \(N_{L/K}(X)\) is a scalar multiple of the identity, \[N_{L/K}(X)=\lambda I.\] Applying \(\sigma\) and rotating the terms on the left, we see that \(\lambda=\sigma(\lambda)\), so that \(\lambda\in K^{\times}\). If we replace \(X\) by a scalar multiple \(\mu X\), then the scalar \(\lambda\) gets multiplied by \(\sigma^{r-1}(\mu)\cdots\sigma(\mu)\mu\), which is the norm \(N_{L/K}(\mu)\). Thus the scalar \(\lambda\) is well defined only up to norms of elements in \(L^{\times}\). 
We define it to be the \(\lambda\)-invariant of \(\rho\): \[\lambda(\rho)\in K^{\times}/N_{L/K}(L^{\times}).\] Thus \(\lambda(\rho)=1\) if and only if \(X\) can be replaced by a multiple of \(X\) to make \(N_{L/K}(X)=I\). ## 3. The matrix \(Y\) The goal is to find a matrix \(Y\) conjugating \(\rho\) to a representation \(\rho^{\prime}\) such that \(\sigma\circ\rho^{\prime}\circ\tau^{-1}=\rho^{\prime}\). Thus we wish \(Y\) to satisfy \[\sigma\circ c_{Y}\circ\rho\circ\tau^{-1}=c_{Y}\circ\rho.\] We rewrite this in stages: \[c_{\sigma(Y)}\circ\sigma\circ\rho\circ\tau^{-1} =c_{Y}\circ\rho\] \[\sigma\circ\rho\circ\tau^{-1} =c_{\sigma(Y)^{-1}}\circ c_{Y}\circ\rho\] \[c_{X}\circ\rho =c_{\sigma(Y)^{-1}Y}\circ\rho.\] Again applying Schur's lemma, \(\sigma(Y)^{-1}Y\) is then forced to be a multiple of \(X\). Since \(N_{L/K}(\sigma(Y)^{-1}Y)=I\), it follows that if there is such a \(Y\) then \(\lambda(\rho)\) is the identity element of \(K^{\times}/N_{L/K}(L^{\times})\). This proves one direction of Theorem 1.1. The other direction is now an immediate consequence of the version of Hilbert's Theorem 90 given in Chapter X, Proposition 3 of Serre [2]: **Theorem 3.1**.: _Let \(L/K\) be a finite Galois extension with Galois group \(\mathsf{Gal}(L/K)\). Then \(H^{1}(\mathsf{Gal}(L/K),\mathsf{GL}(n,L))=0\). _ **Corollary 3.2**.: _Let \(L/K\) be a Galois extension with cyclic Galois group \(\mathsf{Gal}(L/K)=\langle\sigma\rangle\) of order \(r\). If a matrix \(X\in\mathsf{GL}(n,L)\) satisfies \(N_{L/K}(X)=I\) then there is a matrix \(Y\) such that \(\sigma(Y)^{-1}Y=X\)._ Proof.: This is the case of a cyclic Galois group of Theorem 3.1. This completes the proof of the equivalence of (1) and (2) in Theorem 1.1. ## 4. The induced representation Let \(G=H\rtimes\langle\tau\rangle\), so that for \(h\in H\) we have \(\tau(h)=\tau h\tau^{-1}\) in \(G\). Then the induced representation \(\mathsf{ind}_{H}^{G}(\rho)\) is an \(LG\)-module with character values in \(K\), but cannot necessarily be written as an extension to \(L\) of a \(KG\)-module. So we restrict the coefficients to \(K\) and examine the endomorphism ring. **Lemma 4.1**.: \(\mathsf{End}_{KG}(\mathsf{ind}_{H,G}(\rho|_{K}))\) _has dimension \(r^{2}\) over \(K\)._ Proof.: The representation \(\rho|_{K}\) is an irreducible \(KH\)-module, whose extension to \(L\) decomposes as the sum of the Galois conjugates of \(\rho\), so \(\mathsf{End}_{KH}(\rho|_{K})\) is \(r\) dimensional over \(K\). For the induced representation \(\mathsf{ind}_{H,G}(\rho|_{K})=\mathsf{ind}_{H,G}(\rho)|_{K}\), as vector spaces we then have \[\mathsf{End}_{KG}(\mathsf{ind}_{H,G}(\rho|_{K}))\cong\mathsf{Hom}_{KH}(\rho|_{ K},\mathsf{res}_{G,H}\,\mathsf{ind}_{H,G}(\rho|_{K}))\cong r.\,\mathsf{End}_{ KH}(\rho|_{K}).\qed\] **Proposition 4.2**.: _The algebra \(\mathsf{End}_{KG}(\mathsf{ind}_{H,G}(\rho|_{K}))\) is a crossed product algebra, central simple over \(K\), with generators \(m_{\lambda}\) for \(\lambda\in L\) and an element \(\xi\), satisfying_ \[m_{\lambda}+m_{\lambda^{\prime}}=m_{\lambda+\lambda^{\prime}},\qquad m_{ \lambda}m_{\lambda^{\prime}}=m_{\lambda\lambda^{\prime}},\qquad m_{\lambda} \circ\xi=\xi\circ m_{\sigma(\lambda)},\qquad\xi^{r}=m_{\lambda(\rho)}.\] Proof.: We can write the representation \(\mathsf{ind}_{H,G}(\rho|_{K})\) in terms of matrices as follows. 
\[g\mapsto\begin{pmatrix}\rho(g)|_{K}&&&&\\ &\sigma\rho\tau^{-1}(g)|_{K}&&\\ &&\ddots&\\ &&&\sigma^{-1}\rho\tau(g)|_{K}\end{pmatrix},\qquad\tau\mapsto\begin{pmatrix}&&&I\\ I&&&\\ &\ddots&&\\ &&I&\end{pmatrix}\circ\sigma\] It is easy to check that the following are endomorphisms of this representation. \[m_{\lambda}=\begin{pmatrix}\lambda I&&&\\ &\sigma(\lambda)I&&\\ &&\ddots&\\ &&&\sigma^{-1}(\lambda)I\end{pmatrix},\qquad\xi=\begin{pmatrix}&&&&\sigma^{-1}(X)\\ X&&&&\\ &\sigma(X)&&&\\ &&\ddots&&\\ &&&\sigma^{-2}(X)&\end{pmatrix}\] with \(\lambda\in L\) and \(X\) as in Section 2. Since these generate an algebra of dimension \(r^{2}\) over \(K\), by Lemma 4.1 they generate the algebra \(\mathsf{End}_{KG}(\mathsf{ind}_{H,G}(\rho|_{K}))\). The given relations are easy to check, and present an algebra which is easy to see has dimension at most \(r^{2}\), and therefore no further relations are necessary. **Corollary 4.3**.: _The Schur index of the induced representation \(\mathsf{ind}_{H,G}(\rho)\) is equal to the order of \(\lambda(\rho)\) as an element of \(K^{\times}/N_{L/K}(L^{\times})\). In particular, the Schur index is one if and only if \(\lambda(\rho)=1\) as an element of \(K^{\times}/N_{L/K}(L^{\times})\)._ Proof.: This follows from the structure of the central simple algebra \(\mathsf{End}_{KG}(\rho|_{K})\) given in Proposition 4.2, using the theory of cyclic crossed product algebras, as developed for example in Section 15.1 of Pierce [1], particularly Proposition b of that section. This completes the proof of the equivalence of (1) and (3) in Theorem 1.1. In particular, it shows that \(\lambda(\rho)\) can only involve primes dividing the order of \(G\). ## 5. Examples Our first example is a three dimensional representation of \(A_{5}\). There are two algebraically conjugate three dimensional irreducible representations of \(A_{5}\) over \(\mathbb{Q}[\sqrt{5}]\) swapped by an outer automorphism of \(A_{5}\), and giving a six dimensional representation of the symmetric group \(S_{5}\) over \(\mathbb{Q}\). Setting \(\alpha=\frac{1+\sqrt{5}}{2}\), \(\bar{\alpha}=\frac{1-\sqrt{5}}{2}\), we can write the action of the generators on one of these three dimensional representations as follows. \[(12)(34)\mapsto\begin{pmatrix}-1&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix}\qquad(153)\mapsto\begin{pmatrix}-1&1&\alpha\\ \alpha&0&-\alpha\\ -\alpha&0&1\end{pmatrix}\] Taking this for \(\rho\), we find a matrix \(X\) conjugating this to \(\sigma\circ\rho\circ\tau^{-1}\) where \(\sigma\) is the field automorphism and \(\tau\) is conjugation by \((12)\). Using the fact that if \(a=(12)(34)\) and \(b=(153)\) then \(ab^{2}abab^{2}=(253)\), we find that \[X=\begin{pmatrix}1&-\bar{\alpha}&\bar{\alpha}\\ -\bar{\alpha}&1&-\bar{\alpha}\\ \bar{\alpha}&-\bar{\alpha}&1\end{pmatrix}\] We compute that \(\sigma(X).X\) is minus the identity. Now \(-1\) is in the image of \(N_{\mathbb{Q}[\sqrt{5}],\mathbb{Q}}\), namely we have \((2-\sqrt{5})(2+\sqrt{5})=-1\). So we replace \(X\) by \((2-\sqrt{5})X\) to achieve \(\sigma(X).X=I\). Having done this, by Hilbert 90 there exists \(Y\) with \(\sigma(Y)^{-1}.Y=X\). Such a \(Y\) conjugates \(\rho\) to the desired form.
For example we can take \[Y=\begin{pmatrix}1-2\sqrt{5}&3-2\sqrt{5}&-3+2\sqrt{5}\\ 3-2\sqrt{5}&1-2\sqrt{5}&3-2\sqrt{5}\\ -3+2\sqrt{5}&3-2\sqrt{5}&1-2\sqrt{5}\end{pmatrix}.\] Thus we end up with the representation \[(12)(34)\mapsto\begin{pmatrix}-1&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix}\qquad(153)\mapsto\frac{1}{40}\begin{pmatrix}10-4\sqrt{5}&- 5+19\sqrt{5}&25-9\sqrt{5}\\ -10-4\sqrt{5}&25+9\sqrt{5}&-5-19\sqrt{5}\\ -50&35-5\sqrt{5}&-35-5\sqrt{5}\end{pmatrix}\] \[(253)\mapsto\frac{1}{40}\begin{pmatrix}10+4\sqrt{5}&-5-19\sqrt{5}&25+9\sqrt{5 }\\ -10+4\sqrt{5}&25-9\sqrt{5}&-5+19\sqrt{5}\\ -50&35+5\sqrt{5}&-35+5\sqrt{5}\end{pmatrix}.\] Denoting these matrices by \(a\), \(b\) and \(c\), it is routine to check that \(a^{2}=b^{3}=(ab)^{5}=1\), \(a^{2}=c^{3}=(ac)^{5}=1\), and \(c=\sigma(b)=ab^{2}abab^{2}\). More generally, if \(H\) is an alternating group \(A_{n}\) and \(G\) is the corresponding symmetric group \(S_{n}\) then all irreducible representations of \(G\) are rational and so the invariant \(\lambda(\rho)\) is equal to one for any irreducible character of \(H\) that is not rational. So an appropriate matrix \(Y\) may always be found in this case. Our second example is one with \(\lambda(\rho)\neq 1\). Let \(H\) be the group \(2A_{7}\), namely a non-trivial central extension of \(A_{7}\) by a cyclic group of order two. Let \(\tau\) be an automorphism of \(H\) of order two, lifting the action of a transposition in \(S_{7}\) on \(H\), and let \(G\) be the semidirect product \(H\rtimes\langle\tau\rangle\). Then \(H\) has two Galois conjugate irreducible representations of dimension four over \(\mathbb{Q}[\sqrt{-7}]\). Let \(\rho\) be one of them. The induced representation is eight dimensional over \(\mathbb{Q}[\sqrt{-7}]\). Restricting coefficients to \(\mathbb{Q}\) produces a \(16\) dimensional rational representation whose endomorphism algebra \(E\) is a quaternion algebra. Thus the induced representation can be written as a four dimensional representation over \(E^{\mathsf{op}}\cong E\). This endomorphism algebra was computed by Turull [3] in general for the double covers of symmetric groups. In this case, by Corollary 5.7 of that paper, the algebra \(E\) is generated over \(\mathbb{Q}\) by elements \(u\) and \(v\) satisfying \(u^{2}=-2\), \(v^{2}=-7\) and \(uv=-vu\). Thus the invariant \(\lambda(\rho)\) is equal to \(-2\) as an element of \(\mathbb{Q}^{\times}/N_{\mathbb{Q}[\sqrt{-7}],\mathbb{Q}}(\mathbb{Q}[\sqrt{-7}] ^{\times})\) in this case.
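The norm computations in this first example are small enough to verify by machine. The following sketch (ours, using Python's sympy library; not part of the original argument) checks that \(\sigma(X).X=-I\) for the matrix \(X\) above, and that after rescaling by \(2-\sqrt{5}\) the norm becomes the identity.

```python
import sympy as sp

s5 = sp.sqrt(5)
abar = (1 - s5) / 2  # alpha-bar = (1 - sqrt(5)) / 2

# The matrix X conjugating rho to sigma o rho o tau^{-1} (Section 5)
X = sp.Matrix([[1, -abar, abar],
               [-abar, 1, -abar],
               [abar, -abar, 1]])

# Galois conjugate sigma(X): replace sqrt(5) by -sqrt(5)
sX = X.subs(s5, -s5)

# N(X) = sigma(X).X is minus the identity
assert (sX * X).expand() == -sp.eye(3)

# Rescale by 2 - sqrt(5); note (2 - sqrt(5))(2 + sqrt(5)) = -1
X2 = ((2 - s5) * X).expand()
sX2 = X2.subs(s5, -s5)
assert (sX2 * X2).expand() == sp.eye(3)  # now N(X) = I
```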
2308.01043
Discrete metasurface for extreme sound transmission through water-air interface
The mismatch of acoustic impedance at the water-air interface can lead to low transmitted sound energy. In this paper, we propose a discrete metasurface for extreme sound transmission based on the impedance matching theory. By employing topology optimization, discrete unit cells with different aspect ratios are designed with unitary sound transmission. The unit cell of a continuous metasurface is also obtained for comparison. After analyzing the wide-angle performance of the discrete unit cells, samples of both discrete and continuous metasurfaces are fabricated. Sound transmission enhancement of the discrete metasurface is clearly measured compared to the bare water-air interface, and its amplitude is larger than that of the continuous sample. Experimental results are in general agreement with numerical ones when the viscosity of the sample is considered. Furthermore, the frequency shifts between experiment and simulation are attributed to the random immersion of unit cells for the discrete metasurface and the bending of the continuous metasurface, respectively. The present work suggests an alternative way for improving the efficiency of water-air acoustic communication.
Shao-Cong Zhang, Hong-Tao Zhou, Xiao-Tong Gong, Yan-Feng Wang, Yue-Sheng Wang
2023-08-02T09:35:56Z
http://arxiv.org/abs/2308.01043v1
# Discrete metasurface for extreme sound transmission through water-air interface

###### Abstract

The mismatch of acoustic impedance at the water-air interface can lead to low transmitted sound energy. In this paper, we propose a discrete metasurface for extreme sound transmission based on the impedance matching theory. By employing topology optimization, discrete unit cells with different aspect ratios are designed with unitary sound transmission. The unit cell of a continuous metasurface is also obtained for comparison. After analyzing the wide-angle performance of the discrete unit cells, samples of both discrete and continuous metasurfaces are fabricated. Sound transmission enhancement of the discrete metasurface is clearly measured compared to the bare water-air interface, and its amplitude is larger than that of the continuous sample. Experimental results are in general agreement with numerical ones when the viscosity of the sample is considered. Furthermore, the frequency shifts between experiment and simulation are attributed to the random immersion of unit cells for the discrete metasurface and the bending of the continuous metasurface, respectively. The present work suggests an alternative way for improving the efficiency of water-air acoustic communication.

Keywords: Discrete metasurface; extreme sound transmission; water-air interface; impedance matching; topology optimization

## 1 Introduction

Hydro-acoustic communication is currently the main means of communication between land and ocean [1]. However, the ratio of the acoustic impedance of water to air is up to 3600, leading to almost 30 dB transmission loss of sound through the interface [2, 3, 4]. The impedance mismatch at the water-air interface thus highly limits the communication efficiency of radar, sonar, and other acoustic applications [5, 6, 7]. To handle this problem, a specific medium placed at the water-air interface is required. Both the impedance and the thickness of this medium should satisfy the impedance matching conditions [2]. However, though many efforts have been made, it is difficult to find an appropriate medium, whose impedance should be about 60 times that of air [8]. Recently, the concept of acoustic metasurfaces has provided an alternative way of solving the problem. An acoustic metasurface is a kind of artificial structure consisting of subwavelength unit cells [9]. It can exhibit abnormal functions for acoustic waves, such as abnormal refraction [10], mode conversion [11, 12, 13], focusing [14, 15, 16], carpet cloaking [17], etc. In particular, metasurfaces have been used to manipulate acoustic waves across the water-air interface. The investigations can be classified into two parts. First, a number of studies aim to achieve extreme sound transmission across the water-air interface. The structure may consist of loaded membranes [18] or Helmholtz resonators [19] above the water-air interface, or bubbles [20, 21, 22] immersed in the water. Bok et al. [18] realized unitary sound transmission at the water-air interface using the resonance of tensioned membrane units with a central mass. Zhang et al. [19] designed a composite waveguide for extreme sound transmission through the water-air interface, where the impedance is adjusted by changing the geometrical parameters. Lee et al. [21] discussed the effects of the radius of an underwater bubble and the distance of the bubble to the water-air interface on the impedance matching. Moreover, Huang et al. [22] experimentally realized high sound transmission through the interface based on the resonance of underwater bubbles.
Nevertheless, the design of these metasurfaces is usually limited to low frequencies. Second, there are some recent investigations focusing on the wavefront manipulation of acoustic waves across the water-air interface. Liu et al. [23] constructed a hydroacoustic metasurface with an acoustic-focusing effect and enhanced sound transmission. Combining a single-phase unitary-transmission metasurface and a phase-modulated acoustic metasurface, Zhou et al. [24] realized the focusing of different vortex fields across the water-air interface. It is noted that all these reported metasurfaces are continuous. They should be fully replaced even when only part of them fails in use. In this paper, we propose a discrete metasurface for enhanced sound transmission across the water-air interface. Unit cells of the discrete metasurface are designed by digging holes in an isotropic elastic solid using topology optimization. The design scheme has no restriction on the frequency range. Numerical calculations are implemented by the Finite Element Method. Unit cells with different aspect ratios are obtained. For comparison, the unit cell of a continuous metasurface is also designed using the same method. Extreme sound transmissions are achieved based on the impedance matching theory. The transmission performances of the unit cells for oblique incidence and of the metasurfaces assembled by these units are investigated. Discrete and continuous metasurfaces are then fabricated by 3D printing. Enhanced sound transmissions are clearly measured across the water-air interface. Finally, the frequency shifts between experiment and simulation are analyzed by considering the random immersion of the discrete metasurface and the bending of the continuous metasurface.

## 2 The impedance matching theory

To realize extreme transmission across the water-air interface, we first consider an isotropic transformer with length \(w\) and thickness \(l\) placed at the interface [25], as illustrated in Figure 1(a). When a harmonic plane wave with angular frequency \(\omega\) is incident from water to air, the acoustic pressure fields and the corresponding velocities can be expressed as \[P_{i}=P_{ia}e^{j(\omega t-k_{w}y)},\quad P_{r}=P_{ra}e^{j(\omega t+k_{w}y)},\quad P_{Mt}=P_{Mta}e^{j(\omega t-k_{M}y)},\quad P_{Mr}=P_{Mra}e^{j(\omega t+k_{M}y)},\quad P_{t}=P_{ta}e^{j(\omega t-k_{a}y)} \tag{1}\] \[v_{i}=\frac{P_{i}}{\rho_{w}c_{w}},\quad v_{r}=\frac{P_{r}}{\rho_{w}c_{w}},\quad v_{Mt}=\frac{P_{Mt}}{\rho_{M}c_{M}},\quad v_{Mr}=\frac{P_{Mr}}{\rho_{M}c_{M}},\quad v_{t}=\frac{P_{t}}{\rho_{a}c_{a}} \tag{2}\] where \(P_{i}\) and \(P_{r}\) are the incident and reflected waves in the water below \(y=-l\), and \(v_{i}\) and \(v_{r}\) are the corresponding velocities; \(P_{t}\) is the transmitted wave in the air above \(y=0\) and \(v_{t}\) is the corresponding velocity; \(P_{Mt}\) and \(P_{Mr}\) are the transmitted and reflected waves in the transformer between \(y=-l\) and \(y=0\), respectively. \(P_{ia}\), \(P_{ra}\), \(P_{Mta}\), \(P_{Mra}\) and \(P_{ta}\) represent the amplitudes of \(P_{i}\), \(P_{r}\), \(P_{Mt}\), \(P_{Mr}\) and \(P_{t}\), respectively; \(j=\sqrt{-1}\); \(k_{w}\) (\(\rho_{w}\), \(c_{w}\)), \(k_{a}\) (\(\rho_{a}\), \(c_{a}\)) and \(k_{M}\) (\(\rho_{M}\), \(c_{M}\)) represent the wavenumber (density, sound velocity) of water, air and the transformer, respectively.
Figure 1: Schematics of sound transmission across the water-air interface with (a) the effective isotropic transformer, or (b) the optimized unit cell made of a single-phase solid.

On the interface between two different media, the sound pressure and the normal velocity should be continuous, i.e., \[P_{ia}e^{jk_{w}l}+P_{ra}e^{-jk_{w}l}=P_{Mta}e^{jk_{M}l}+P_{Mra}e^{-jk_{M}l},\quad y=-l \tag{3}\] \[\frac{P_{ia}e^{jk_{w}l}-P_{ra}e^{-jk_{w}l}}{\rho_{w}c_{w}}=\frac{P_{Mta}e^{jk_{M}l}-P_{Mra}e^{-jk_{M}l}}{\rho_{M}c_{M}},\quad y=-l \tag{4}\] \[P_{Mta}+P_{Mra}=P_{ta},\quad y=0 \tag{5}\] \[\frac{P_{Mta}-P_{Mra}}{\rho_{M}c_{M}}=\frac{P_{ta}}{\rho_{a}c_{a}},\quad y=0 \tag{6}\] Combining Eqs. (3)-(6), the sound transmission \(T\) from water to air can be given by \[T=\frac{\mid P_{t}\mid^{2}/2\rho_{a}c_{a}}{\mid P_{i}\mid^{2}/2\rho_{w}c_{w}}=\frac{4Z_{a}Z_{w}}{(Z_{a}+Z_{w})^{2}\cos^{2}k_{M}l+\left(Z_{M}+\frac{Z_{a}Z_{w}}{Z_{M}}\right)^{2}\sin^{2}k_{M}l} \tag{7}\] where \(Z_{w}=\rho_{w}c_{w}\), \(Z_{a}=\rho_{a}c_{a}\) and \(Z_{M}=\rho_{M}c_{M}\) represent the impedance of water, air and the transformer, respectively. Extreme sound transmission T=1 can be obtained when \(\cos k_{M}l=0\) and \(Z_{M}^{2}=Z_{a}Z_{w}\). Then we have: \[l=\frac{(2n-1)}{4}\lambda \tag{8}\] \[Z_{M}=\sqrt{Z_{a}Z_{w}} \tag{9}\] where \(n=1,2,\ldots\) and \(\lambda=2\pi/k_{M}\) is the wavelength in the transformer.

## 3 Topology optimization method

In this paper, topology optimization and a genetic algorithm are employed to explore a transformer with extreme sound transmission across the water-air interface. The single-phase transformer is made of epoxy with density \(\rho\)=1180 kg\(\cdot\)m\({}^{-3}\), Young's modulus E=2.65 GPa and Poisson's ratio \(v=0.41\). The density and sound velocity are 1000 kg\(\cdot\)m\({}^{-3}\) and 1500 m/s for water, and 1.21 kg\(\cdot\)m\({}^{-3}\) and 343 m/s for air, respectively. The operating frequency is chosen as \(f\)=10 kHz. The genetic algorithm is used to maximize the sound transmission of the unit cell, and the optimization formulation can be expressed as \[\text{Find:}\quad\rho(\text{i,j})=0\text{ or }1\text{ (i=1,2,...,K; j=1,2,...,L)} \tag{10}\] \[\text{Minimum:}\quad y(f)=1-T \tag{11}\] \[\text{Subject to:}\quad\rho(\text{i,j})=1\text{ (i=1 or K and j=1 or L)} \tag{12}\] \[\text{Max(K or L)}=24 \tag{13}\] where y(\(f\)) is the fitness function for the target; K and L are the numbers of pixels in the design domain shown in Figure 1(b); K varies from 8 to 24 and L=24. Each pixel is given a density \(\rho\) of '0' or '1', representing air or solid, respectively. The outermost side of the design domain is set as '1' to improve the speed of optimization. Meanwhile, considering the symmetry of the water and air regions, as well as the form of plane wave propagation, the unit cell is designed to be y-axis symmetrical. Besides, a "filter" [26] is used to remove isolated elements and fill isolated voids. The evolutionary iteration procedure of the topology optimization is described as follows: first, an initial population (K\({}_{\rm i}\)) with size (S) is randomly generated, and the evolutionary generation (G) is set to 0; second, the fitness function (y\({}_{\rm i}\)) of the current population is calculated using COMSOL Multiphysics; third, if y\({}_{\rm i}\) is smaller than the minimum relative error (Er=10\({}^{-4}\)), the loop is broken and K\({}_{\rm i}\) is printed, otherwise "tournament selection" is used to select a new population (K\({}_{\rm i+1}\)) of population size (S); fourth, crossover and mutation are performed on the new population and the 2nd-4th steps are repeated.
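As a concrete illustration, the short sketch below (ours; illustrative only) evaluates Eq. (7) numerically, confirms that the matching conditions of Eqs. (8) and (9) yield T=1, and outlines the shape of the evolutionary loop described above. The equivalent sound speed \(c_{M}\)=900 m/s is an assumed value of the order of those in Table 1, and the fitness function is a placeholder: in the actual procedure, y(\(f\))=1-T is evaluated with COMSOL Multiphysics on the pixelated unit cell.

```python
import numpy as np

rho_w, c_w = 1000.0, 1500.0      # water
rho_a, c_a = 1.21, 343.0         # air
Z_w, Z_a = rho_w * c_w, rho_a * c_a

def transmission(Z_M, c_M, l, f):
    """Sound transmission from water to air through a slab, Eq. (7)."""
    kMl = 2 * np.pi * f / c_M * l
    return 4 * Z_a * Z_w / ((Z_a + Z_w) ** 2 * np.cos(kMl) ** 2
                            + (Z_M + Z_a * Z_w / Z_M) ** 2 * np.sin(kMl) ** 2)

# Matching conditions, Eqs. (8)-(9): quarter-wave slab with Z_M = sqrt(Z_a Z_w)
f0 = 10e3                        # operating frequency, 10 kHz
Z_M = np.sqrt(Z_a * Z_w)         # ~60 times the impedance of air
c_M = 900.0                      # assumed equivalent speed (cf. Table 1)
l = c_M / f0 / 4                 # l = lambda / 4, i.e. n = 1 in Eq. (8)
assert abs(transmission(Z_M, c_M, l, f0) - 1.0) < 1e-9  # extreme transmission

# Skeleton of the evolutionary loop (fitness is a placeholder here).
rng = np.random.default_rng(0)
K, L, S, Er = 12, 24, 40, 1e-4

def fitness(pixels):
    return rng.random()          # stand-in for y(f) = 1 - T from an FEM solve

pop = rng.integers(0, 2, size=(S, K, L))         # random initial population
for G in range(450):
    scores = np.array([fitness(p) for p in pop])
    if scores.min() < Er:
        break                                     # converged
    picks = rng.integers(0, S, size=(S, 2))       # tournament selection
    pop = pop[np.where(scores[picks[:, 0]] < scores[picks[:, 1]],
                       picks[:, 0], picks[:, 1])]
    flat = pop.reshape(S, -1)
    for i in range(0, S - 1, 2):                  # one-point crossover
        cut = rng.integers(1, flat.shape[1])
        flat[[i, i + 1], cut:] = flat[[i + 1, i], cut:]
    flip = rng.random(flat.shape) < 0.01          # bit-flip mutation
    flat[flip] ^= 1
    pop = flat.reshape(S, K, L)
```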
## 4 Results and Discussion

**1) Unit cells with extreme sound transmission**

By using the topology optimization described in Section 3, the unit cell of the discrete metasurface (UCDM) with extreme transmission and an aspect ratio of 1/2 is obtained, as shown in Figure 2(a). The aspect ratio is defined as K/L. The unit cell of the continuous metasurface (UCCM) is also obtained when K/L=1. Figure 2(a) also clarifies the differences in the number of evolutionary generations for the different cells: the topology optimization of both kinds of cells starts from a random structure at G=1, which cannot satisfy the fitness function in Eq. (11), and ends before G=450. This indicates that the type of the cell hardly influences the number of generations of evolution. Figure 2(b) shows the transmitted sound pressure of these two unit cells at the operating frequency. It can be seen that both of them have a narrow frequency band of high transmission. The deformation and transmitted sound field of these two cells at the resonant frequency are illustrated in Fig. 2(c).

Figure 2: The sound transmitted performance of the two kinds of unit cells. (a) The evolution of the unit cells with the increase of the generation. (b) The transmission of the unit cells with frequency. (c) The distributions of displacement and acoustic pressure of the unit cells under extreme transmission.

It can be clearly seen that the UCCM has two kinds of flexural vibrations at the water-air interface and in the air, namely the first-order and the third-order flexural vibration along the y axis, respectively. However, that of the UCDM is different. It contains not only the first-order and third-order flexural vibrations along the y axis, but also the second-order flexural vibration along the x axis in the air. This is because the UCCM has a different periodic condition from the UCDM. At the same time, there exist two kinds of phases in the transmitted sound pressure. Extreme transmissions are also obtained for unit cells with different aspect ratios, as shown in Figure 3(a). It can be observed that the sound transmissions of these unit cells are close to 1. Figure 3(b) shows the deformation and acoustic pressure of each cell. It depicts that there are two kinds of phases for the unit cells, and the phase difference is \(\pi\). The equivalent parameters of the unit cells with different aspect ratios can be obtained using the transfer matrix method [27]. The results are shown in Table 1, with the equivalent speed \(c^{\prime}\), equivalent density \(\rho^{\prime}\), equivalent impedance \(Z^{\prime}\) and equivalent wavelength \(\lambda^{\prime}\) of the UCDM, respectively. It can be seen that all of the unit cells have a similar normalized equivalent impedance, which satisfies the impedance matching condition in Eq. (9). At the same time, the difference of the equivalent wavelength among the unit cells also suggests a phase change of \(\pi\) according to Eq. (8), consistent with the results in Figs. 2(c) and 3(b).

Figure 3: The sound transmitted performance of the unit cells with different aspect ratio. (a) The transmission of the unit cells with different aspect ratio. (b) The corresponding distributions of displacement and acoustic pressure for the unit cells with extreme transmission.

**2) Wide-angle performance of the discrete unit cell**

In this part, the transmission performance of the UCDM under incident waves with different angles is discussed.
\begin{table} \begin{tabular}{c c c c c} \hline Aspect ratio & \(c^{\prime}\)(m/s) & \(\rho^{\prime}\)(kg/m\({}^{3}\)) & \(Z^{\prime}/Z_{M}\) & \(l/\lambda^{\prime}\) \\ \hline 8/24 & 309 & 79.74 & 0.99 & 0.74 \\ 10/24 & 918 & 26.92 & 0.99 & 0.25 \\ 12/24 & 930 & 26.65 & 0.99 & 0.25 \\ 14/24 & 921 & 26.87 & 0.99 & 0.25 \\ 16/24 & 927 & 26.52 & 0.99 & 0.25 \\ 18/24 & 935 & 26.41 & 0.99 & 0.25 \\ 20/24 & 915 & 26.94 & 0.99 & 0.25 \\ 22/24 & 923 & 26.71 & 0.99 & 0.25 \\ 24/24 & 305 & 81.02 & 0.99 & 0.74 \\ \hline \end{tabular} \end{table} Table 1: Equivalent parameters of the discrete unit cells with different aspect ratios.

As shown in the inset of Figure 4(a), the sound wave is incident from the water side with incident angle \(\theta_{i}\) and transmitted to the air side with refracted angle \(\theta_{t}\). According to Snell's law, we have \[k_{w}\sin\theta_{i}=k_{a}\sin\theta_{t} \tag{14}\] In this case, the sound transmission coefficient in the normal direction of the unit cell is \[T^{\prime}=\frac{k_{a}^{2}-k_{w}^{2}\sin^{2}\theta_{i}}{k_{a}^{2}\cos^{2}\theta_{i}}T \tag{15}\] By using Eq. (15), the effects of the incident angle on the sound transmission of the UCDM and UCCM are calculated and shown in Figs. 4(a) and (c), respectively. It can be clearly seen that the sound transmission of the UCDM is close to 1 for most incident angles. However, high transmission of the UCCM can be obtained only when the incident angle is around \(0^{\circ}\). The refracted acoustic fields of the UCDM and UCCM are plotted in Figs. 4(b) and (d). The refracted angle of both the UCDM and UCCM varies within a small range when the incident angle is changed from \(0^{\circ}\) to \(90^{\circ}\). This can be explained by Eq. (14): since \(k_{a}\) is about 4.4 times larger than \(k_{w}\), \(\theta_{t}\) should be no larger than about \(13^{\circ}\) even for \(90^{\circ}\) incidence. Meanwhile, both unit cells can realize high transmission when the incident angle is less than \(18^{\circ}\). However, the color of the pressure distribution for the UCCM fades away when the incident angle is over \(36^{\circ}\), while the UCDM has relatively high transmission when the incident angle is no more than \(72^{\circ}\). So the UCDM has a better wide-angle performance than the UCCM.

Figure 4: The transmitted performances of the UCDM and UCCM for different oblique incidences. Panels (a) and (c) show the sound transmission of the UCDM or UCCM with different incident angles. The inset in panel (a) is the generalized Snell’s law. Panels (b) and (d) illustrate the sound pressure fields of the UCDM or UCCM with the incident angle varying from \(0^{\circ}\) to \(90^{\circ}\).

**3) Discrete metasurfaces with extreme transmission**

In this part, we construct discrete and continuous metasurfaces using the unit cells mentioned above. Each metasurface consists of 20 unit cells. A Gaussian-type plane wave is used as the excitation in the water. The transmitted sound fields of the discrete and continuous metasurfaces at the operating frequency are shown in Figs. 5(c) and (d). Plane waves are observed in air for both metasurfaces. However, the energy at the transmitted side of the discrete metasurface is more concentrated than that of the continuous one. This is due to the wide-angle performance of the UCDM. As shown in Fig. 4(c), the transmission of the UCCM decreases rapidly away from the normal direction, so a large amount of the energy is refracted to other directions, leading to the low concentration of the transmitted energy of the continuous metasurface.
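A quick numerical check of Eqs. (14) and (15) (ours; \(\omega\) is set to 1 since only the wavenumber ratio matters) illustrates why the refraction angle into air stays small:

```python
import numpy as np

c_w, c_a = 1500.0, 343.0
k_w, k_a = 1.0 / c_w, 1.0 / c_a        # k = omega / c, with omega = 1

# Eq. (14): refraction angle, bounded by arcsin(k_w / k_a) ~ 13 deg
theta_i = np.radians([0.0, 18.0, 36.0, 54.0, 72.0, 90.0])
theta_t = np.arcsin(k_w / k_a * np.sin(theta_i))
print(np.degrees(theta_t).round(1))    # [ 0.   4.1  7.7 10.7 12.6 13.2]

# Eq. (15): normal-direction transmission correction (for theta_i < 90 deg)
def t_normal(T, th):
    return (k_a**2 - k_w**2 * np.sin(th)**2) / (k_a**2 * np.cos(th)**2) * T

print(t_normal(1.0, np.radians(30.0)))
```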
Figure 5: The sound transmitted performance of the discrete and continuous metasurfaces across the water-air interface. Panel (a) shows the sound transmission of the discrete metasurface with different aspect ratio at the operating frequency. Panel (b) presents the variation of sound transmission for the discrete and continuous metasurfaces with frequency. Panels (c) and (d) illustrate the distribution of sound pressure of the discrete and continuous metasurfaces, respectively.

To quantitatively characterize the transmission properties, the transmitted pressure is collected on a line with length equal to that of the wave source, as marked in Figs. 5(c) and (d). After integration of the sound pressure along these two lines, the sound transmission of the metasurface can be calculated by using Eq. (7). The corresponding results are illustrated in Fig. 5(b). It is found that the result is similar to that of the unit cells in Fig. 2(b): both metasurfaces have a narrow frequency band of high transmission. We further evaluate the transmission properties of discrete metasurfaces with other aspect ratios, as shown in Fig. 5(a). It is found that all the discrete metasurfaces have relatively larger transmission compared to the continuous one. It should be noted that the continuous metasurface can reach high transmission by improving its wide-angle performance. This can be realized by requiring the UCCM to have high transmission for a relatively large range of incident angles, e.g., \(0^{\circ}\), \(20^{\circ}\) and \(35^{\circ}\), and the corresponding fitness function can be modified as \[y^{\prime}(f)=1-\alpha T(0^{\circ})-\beta T(20^{\circ})-\gamma T(35^{\circ}) \tag{16}\] where \(\alpha\)=0.5, \(\beta\)=0.4 and \(\gamma\)=0.1 are the weighting coefficients. Fig. 6(a) shows the sound transmission through the refined unit cell for incident angles between \(-90^{\circ}\) and \(90^{\circ}\). It is found that unitary transmission is obtained for incident angles from \(-33^{\circ}\) to \(33^{\circ}\).

Figure 6: The wide-angle performance of the refined UCCM. (a) The transmission of the refined UCCM for various incidences. (b) The transmitted sound field of the continuous metasurface with the refined UCCM.

The corresponding range of incident angles is larger than that in Fig. 4(c). However, the convergence of the unit cell is much slower than for the UCDM due to the strong constraints of the target, as illustrated in Fig. 2(a). The transmitted sound field of the UCCM is shown in Fig. 6(b). It is found that the energy at the transmitted side of the new continuous metasurface is more concentrated compared with Fig. 4(d). The sound transmission of the new continuous metasurface is 0.93, very close to that of the discrete metasurface; see the red point marked in Fig. 4(a). This indicates that the wide-angle performance of the unit cell gives rise to the high transmission of the metasurface.

## 5 Experimental verification

In order to verify the high transmission of the discrete and continuous metasurfaces, we fabricate samples with 20 unit cells by 3D printing, as shown in Figs. 7(a) and (b). They are stretched by 50 cm in the non-periodic direction. Fig. 7(c) shows the experimental setup for measuring the transmitted acoustic fields of the metasurfaces. A harmonic wave source with frequency from 8 kHz to 14 kHz is generated using the B&K PULSE LabShop software. It is output by the Input and Output Module (B&K Type 3160-A-042).
After being amplified by the power amplifiers (Krohn-Hite 7500 and B&K Type 2573), the source signal is played by an underwater transducer with a diameter of 10 cm placed on the bottom of the water tank (dimensions 1.6 m\(\times\)1.5 m\(\times\)0.8 m). The discrete and continuous metasurfaces are placed on a rectangular steel frame with a width of 65 cm and a length of 50 cm, respectively. The frame with the metasurface is put on the water-air interface, and the transmitted signal is received by a microphone (B&K Type 4939). The scanning area is 20 cm\(\times\)20 cm above the center of the sample.

Figure 7: Photograph of the samples of metasurfaces and the experimental setups. (a) The discrete metasurface made of 20 units with aspect ratio of 1/2. (b) The continuous metasurface made of 20 units. (c) The experimental setups.

Figure 8(a) shows the average measured transmission of the discrete and continuous metasurfaces. As a comparison, the transmission of the bare water-air interface is also measured. It is observed that both the discrete and continuous metasurfaces have relatively large transmission. The transmission of the discrete metasurface is almost 20 times larger than that of the bare interface at 11.2 kHz, and the result for the continuous metasurface is 10 times larger compared to the bare interface at 11.8 kHz.

Figure 8: Experimental results of the discrete and continuous metasurfaces. Panel (a) shows the measured sound pressure of the bare water-air interface, discrete and continuous metasurfaces. Panel (b) illustrates the simulated transmission with viscosity considered for the discrete and continuous metasurfaces. For comparison, the result for the bare water-air interface is also presented. Panels (c)-(e) present the distributions of measured sound pressure in the \(x\)-\(y\) plane for the bare water-air interface at \(f\)=10 kHz, the discrete metasurface at \(f\)=10.9 kHz and the continuous metasurface at \(f\)=11.7 kHz, respectively.

Figures 8(c)-(e) show the corresponding transmitted sound amplitude fields in the \(x\)-\(y\) plane. It can be clearly seen that the amplitude of the sound pressure for the bare water-air interface is much smaller than that for the discrete and continuous metasurfaces, and the amplitude distribution of sound for the discrete metasurface is more concentrated than that for the continuous one. These results are consistent with the simulations in Figs. 5(c) and (d). Compared to Fig. 5(b), the curve in Figure 8(a) is blunter. This might result from the viscosity of the solid and the fluid. We then use DMA to measure the viscosity of the sample, which is around 0.05E. The simulated result with viscosity included is shown in Figure 8(b). It can be observed that the sound transmission for the bare water-air interface is a constant in the considered frequency range and is much smaller compared to those of the metasurfaces at \(f\)=10 kHz. Meanwhile, the sound transmission of the discrete metasurface is about twice as large as that of the continuous metasurface at \(f\)=10 kHz. These results are in good agreement with the measured results shown in Figure 8(a).

Figure 9: The effect of random immersion depth of UCDMs on transmitted performance. Panel (a) shows the sound transmission of the UCDM with the immersion depth of \(l\)/120 and \(l\)/24. Panel (b) illustrates the transmitted sound field of the UCDM at \(f\)=10 kHz for different immersion depths. Panel (c) presents the sound transmissions of the discrete metasurfaces with/without random immersions.
However, a clear frequency shift is observed between simulation and experiment for both metasurfaces. We attribute this to the random immersion of the UCDMs and the bending of the continuous metasurface, as we explain below.

## 6 Effect of random immersion on transmission for discrete metasurface

In this part, we discuss the effect of the random immersion of UCDMs on the transmission performance of the discrete metasurface. Fig. 9(a) shows the transmission of the UCDM with immersion depths of \(l\)/120 and \(l\)/24, respectively. A right shift of the transmission peak is observed even for a small immersion, and the shift becomes clearer for a large immersion. The transmission amplitude is also decreased. This can also be observed from the pressure distributions at the transmission peaks shown in Fig. 9(b). Then we construct three discrete metasurfaces from UCDMs with random immersions, as illustrated in Fig. 9(c). The transmission peaks are shifted to larger frequencies, which is consistent with the experiment in Figure 8(a).

## 7 Effect of bending on the transmission for continuous metasurface

The sample of the continuous metasurface is bent after fabrication, restricted by the frame.

Figure 10: The effect of bending on sound transmission and frequency shift for continuous metasurface. (a) The sound transmission of the continuous metasurface, DMs and RDMs. Panels (b) and (c) are the transmitted sound fields of the DMs with the maximum deformation of \(l\)/120 at \(f\)=10.2 kHz and \(l\)/24 at \(f\)=10.8 kHz, respectively.

A two-step simulation is conducted to analyze the effect of bending on the transmission performance of the metasurface. The first step is to calculate the static deformation of the metasurface: a static load along the normal direction is applied on the middle of the metasurface with its two ends fixed. Considering the bending direction, two sets of deformed metasurfaces (DM, bent in the direction of wave propagation) and reverse deformed metasurfaces (RDM, bent in the opposite direction of wave propagation) are constructed. By changing the magnitude of the load, the maximum deformation of the DM (or RDM) is set to \(l\)/120 and \(l\)/24, respectively. Then we evaluate the sound transmission performances of these metasurfaces. The corresponding results are shown in Figure 10(a). It is found that the transmission peaks of the deformed metasurfaces shift right and the amplitude decreases: the bigger the load, the larger the deformation and the frequency shift, and the lower the transmission. Figures 10(b) and (c) show the transmitted sound fields of the DM with different deformations at the corresponding peak frequencies. Though the transmitted sound energy in Fig. 10(b) is more concentrated than that in Fig. 5(d), the transmission amplitude is smaller. It can also be observed from Figs. 10(b) and (c) that the transmitted pressure distribution for the DM with larger deformation is more spread out, giving rise to a smaller transmission.

## 8 Conclusions

In this work, we propose discrete metasurfaces for enhanced sound transmission through the water-air interface by using topology optimization. For comparison, a continuous acoustic metasurface is also designed by the same optimization. Samples of the discrete and continuous metasurfaces are then fabricated and measured. Pressure distributions of the samples are obtained from both simulations and experiments. The results show that enhanced sound transmission is obtained when the impedance matching conditions of the water, air and metasurface are satisfied.
Although enhanced transmission is observed for both the discrete and continuous metasurfaces in the simulations and experiments, the measured enhancement for the discrete metasurface is more pronounced due to the wide-angle performance of the discrete unit cell. This is also explained by the simulation when the viscosity of the sample is considered. Meanwhile, the measured transmission peaks are shifted to higher frequencies compared to the simulated results. This is explained by the random immersion of the unit cells for the discrete metasurface and the bending of the continuous metasurface. In the future, the possibility of designing uneven metasurfaces suitable for an uneven water-air interface will be explored. The present work will also be integrated with phase-modulated acoustic metasurfaces to realize abnormal wave functions through the water-air interface.

## Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant Nos. 12072223, 12122207, 12021002 and 11991032). The first author also thanks Mr. Jiaxuan Weng for the valuable discussions and constructive suggestions.
2305.01901
Few-shot Event Detection: An Empirical Study and a Unified View
Few-shot event detection (ED) has been widely studied, but this has brought noticeable discrepancies, e.g., various motivations, tasks, and experimental settings, that hinder the understanding of models and future progress. This paper presents a thorough empirical study, a unified view of ED models, and a better unified baseline. For fair evaluation, we compare 12 representative methods on three datasets, which are roughly grouped into prompt-based and prototype-based models for detailed analysis. Experiments consistently demonstrate that prompt-based methods, including ChatGPT, still significantly trail prototype-based methods in terms of overall performance. To investigate their superior performance, we break down their design elements along several dimensions and build a unified framework on prototype-based methods. Under such a unified view, each prototype-based method can be viewed as a combination of different modules from these design elements. We further combine all advantageous modules and propose a simple yet effective baseline, which outperforms existing methods by a large margin (e.g., a 2.7% F1 gain under the low-resource setting).
Yubo Ma, Zehao Wang, Yixin Cao, Aixin Sun
2023-05-03T05:31:48Z
http://arxiv.org/abs/2305.01901v2
# Few-shot Event Detection: An Empirical Study and a Unified View

###### Abstract

Few-shot event detection (ED) has been widely studied, but this has brought noticeable discrepancies, e.g., various motivations, tasks, and experimental settings, that hinder the understanding of models and future progress. This paper presents a thorough empirical study, a unified view of ED models, and a better _unified baseline_. For fair evaluation, we compare 12 representative methods on three datasets, which are roughly grouped into prompt-based and prototype-based models for detailed analysis. Experiments consistently demonstrate that prompt-based methods, including ChatGPT, still significantly trail prototype-based methods in terms of overall performance. To investigate their superior performance, we break down their design elements along several dimensions and build a unified framework on prototype-based methods. Under such a unified view, each prototype-based method can be viewed as a combination of different modules from these design elements. We further combine all advantageous modules and propose a simple yet effective _baseline_, which outperforms existing methods by a large margin (e.g., \(2.7\%\) \(F1\) gains under the _low-resource_ setting). 1

Footnote 1: Our code will be publicly available at [https://github.com/mayubo2333/fewshot_ED](https://github.com/mayubo2333/fewshot_ED).

## 1 Introduction

Event Detection (ED) is the task of identifying event triggers and types in texts. For example, given _"Cash-strapped Vivendi wants to sell Universal Studios"_, it is to classify the word _"sell"_ into a _TransferOwnership_ event. ED is a fundamental step in various tasks such as successive event-centric information extraction Huang et al. (2022); Ma et al. (2022); Chen et al. (2022), knowledge systems Li et al. (2020); Wen et al. (2021), story generation Li et al. (2022), etc. However, the annotation of event instances is costly and labor-consuming, which motivates research on improving ED with limited labeled samples, i.e., the few-shot ED task. Extensive studies have been carried out on few-shot ED. Nevertheless, there are noticeable discrepancies among existing methods from three aspects. (1) _Motivation_ (Figure 1): Some methods focus on the model's _generalization_ ability, learning to classify with only a few samples Li et al. (2022). Other methods improve _transferability_ by introducing additional data, adapting a model well-trained on a preexisting schema to a new schema using a few samples Lu et al. (2021). There are also methods considering both abilities Liu et al. (2020); Hsu et al. (2022). (2) _Task setting_: Even focusing on the same ability, methods might adopt different task settings for training and evaluation. For example, there are at least three settings for transferability: _episode learning_ (EL, Deng et al. 2020; Cong et al. 2021), _class-transfer_ (CT, Hsu et al. 2022) and _task-transfer_ (TT, Lyu et al. 2021; Lu et al. 2022). (3) _Experimental setting_: Even focusing on the same task setting, experiments may vary in sample sources (e.g., a subset of datasets, annotation guidelines, or an external corpus) and sample numbers (shot-number or sample-ratio). Table 1 provides a detailed comparison of representative methods. In this paper, we argue the importance of a unified setting for a better understanding of few-shot ED.
Figure 1: Task settings to access _Generalization_ (a) and _Transferability_ (b). Colors denote event types.

First, based on an exhaustive background investigation on ED and similar tasks (e.g., NER), we conduct **an empirical study of twelve SOTA methods under two practical settings**: the _low-resource_ setting for _generalization_ ability and the _class-transfer_ setting for _transferability_. We roughly classify the existing methods into two groups: prototype-based models, which learn event-type representations and predict by proximity measurement, and prompt-based models, which convert ED into a task with which Pretrained Language Models (PLMs) are more familiar. The second contribution is **a unified view of prototype-based methods** to investigate their superior performance. Instead of picking the best-performing method as in conventional empirical studies, we take one step further. We break down the design elements along several dimensions, e.g., the source of prototypes, the aggregation form of prototypes, etc. From this perspective, the five prototype-based methods on which we conduct experiments are instances of distinct modules from these elements. And third, through analyzing each effective design element, we propose **a simple yet effective _unified baseline_** that combines all advantageous elements of existing methods. Experiments validate an average \(2.7\%\) \(F1\) gain under the _low-resource_ setting and the best performance under the _class-transfer_ setting. Our analysis also provides many valuable insights for future research.

## 2 Preliminary

Event detection (ED) is usually formulated as either a span classification task or a sequence labeling task, depending on whether candidate event spans are provided as inputs. We brief the sequence labeling paradigm here because the two paradigms can be easily converted to each other. Given a dataset \(\mathcal{D}\) annotated with schema \(E\) (the set of event types) and a sentence \(X=[x_{1},...,x_{N}]^{T}\in\mathcal{D}\), where \(x_{i}\) is the \(i\)-th word and \(N\) the length of this sentence, ED aims to assign a label \(y_{i}\in(E\cup\{\texttt{N.A.}\})\) to each \(x_{i}\) in \(X\). Here N.A. refers to either no event or an event beyond the pre-defined types \(E\). We say that a word \(x_{i}\) triggers an event \(y_{i}\) if \(y_{i}\in E\).

### Few-shot ED task settings

We categorize few-shot ED settings into four cases: _low-resource_ (LR), _class-transfer_ (CT), _episode learning_ (EL) and _task-transfer_ (TT). The low-resource setting assesses the _generalization_ ability of few-shot ED methods, while the other three settings are for _transferability_. We adopt LR and CT in our empirical study towards practical scenarios. More details can be found in Appendix A.1.
**Low-resource setting** assumes access to a dataset \(\mathcal{D}=(\mathcal{D}_{train},\mathcal{D}_{dev},\mathcal{D}_{test})\) annotated with a label set \(E\), where \(|\mathcal{D}_{dev}|\leq|\mathcal{D}_{train}|\ll|\mathcal{D}_{test}|\). It assesses the generalization ability of models by (1) utilizing only few samples during training, and (2) evaluating on the real and rich test dataset.

\begin{table} \begin{tabular}{l|c c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{**Task setting**} & \multicolumn{3}{c}{**Experimental setting**} \\ & LR & EL & CT & TT & Dataset & Sample Number & Sample Source \\ \hline Seed-based (Bronstein et al., 2015) & & & & & ACE & 30 & Guidelines \\ MSEP (Peng et al., 2016) & & & & & ACE & & \\ ZSL (Huang et al., 2018) & & & & & ACE & 0 & Datasets \\ DMBPN (Deng et al., 2020) & & & & & FewEvent & \{5,10,15\}-shot & Datasets \\ OntoED (Deng et al., 2021) & & & & & MAVEN / FewEvent & \{0,1,5,10,15,20\}\% & Datasets \\ Zhang’s (Zhang et al., 2021) & & & & & ACE & 0 & Corpus \\ PA-CRF (Cong et al., 2021) & & & & & FewEvent & \{5,10\}-shot & Datasets \\ ProACT (Lai et al., 2021) & & & & & ACE / FewEvent / RAMS & \{5,10\}-shot & Datasets \\ CausalED (Chen et al., 2021) & & & & & ACE / MAVEN / ERE & 5-shot & Datasets \\ Yu’s (Yu et al., 2022) & & & & & ACE & 176 & Guidelines + Corpus \\ ZED (Zhang et al., 2022a) & & & & & MAVEN & 0 & Corpus \\ HCL-TAT (Zhang et al., 2022b) & & & & & FewEvent & \{5,10\}-shot & Datasets \\ KE-PN (Zhao et al., 2022) & & & & & ACE / MAVEN / FewEvent & \{1,5\}-shot & Datasets \\ \hline EERC (Liu et al., 2020) & & & & & ACE & \{0,1,5,10,20\}\% & Datasets \\ FSQA (Feng et al., 2020) & & & & & ACE & \{0,1,3,5,7,9\}-shot & Datasets \\ EDTE (Lyu et al., 2021) & & & & & ACE / ERE & 0 & - \\ Text2Event (Lu et al., 2021) & & & & & ACE / ERE & \{1,5,25\}\% & Datasets \\ UIE (Lu et al., 2022) & & & & & ACE / CASIE & \{1,5,10\}-shot/\% & Datasets \\ DEGREE (Hsu et al., 2022) & & & & & ACE / ERE & \{0,1,5,10\}-shot & Datasets \\ PILED (Li et al., 2022b) & & & & & ACE / MAVEN / FewEvent & \{5,10\}-shot & Datasets \\ \hline \hline \end{tabular} \end{table} Table 1: Noticeable discrepancies among existing few-shot ED methods. Explanations of the task settings can be found in Section 2.1; they also reflect different motivations: LR for generalization; EL, CT, and TT for transfer abilities. **Dataset** indicates the datasets on which the training and/or evaluation is conducted. **Sample Number** refers to the number of labeled samples used. **Sample Source** refers to where training samples come from. Guidelines: example sentences from annotation guidelines. Datasets: subsets of full datasets. Corpus: (unlabeled) external corpus.

**Class-transfer setting** assumes access to a source dataset \(\mathcal{D}^{(S)}\) with a preexisting schema \(E^{(S)}\) and a target dataset \(\mathcal{D}^{(T)}\) with a new schema \(E^{(T)}\). Note that \(\mathcal{D}^{(S)}\) and \(\mathcal{D}^{(T)}\), and \(E^{(S)}\) and \(E^{(T)}\), contain disjoint sentences and event types, respectively. \(\mathcal{D}^{(S)}\) contains abundant samples, while \(\mathcal{D}^{(T)}\) is the low-resource dataset described above. Models under this setting are expected to be pre-trained on \(\mathcal{D}^{(S)}\) and then further trained and evaluated on \(\mathcal{D}^{(T)}\).
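To illustrate, a minimal sketch (ours; names are illustrative) of assembling such a \(K\)-shot set is given below. It only loosely follows the spirit of the greedy sampling algorithm of Yang and Katiyar (2020) used later in Section 4.1: sentences covering the currently rarest unfilled event type are added first, so per-type counts only mildly overshoot \(K\).

```python
import random
from collections import Counter

def greedy_k_shot(sentences, K, seed=0):
    """Greedily pick sentences until every event type has >= K instances.

    `sentences` is a list of (tokens, labels) pairs, where labels holds an
    event-type name or "N.A." per token (sequence-labeling view of ED).
    """
    random.seed(seed)
    types = {y for _, labels in sentences for y in labels if y != "N.A."}
    pool = sentences[:]
    random.shuffle(pool)
    counts, picked = Counter(), []
    while pool and any(counts[t] < K for t in types):
        unfilled = {t for t in types if counts[t] < K}
        # prefer the sentence whose rarest unfilled type has the lowest count
        pool.sort(key=lambda s: min((counts[y] for y in s[1] if y in unfilled),
                                    default=float("inf")))
        sent = pool.pop(0)
        if unfilled.intersection(sent[1]):
            picked.append(sent)
            counts.update(y for y in sent[1] if y != "N.A.")
    return picked

# e.g. D_train with K_train=5, then D_dev with K_dev=2 from the remaining pool
```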
### Category of existing methods

We roughly group existing few-shot ED methods into two classes: prompt-based methods and prototype-based methods. More details are introduced in Appendix A.2. **Prompt-based methods** leverage the rich language knowledge in PLMs by converting downstream tasks to tasks with which PLMs are more familiar. Such format conversion narrows the gap between pre-training and downstream tasks and benefits knowledge induction in PLMs with limited annotations. Specifically, few-shot ED can be converted to machine reading comprehension (MRC, Du and Cardie, 2020; Liu et al., 2020; Feng et al., 2020), natural language inference (NLI, Lyu et al., 2021), conditional generation (CG, Paolini et al., 2021; Lu et al., 2021, 2022; Hsu et al., 2022), and the cloze task (Li et al., 2022). We give examples of these prompts in Table 6. **Prototype-based methods** predict an event type for each word/span mention by measuring the proximity of its representation to _prototypes_. Here we define prototypes in a _generalized_ format -- a prototype is an embedding that represents some event type. For example, Prototypical Network (ProtoNet, Snell et al., 2017) and its variants (Lai et al., 2020; Deng et al., 2020, 2021; Cong et al., 2021; Lai et al., 2021) construct prototypes via a subset of sample mentions. In addition to event mentions, a line of work leverages related knowledge to learn or enhance the prototypes' representations, including AMR graphs (Huang et al., 2018), event-event relations (Deng et al., 2021), definitions (Shen et al., 2021) and FrameNet (Zhao et al., 2022). Zhang et al. (2022) recently introduced contrastive learning (Hadsell et al., 2006) to the few-shot ED task. Such methods also determine the event type by measuring distances to other samples and aggregate these distances into an overall distance to each event type; therefore we view them as a _generalized_ format of prototype-based methods as well. For comprehensiveness, we also include competitive methods from similar tasks, _i.e.,_ Named Entity Recognition and Slot Tagging, which are highly adaptable to ED. Such expansion enriches the categorization and enables us to build a unified view in Section 3. For instance, some methods (Hou et al., 2020; Ma et al., 2022) leverage label semantics to enhance or directly construct the prototypes. Others (Das et al., 2022) leverage contrastive learning for better prototype representations.

## 3 A Prototype-based Unified View

Due to their superior performance (Sections 5 and 6), we zoom into prototype-based methods to provide a unified view towards a better understanding. We observe that they share many similar components. As shown in Table 2 and Figure 2, we decompose prototype-based methods into 5 design elements: prototype source, transfer function, distance function, aggregation form, and CRF module. This unified view enables us to compare choices in each design element directly.

Figure 2: The architectures of five existing prototype-based methods and the unified baseline. Given event mention \(x\) and event type \(y\), each sub-figure depicts how to compute the logits\((y|x)\). White circles: representation of the predicted event \(h_{x}\). Purple circles: representation of prototypes \(h_{c_{y}}\) (\(c_{y}\in\mathcal{C}_{y}\)). Yellow modules: transfer functions. Green modules: distance functions. Blue modules: aggregation form. Orange modules: CRF modules. Dashed lines in (a) and (c) represent that their CRFs are only used during inference.

By aggregating the
effective choices, we end with a _Unified Baseline_. Formally, given an event mention \(x\), prototype-based methods predict the likelihood \(p(y|x)\) from logits\((y|x)\) for each \(y\in(E\cup\{\texttt{N.A.}\})\): \[p(y|x)=\text{Softmax}_{y\sim(E\cup\{\texttt{N.A.}\})}\text{logits}(y|x)\] The general framework is as follows. Denote the PLM's output representations of the event mention \(x\) and of the data \(c_{y}\) in the prototype source \(\mathcal{C}_{y}\) as \(h_{x}\) and \(h_{c_{y}}\) respectively, where \(h\in R^{m}\) and \(m\) is the dimension of the PLM's hidden space. The first step is to convert \(h_{x}\) and \(h_{c_{y}}\) to appropriate representations via a transfer function \(f(\cdot)\). Then the methods maintain either a single prototype or multiple prototypes \(c_{y}\) for each event type, determined by the adopted aggregation form. Third, the distance between \(f(h_{x})\) and \(f(h_{\bar{c}_{y}})\) (single prototype) or the \(f(h_{c_{y}})\)'s (multiple prototypes) is computed via a distance function \(d(\cdot,\cdot)\) to obtain the proximity scores, _i.e.,_ logits\((y|x)\). Finally, an optional CRF module is used to adjust logits\((y|x)\) for the \(x\)'s in the same sentence to model their label dependencies. For inference, we adopt nearest neighbor classification by assigning the sample the nearest event type in \(\cup_{y\in(E\cup\{\texttt{N.A.}\})}\mathcal{C}_{y}\), _i.e.,_ \[\hat{y}_{x}=\operatorname*{argmin}_{y\in(E\cup\{\texttt{N.A.}\})}\min_{c_{y}\in\mathcal{C}_{y}}d(f(h_{x}),f(h_{c_{y}}))\] Next, we detail the five design elements: **Prototype source \(\mathcal{C}_{y}\)** (purple circles in Figure 2, same below) indicates the set of data / information used for constructing the prototypes. There are mainly two types of sources: (1) _event mentions_ (purple circles without words): ProtoNet and its variants in Figure 2(b),(c),(d) additionally split a support set \(\mathcal{S}_{y}\) from the training data as the prototype source, while contrastive learning methods in Figure 2(a) view every annotated mention as the source (except the query one). (2) _Label semantics_ (purple ellipses with words): Sometimes, the label name \(l_{y}\) is utilized as the source to enhance or directly construct the prototypes. For example, FSLS in Figure 2(e) views the text representations of type names as prototypes, while L-TapNet-CDT in Figure 2(c) utilizes both kinds of prototype sources. **Transfer function \(f:R^{m}\to R^{n}\)** (yellow modules) transfers PLM outputs into the distance space for prototype proximity measurement. Widely used transfer functions include normalization in Figure 2(b), down-projection in Figure 2(c), reparameterization in Figure 2(a), and the identity function. **Distance function \(d:R^{n}\times R^{n}\to R_{+}\)** (green modules) measures the distance of two transferred representations within the same embedded space. Common distance functions are the Euclidean distance in Figure 2(d) and the negative cosine similarity in Figure 2(b),(c),(e). **Aggregation form** (blue modules) describes how to compute logits\((y|x)\) based on a single or multiple prototype sources. Aggregation may happen at three levels. (1) _feature-level_: ProtoNet and its variants in Figure 2(b),(c),(d) aim to construct a _single_ prototype \(h_{\bar{c}_{y}}\) for each event type \(y\) by merging various features, which eases the calculation logits\((y|x)=-d(f(h_{x}),f(h_{\bar{c}_{y}}))\).
(2) _score-level_: CONTAINER in Figure 2(a) views each datum as a prototype (so there are _multiple_ prototypes for each type \(y\)) and computes the distance \(d(f(h_{x}),f(h_{c_{y}}))\) for each \(c_{y}\in\mathcal{C}_{y}\). These distances are then merged to obtain logits\((y|x)\). (3) _loss-level_: Such a form has multiple parallel branches \(b\) for each mention \(x\). Each branch has its own logits\({}^{(b)}(y|x)\) and is optimized with a different loss component during training. Thus it could be viewed as a multi-task learning format. See the _unified baseline_ in Figure 2(f).

\begin{table} \begin{tabular}{l|c c c c c} \hline \hline Method & Prototype \(\mathcal{C}_{y}\) & Aggregation & Distance \(d(u,v)\) & Transfer \(f(h)\) & CRF Module \\ \hline ProtoNet (Snell et al., 2017) & Event mentions & feature & \(||u-v||_{2}\) & \(h\) & \(-\) \\ L-TapNet-CDT (Hou et al., 2020) & Both & feature & \(-u^{T}v/\tau\) & \(\mathcal{M}\frac{h}{||h||}\) & CRF-Inference \\ PA-CRF (Cong et al., 2021) & Event mentions & feature & \(-u^{T}v\) & \(\frac{h}{||h||}\) & CRF-PA \\ CONTAINER (Das et al., 2022) & Event mentions & score & JSD\((u||v)\) & \(\mathcal{N}(\mu(h),\Sigma(h))\) & CRF-Inference \\ FSLS (Ma et al., 2022) & Label name & \(-\) & \(-u^{T}v\) & \(h\) & \(-\) \\ \hline Unified Baseline (Ours) & Both & score + loss & \(-u^{T}v/\tau\) & \(\frac{h}{||h||}\) & \(-\) \\ \hline \hline \end{tabular} \end{table} Table 2: Decomposing five prototype-based methods and the _unified baseline_ along the design elements. “Both” in the prototype column means that both event mentions and label names for \(y\) are prototype sources. JSD: Jensen–Shannon divergence. \(\mathcal{M}\): Projection matrix in TapNet. \(\mathcal{N}(\mu(h),\Sigma(h))\): Gaussian distribution with mean \(\mu(h)\) and covariance matrix \(\Sigma(h)\).

**CRF module** (orange modules) adjusts predictions within the same sentence by explicitly considering the label dependencies between sequential inputs. The vanilla CRF (Lafferty et al., 2001) and its variants in Figure 2(a),(b),(c) impose additional constraints in few-shot learning.

## 4 Experimental setup

### Few-shot datasets and Evaluation

**Dataset source**. We utilize ACE05 (Doddington et al., 2004), MAVEN (Wang et al., 2020) and ERE (Song et al., 2015) to construct few-shot ED datasets in this empirical study. Detailed statistics about these three datasets are in Appendix B.1. **Low-resource setting**. We adopt a \(K\)-shot sampling strategy to construct few-shot datasets for the low-resource setting, i.e., sampling \(K_{train}\) and \(K_{dev}\) samples per event type to construct the train and dev sets, respectively.2 We set three \((K_{train},K_{dev})\) pairs in our evaluation: (2, 1), (5, 2) and (10, 2). We follow Yang and Katiyar (2020) in taking a greedy sampling algorithm to approximately select \(K\) samples for each event type. See Appendix B.2 for details and the statistics of the sampled few-shot datasets. We inherit the original test set as \(\mathcal{D}_{test}\). Footnote 2: Recent systematic research on few-shot NLP tasks (Perez et al., 2021) opposes introducing an additional dev set for few-shot learning. We agree with their opinion but choose to keep a **very small** dev set mainly for feasibility considerations. Given the number of experiments in our empirical study, it is infeasible to conduct cross-validation on every single train set for hyperparameter search. **Class-transfer setting**.
The few-shot datasets are curated in two sub-steps: (1) dividing both the event types and the sentences in the original dataset into two disjoint parts, named the _source dataset_ and the _target dataset pool_, respectively; (2) sampling few-shot samples from the target dataset pool to construct the target dataset. The same sampling algorithm as in the _low-resource_ setting is used. Then we have the source dataset and the sampled target dataset. See Appendix B.2 for details and the statistics of the sampled few-shot datasets. **Evaluation Metric** We use the micro-\(F1\) score as the evaluation metric. To reduce random fluctuation, the reported values for each setting are the averaged score and the sample standard deviation of the results w.r.t. 10 sampled few-shot datasets.

### Evaluated methods

We evaluate 12 representative methods, including vanilla fine-tuning, in-context learning, 5 prompt-based and 5 prototype-based methods. These methods are detailed in Appendix B.3. **Fine-tuning** To validate the effectiveness of few-shot methods, we fine-tune a supervised classifier for comparison as a trivial baseline. **In-context learning** To validate that few-shot ED tasks are still not well solved in the era of Large Language Models (LLMs), we design a baseline instructing LLMs to detect event triggers by means of in-context learning (ICL). **Prompt-based** (1) _EEQA_ (QA-based, Du and Cardie 2020), (2) _ETE_ (NLI-based, Lyu et al. 2021), (3) _PTE_ (cloze task, Schick and Schutze 2021), (4) _UIE_ (generation, Lu et al. 2022) and (5) _DEGREE_ (generation, Hsu et al. 2022). **Prototype-based** (1) _ProtoNet_ (Snell et al., 2017), (2) _L-TapNet-CDT_ (Hou et al., 2020), (3) _PA-CRF_ (Cong et al., 2021), (4) _CONTAINER_ (Das et al., 2022) and (5) _FSLS_ (Ma et al., 2022). See Table 2 and Figure 2 for more details.

### Implementation details

We unify the PLMs in each method as much as possible for a fair comparison in our empirical study. Specifically, we use RoBERTa-base (Liu et al., 2019) for all prototype-based methods and the three non-generation prompt-based methods. However, we keep the method's original PLM for the two prompt-based methods with a generation prompt, UIE (T5-base, Raffel et al. 2020) and DEGREE (BART-large, Lewis et al. 2020), since we observe their performance collapses with smaller PLMs. Regarding the ICL method, we use ChatGPT (gpt-3.5-turbo-0301) as the language model. See more details in Appendix B.4.

## 5 Results: Low-resource Learning

### Overall comparison

We first overview the results of the 12 methods under the low-resource setting in Table 3. **Fine-tuning**. Despite its simplicity, fine-tuning achieves acceptable performance. In particular, it is even comparable to the strongest existing methods on the MAVEN dataset, being only \(1.1\%\) and \(0.5\%\) lower under the 5-shot and 10-shot settings. One possible reason that fine-tuning is good on MAVEN is that MAVEN has 168 event types, many more than the other datasets. When the absolute number of samples is relatively large, PLMs might capture implicit interactions among different event types, even though the samples per event type are limited. When the sample number is scarce, however, fine-tuning is much poorer than existing competitive methods (see ACE05). Thus, we validate the necessity and progress of existing few-shot methods. **In-context learning.** We find the performance of ICL-based methods lags far behind that of tuning-required methods, though the backbone of the ICL approach (ChatGPT) is much larger than the other PLMs (<1B).
**Evaluation Metric**. We use the micro-\(F1\) score as the evaluation metric. To reduce random fluctuation, the reported values for each setting are the average score and sample standard deviation over 10 sampled few-shot datasets. ### Evaluated methods We evaluate 12 representative methods, including vanilla fine-tuning, in-context learning, 5 prompt-based and 5 prototype-based methods. These methods are detailed in Appendix B.3. **Fine-tuning**. To validate the effectiveness of few-shot methods, we fine-tune a supervised classifier for comparison as a trivial baseline. **In-context learning**. To verify that few-shot ED tasks are still not well solved in the era of Large Language Models (LLMs), we design a baseline that instructs LLMs to detect event triggers by means of in-context learning (ICL). **Prompt-based**. (1) _EEQA_ (QA-based, Du and Cardie 2020), (2) _ETE_ (NLI-based, Lyu et al. 2021), (3) _PTE_ (cloze task, Schick and Schütze 2021), (4) _UIE_ (generation, Lu et al. 2022) and (5) _DEGREE_ (generation, Hsu et al. 2022). **Prototype-based**. (1) _ProtoNet_ (Snell et al., 2017), (2) _L-TapNet-CDT_ (Hou et al., 2020), (3) _PA-CRF_ (Cong et al., 2021), (4) _CONTAINER_ (Das et al., 2022) and (5) _FSLS_ (Ma et al., 2022). See Table 2 and Figure 2 for more details. ### Implementation details We unify the PLMs in each method as much as possible for a fair comparison in our empirical study. Specifically, we use RoBERTa-base (Liu et al., 2019) for all prototype-based methods and the three non-generation prompt-based methods. However, we keep the method's original PLM for the two prompt-based methods with generation-style prompts, UIE (T5-base, Raffel et al. 2020) and DEGREE (BART-large, Lewis et al. 2020), as we observe their performance collapses with smaller PLMs. Regarding the ICL method, we use ChatGPT (gpt-3.5-turbo-0301) as the language model. See more details in Appendix B.4. ## 5 Results: Low-resource Learning ### Overall comparison We first overview the results of the 12 methods under the low-resource setting in Table 3. **Fine-tuning**. Despite its simplicity, fine-tuning achieves acceptable performance. In particular, it is even comparable to the strongest existing methods on the MAVEN dataset, trailing by only \(1.1\%\) and \(0.5\%\) under the 5-shot and 10-shot settings. One possible reason fine-tuning works well on MAVEN is that MAVEN has 168 event types, far more than the other datasets. When the absolute number of samples is relatively large, PLMs might capture implicit interactions among different event types, even though the samples per event type are limited. When the sample number is scarce, however, fine-tuning is much poorer than existing competitive methods (see ACE05). This validates the necessity of, and the progress made by, existing few-shot methods. **In-context learning.** We find the performance of ICL-based methods lags far behind that of tuning-required methods, even though the backbone of the ICL approach (ChatGPT) is much larger than the other PLMs (<1B). A series of recent works (Ma et al., 2023; Gao et al., 2023; Zhan et al., 2023) observe similar results to ours.3 Thus we confirm that few-shot ED tasks cannot be solved smoothly by cutting-edge LLMs and deserve further exploration. Footnote 3: We refer readers to Ma et al. (2023) for a more detailed discussion of why ICL approaches stumble on few-shot ED tasks. **Prompt-based methods**. Prompt-based methods deliver much poorer results than expected, even compared to fine-tuning, especially when samples are extremely scarce. This shows that designing effective prompts for ED tasks with very limited annotations is still challenging, or even impossible. We speculate this is due to the natural gap between ED tasks and the pre-training tasks of PLMs. Among prompt-based methods, PTE and DEGREE achieve relatively robust performance under all settings. DEGREE is advantageous when the sample size is small, but it cannot handle a dataset with many event types, like MAVEN, well. When sample sizes are relatively large, EEQA shows competitive performance as well. ### Prototype-based methods Since prototype-based methods achieve overall better results, we zoom into their design elements to search for effective choices based on the unified view. **Transfer function, Distance function, and CRF.** We compare combinations of transfer and distance functions and four variants of CRF modules in Appendices C.1 and C.2. We make two findings: (1) A scaled coefficient in the distance function achieves better performance together with the normalization transfer function. (2) There is no significant difference between models with or without CRF modules. Based on these findings, we observe a significant improvement in five existing methods by simply substituting their \(d\) and \(f\) with more appropriate choices; see Figure 3 and Appendix C.1. We use these new transfer and distance functions in the further analysis and discussion. 
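To make these substitutions concrete, here is a minimal PyTorch sketch of the adjusted design choices plugged into the generic prototype template \(\text{logits}(y|x)=-d(f(h_{x}),f(h_{c_{y}}))\) of Section 3. The temperature value, the tensor shapes, and the one-prototype-per-type simplification are illustrative assumptions, not the exact experimental configuration:

```python
import torch
import torch.nn.functional as F

def transfer(h):
    """Normalization transfer f(h) = h / ||h||_2."""
    return F.normalize(h, dim=-1)

def scaled_distance(u, v, tau=0.1):
    """Scaled cosine distance d_tau(u, v) = -u^T v / tau."""
    return -(u @ v.T) / tau

def prototype_logits(h_mentions, h_prototypes, tau=0.1):
    """logits(y|x) = -d_tau(f(h_x), f(h_{c_y})), one prototype per type y."""
    u = transfer(h_mentions)            # (num_mentions, dim)
    v = transfer(h_prototypes)          # (num_types, dim)
    return -scaled_distance(u, v, tau)  # (num_mentions, num_types)

# Example: 4 mentions, 3 event types, 8-dimensional features.
logits = prototype_logits(torch.randn(4, 8), torch.randn(3, 8))
```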
**Prototype Source.** We explore whether label semantics and event mentions are complementary prototype sources, i.e., whether utilizing both achieves better performance than either one. \begin{table} \begin{tabular}{l l|c c c|c c c|c c c} \hline \hline \multicolumn{2}{c|}{**Method**} & \multicolumn{3}{c|}{**ACE05**} & \multicolumn{3}{c|}{**MAVEN**} & \multicolumn{3}{c}{**ERE**} \\ & & 2-shot & 5-shot & 10-shot & 2-shot & 5-shot & 10-shot & 2-shot & 5-shot & 10-shot \\ \hline \multirow{2}{*}{Baselines} & _Fine-tuning_ & \(33.3_{(4.4)}\) & \(42.5_{(4.6)}\) & \(48.2_{(1.5)}\) & \(40.8_{(4.7)}\) & \(52.1_{(0.7)}\) & \(55.7_{(0.2)}\) & \(32.9_{(2.1)}\) & \(39.8_{(2.9)}\) & \(43.6_{(1.7)}\) \\ & _In-context Learning_ & \(38.9_{(3.0)}\) & \(34.3_{(1.2)}\) & \(36.7_{(0.8)}\) & \(22.1_{(1.0)}\) & \(22.7_{(0.3)}\) & \(23.9_{(0.7)}\) & \(24.2_{(3.3)}\) & \(26.0_{(0.7)}\) & \(25.5_{(1.7)}\) \\ \hline \multirow{5}{*}{Prompt} & EEQA & \(24.1_{(12.2)}\) & \(43.1_{(2.7)}\) & \(48.3_{(2.4)}\) & \(33.4_{(9.2)}\) & \(48.1_{(0.9)}\) & \(52.5_{(0.5)}\) & \(13.7_{(8.6)}\) & \(34.4_{(1.7)}\) & \(39.8_{(2.4)}\) \\ & ETE & \(15.7_{(0.6)}\) & \(19.1_{(0.3)}\) & \(21.4_{(0.2)}\) & \(28.9_{(4.3)}\) & \(30.6_{(1.3)}\) & \(32.5_{(1.1)}\) & \(10.6_{(2.3)}\) & \(12.8_{(2.2)}\) & \(13.7_{(2.8)}\) \\ & PTE & \(38.4_{(4.2)}\) & \(26.7_{(2.7)}\) & \(49.8_{(1.1)}\) & \(41.3_{(4.4)}\) & \(46.0_{(0.6)}\) & \(49.5_{(0.6)}\) & \(33.4_{(2.8)}\) & \(36.9_{(1.3)}\) & \(37.0_{(1.8)}\) \\ & UIE & \(29.3_{(2.9)}\) & \(38.3_{(2.4)}\) & \(43.4_{(3.5)}\) & \(33.7_{(1.4)}\) & \(44.4_{(0.3)}\) & \(50.5_{(0.5)}\) & \(19.7_{(1.5)}\) & \(30.8_{(1.9)}\) & \(34.1_{(1.6)}\) \\ & DEGREE & \(40.0_{(2.9)}\) & \(45.5_{(3.2)}\) & \(48.5_{(2.1)}\) & \(43.3_{(1.0)}\) & \(43.4_{(5.9)}\) & \(45.5_{(4.3)}\) & \(31.3_{(3.1)}\) & \(36.0_{(4.6)}\) & \(40.7_{(2.2)}\) \\ \hline \multirow{5}{*}{Prototype} & ProtoNet & \(38.3_{(5.0)}\) & \(47.2_{(3.9)}\) & \(52.3_{(2.4)}\) & \(44.5_{(2.2)}\) & \(51.7_{(0.6)}\) & \(55.4_{(0.2)}\) & \(31.6_{(2.7)}\) & \(39.7_{(2.4)}\) & \(44.3_{(2.3)}\) \\ & PA-CRF & \(34.9_{(7.2)}\) & \(48.1_{(3.9)}\) & \(51.7_{(2.6)}\) & \(44.8_{(2.2)}\) & \(51.8_{(1.0)}\) & \(55.3_{(0.4)}\) & \(30.6_{(2.8)}\) & \(38.0_{(3.9)}\) & \(40.4_{(2.0)}\) \\ & L-TapNet-CDT & \(43.2_{(3.8)}\) & \(49.8_{(2.9)}\) & \(53.5_{(3.4)}\) & \(48.6_{(1.2)}\) & \(53.2_{(0.4)}\) & \(56.1_{(0.9)}\) & \(35.6_{(2.6)}\) & \(42.7_{(1.7)}\) & \(45.1_{(3.2)}\) \\ & CONTAINER & \(40.1_{(3.8)}\) & \(47.7_{(3.3)}\) & \(50.1_{(1.8)}\) & \(44.2_{(1.4)}\) & \(50.8_{(0.9)}\) & \(52.9_{(0.3)}\) & \(34.4_{(3.6)}\) & \(39.3_{(1.9)}\) & \(44.5_{(2.3)}\) \\ & FSLS & \(39.2_{(3.4)}\) & \(47.5_{(3.2)}\) & \(51.9_{(1.7)}\) & \(46.7_{(1.2)}\) & \(51.5_{(0.5)}\) & \(56.2_{(0.2)}\) & \(34.5_{(3.1)}\) & \(39.8_{(2.5)}\) & \(44.0_{(2.0)}\) \\ \hline \multicolumn{2}{c|}{Unified Baseline} & **46.0**\((4.6)\) & **54.4**\((2.6)\) & **56.7**\((1.5)\) & **49.5**\((1.7)\) & **54.7**\((0.8)\) & **57.8**\((1.2)\) & **38.8**\((2.4)\) & **45.5**\((2.8)\) & **48.4**\((2.6)\) \\ \hline \hline \end{tabular} \end{table} Table 3: Overall results of the _fine-tuning_ method, 10 existing few-shot ED methods, and the _unified baseline_ under the low-resource setting. The best results are in boldface and the second best are underlined. The results are averaged over 10 repeated experiments, and sample standard deviations are in round brackets. The standard deviations are derived from different **sampled few-shot datasets** instead of **random seeds**. Thus, high standard deviation values do not imply that there is no significant difference among these methods. 
Figure 3: Results of existing methods _before_ (dashed lines) and _after_ (solid lines) the adjustment that substitutes their transfer and distance functions with appropriate ones. See full results in Table 8. We choose ProtoNet and FSLS as base models, which contain only a single kind of prototype source (mentions or labels). Then we combine the two models using the three aggregation forms mentioned in Section 3 and show their results in Figure 4. Observe that: (1) leveraging label semantics and mentions as prototype sources simultaneously improves performance under almost all settings, and (2) merging the two kinds of sources at the loss level is the best choice among the three aggregation alternatives. **Contrastive or Prototypical Learning**. Next, we investigate the effectiveness of contrastive learning (CL, see CONTAINER) and prototypical learning (PL, see ProtoNet and its variants) for event mentions. We compare three label-enhanced (since we have validated the benefits of label semantics) methods aggregating event mentions with different approaches. (1) _Ll-ProtoNet_: the strongest method utilizing PL in the last part. (2) _Ll-CONTAINER_: the method utilizing in-batch CL, as CONTAINER does. (3) _Ll-MoCo_: the method utilizing CL with the MoCo setting (He et al., 2020). The in-batch CL and MoCo CL are detailed in Appendix C.4. Figure 5 suggests CL-based methods outperform Ll-ProtoNet. There are two possible reasons: (1) CL has higher sample efficiency, since every two samples interact during training. PL, however, further splits samples into support and query sets during training; samples within the same set do not interact with each other. (2) CL adopts score-level aggregation while PL adopts feature-level aggregation. We find the former also slightly outperforms the latter in Figure 4. We also observe that MoCo CL usually performs better than in-batch CL when there are complicated event types (see MAVEN), or when the sample number is relatively large (see ACE 10-shot). We provide a more detailed explanation in Appendix C.4. ### The unified baseline Here is a summary of the findings: (1) Scaled Euclidean distance or cosine similarity as the distance measure, together with a normalization transfer, benefits existing methods. (2) CRF modules show no improvement in performance. (3) Label semantics and event mentions are complementary prototype sources, and aggregating them at the loss level is the best choice. (4) As for the branch of event mentions, CL is more advantageous than PL for few-shot ED tasks. (5) MoCo CL performs better when there is a good number of sentences; otherwise, in-batch CL is better. Based on these findings, we develop a simple but effective _unified baseline_ as follows. We utilize both label semantics and event mentions as prototype sources and aggregate the two types of sources at the loss level. Specifically, we assign two branches, with their own losses, for label semantics and event mentions, respectively. Both branches adopt the scaled cosine similarity \(d_{\tau}(u,v)=-\frac{u^{T}v}{\tau}\) as the distance measure and the normalization \(f(h)=h/\|h\|_{2}\) as the transfer function. We do not add CRF modules. For the label-semantics branch, we follow FSLS and set the embeddings of event names as prototypes. Here \(h_{x}\) and \(h_{e_{y}}\) denote the PLM representations of event mention \(x\) and label name \(e_{y}\), respectively. 
\[e_{y}=\text{Event\_name}(y)\] \[\text{logits}^{(l)}(y|x)=-d_{\tau}(f(h_{x}),f(h_{e_{y}}))\] For the event-mention branch, we adopt CL, which aggregates prototype sources (event mentions) at the score level. If the total number of sentences in the train set is smaller than 128, we take the in-batch CL strategy (as in CONTAINER): \[\text{logits}^{(m)}(y|x)=\sum_{x^{\prime}\in\mathcal{S}_{y}(x)}\frac{-d(f(h_{x}),f(h_{x^{\prime}}))}{|\mathcal{S}_{y}(x)|}\] \(\mathcal{S}_{y}(x)=\{x^{\prime}|(x^{\prime},y^{\prime})\in D,y^{\prime}=y,x^{\prime}\neq x\}\) is the set of all other mentions with the same label. Figure 4: Results of three approaches aggregating label semantics and event mentions on MAVEN and ERE few-shot datasets. **Lf**: feature-level. **Ls**: score-level. **Ll**: loss-level. See full results in Table 9. Figure 5: Results of (label-enhanced) PL and CL methods on ACE05 and MAVEN few-shot datasets. See full results on three datasets in Table 10. If the total number of sentences in the train set is larger than 128, we instead take MoCo CL, maintaining a queue for \(\mathcal{S}_{y}(x)\) and a momentum encoder. We then calculate the losses of these two branches and merge them for joint optimization: \[p^{(l/m)}(y|x)=\text{Softmax}_{y}[\text{logits}^{(l/m)}(y|x)]\] \[L^{(l/m)}=-\sum_{(x,y)}\log p^{(l/m)}(y|x)\] \[L=L^{(l)}+L^{(m)}\] The diagram of the _unified baseline_ is illustrated in Figure 2(f) and its performance is shown in Table 3. Clearly, the _unified baseline_ significantly outperforms all existing methods, with 2.7\(\%\) \(F1\) gains on average, under all low-resource settings.
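For concreteness, a compact PyTorch sketch of the two branches and their joint loss follows. It assumes the in-batch CL variant (small train sets), elides the PLM encoder and the MoCo queue, and uses illustrative tensor names; it sketches the recipe above rather than reproducing our exact training code:

```python
import torch
import torch.nn.functional as F

def unified_baseline_loss(h_x, h_label, labels, tau=0.1):
    """L = L^(l) + L^(m): label-semantics branch plus mention (CL) branch.

    h_x:     (B, dim) PLM representations of the B mentions in the batch
    h_label: (T, dim) PLM representations of the T event-type names
    labels:  (B,)     gold event-type indices
    """
    fx, fl = F.normalize(h_x, dim=-1), F.normalize(h_label, dim=-1)
    num_types = h_label.size(0)
    # Label branch: logits^(l)(y|x) = f(h_x)^T f(h_{e_y}) / tau.
    logits_l = fx @ fl.T / tau
    # Mention branch: logits^(m)(y|x) averages the similarity between x and
    # S_y(x), the other in-batch mentions carrying label y.
    sims = fx @ fx.T / tau
    sims = sims.masked_fill(torch.eye(len(labels), dtype=torch.bool), 0.0)
    onehot = F.one_hot(labels, num_types).float()
    counts = (onehot.sum(0, keepdim=True) - onehot).clamp(min=1)
    logits_m = (sims @ onehot) / counts
    # Joint optimization: L = L^(l) + L^(m).
    return F.cross_entropy(logits_l, labels) + F.cross_entropy(logits_m, labels)

# Example with random features: 6 mentions, 3 event types, 16-dim vectors.
loss = unified_baseline_loss(torch.randn(6, 16), torch.randn(3, 16),
                             torch.tensor([0, 1, 2, 0, 1, 2]))
```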
## 6 Results: Class-transfer Learning In this section, we evaluate existing methods and the _unified baseline_ under the class-transfer setting. Here we do not consider in-context learning, because previous experiments show it still lags far behind both prompt- and prototype-based methods. ### Prompt-based methods We first focus on the 4 existing prompt-based methods and explore whether they can smoothly transfer event knowledge from a preexisting (source) schema to a new (target) schema. We show results in Figure 6 and Appendix D.1. The findings are summarized as follows. (1) The transfer of knowledge from source event types to target event types facilitates model prediction under most scenarios. This verifies that an appropriate prompt usually helps induce the knowledge learned in PLMs. (2) However, such improvement gradually fades as the number of samples from either the source or the target schema increases. For example, the 5-shot vs. 10-shot performance of PTE and UIE is highly comparable. We speculate these prompts act more like a catalyst: they mainly teach the model how to induce knowledge from the PLMs themselves rather than learn new knowledge from samples. Thus the performance is at a standstill once the sample number exceeds some threshold. (3) Overall, the performance of prompt-based methods remains inferior to prototype-based methods in the class-transfer setting (see black lines in Figure 6). Since similar results are observed in low-resource settings as well, we conclude that prototype-based methods are better few-shot ED task solvers. ### Prototype-based methods We further explore the transfer ability of existing prototype-based methods and the _unified baseline_.4 Thanks to the unified view, we conduct a more thorough experiment that enumerates all possible combinations of models used in the source and target domains, to assess whether generalization ability affects transferability. That is, the PLM parameters are shared from the source to the target model. We show results in Figure 7 and Appendix D.2. Footnote 4: Transfer and distance functions in all methods are substituted with appropriate ones, and CRF modules are removed. _1. Is transfer learning effective for prototype-based methods?_ It depends on the dataset (compare the first row with the other rows in each column). For the ACE05 and MAVEN datasets, the overall answer is yes. Contrary to our expectation, transfer learning affects most target models on the ERE dataset negatively, especially in the 2- and 5-shot settings. _2. Do prototype-based methods perform better than simple fine-tuning?_ It depends on whether _fine-tuning_ serves as the source or the target model. When _fine-tuning_ is the source model (row 2), it sometimes achieves comparable or even better performance than the prototype-based methods (last 4 rows). When _fine-tuning_ is the target model (column 1), however, performance drops significantly. Thus, we speculate that powerful prototype-based methods are more necessary in the target domain than in the source domain. Figure 6: Class-transfer results of prompt-based methods. We plot _fine-tuning_ (red dashed lines), and the best and second-best prototype-based methods (black solid/dashed lines) for comparison. See full results in Table 11. _3. Is the choice of prototype-based methods important?_ Yes. When we select inappropriate prototype-based methods, they can achieve worse performance than simple fine-tuning, and sometimes even worse than models without class transfer. For example, CONTAINER and L-TapNet are inappropriate source models for the ACE05 dataset. _4. Do the same source and target models benefit the event-related knowledge transfer?_ No. The figures show the best model combinations often deviate from the diagonals. This indicates that different source and target models sometimes achieve better results. _5. Is there a source-target combination performing well on all settings?_ Strictly speaking, the answer is no. Nevertheless, we find that adopting FSLS as the source model and our _unified baseline_ as the target model is more likely to achieve competitive (best or second-best) performance among all alternatives. This indicates that (1) the quality of different combinations shows a certain **tendency**, though no consistent conclusion can be drawn; and (2) a model with a moderate inductive bias (like FSLS) might be better for the source dataset with abundant samples, while our _unified baseline_ plays its role during the target stage with limited samples. ## 7 Conclusion We have conducted a comprehensive empirical study comparing 12 representative methods under unified _low-resource_ and _class-transfer_ settings. For systematic analysis, we proposed a unified framework of promising prototype-based methods. Based on it, we presented a simple and effective _baseline_ that outperforms all existing methods significantly under the _low-resource_ setting, and is an ideal choice as the target model under the _class-transfer_ setting. In the future, we aim to explore how to leverage unlabeled corpora for few-shot ED tasks, via, e.g., data augmentation, weakly-supervised learning, and self-training. ## Acknowledgement This study is supported under the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant, as well as cash and in-kind contributions from the industry partner(s). 
Figure 7: Class-transfer results of the _fine-tuning_ method and four prototype-based methods on three datasets. For each matrix, rows and columns represent the source and target models, respectively. For example, the value in the top-left corner of each matrix is the performance when directly fine-tuning a model on the target dataset (source: N.A. / target: Fine-tuning). Each value is the result averaged over 10 repeated experiments. See full results in Table 12. ### Limitations We compare 12 representative methods, present a _unified view_ of existing prototype-based methods, and propose a competitive _unified baseline_ by combining the advantageous modules of these methods. We test all methods, including the unified baseline, on three commonly-used English datasets using various experimental settings and achieve consistent results. However, we acknowledge that our experiments may not be fully representative in terms of language, domain, schema type, and extent of data scarcity. Therefore, for future work, we aim to conduct our empirical studies on more diverse event-detection (ED) datasets. We are fortunate to witness the rapid development of Large Language Models (LLMs; Brown et al., 2020; Ouyang et al., 2022; Chung et al., 2022) in recent times. In our work, we set in-context learning as a baseline and evaluate the performance of LLMs on few-shot ED tasks. We find current LLMs still face challenges in dealing with Information Extraction (IE) tasks that require structured outputs (Qin et al., 2023; Josifoski et al., 2023). However, we acknowledge the ICL approach adopted here is relatively simple. We did not search exhaustively for the optimal prompt format, demonstration selection strategy, etc., to reach the upper bound of LLMs' performance. We view how to leverage the power of LLMs on ED tasks as an open problem and leave it for future work. In this work, we focus more on the model aspect of few-shot ED tasks rather than the data aspect. In other words, we assume access to, and only to, a small set of labeled instances. In the future, we plan to explore how to utilize annotation guidelines, unlabeled corpora and external structured knowledge to improve few-shot ED tasks.
2304.07081
Chop Chop: Byzantine Atomic Broadcast to the Network Limit
At the heart of state machine replication, the celebrated technique enabling decentralized and secure universal computation, lies Atomic Broadcast, a fundamental communication primitive that orders, authenticates, and deduplicates messages. This paper presents Chop Chop, a Byzantine Atomic Broadcast system that uses a novel authenticated memory pool to amortize the cost of ordering, authenticating and deduplicating messages, achieving "line rate" (i.e., closely matching the complexity of a protocol that does not ensure any ordering, authentication or Byzantine resilience) even when processing messages as small as 8 bytes. Chop Chop attains this performance by means of a new form of batching we call distillation. A distilled batch is a set of messages that are fast to authenticate, deduplicate, and order. Batches are distilled using a novel interactive protocol involving brokers, an untrusted layer of facilitating processes between clients and servers. In a geo-distributed deployment of 64 medium-sized servers, Chop Chop processes 43,600,000 messages per second with an average latency of 3.6 seconds. Under the same conditions, state-of-the-art alternatives offer two orders of magnitude less throughput for the same latency. We showcase three simple Chop Chop applications: a Payment system, an Auction house and a "Pixel war" game, respectively achieving 32, 2.3 and 35 million operations per second.
Martina Camaioni, Rachid Guerraoui, Matteo Monti, Pierre-Louis Roman, Manuel Vidigueira, Gauthier Voron
2023-04-14T12:09:06Z
http://arxiv.org/abs/2304.07081v2
# Chop Chop: Byzantine Atomic Broadcast to the Network Limit ###### Abstract At the heart of _state machine replication_, the celebrated technique enabling decentralized and secure universal computation, lies Atomic Broadcast, a fundamental communication primitive that orders, authenticates, and deduplicates messages. This paper presents Chop Chop, a Byzantine Atomic Broadcast system that amortizes the cost of ordering, authenticating and deduplicating messages, achieving "line rate" (i.e., closely matching the complexity of a protocol that does not ensure any ordering, authentication or Byzantine resilience) even when processing messages as small as 8 bytes. Chop Chop attains this performance by means of a new form of batching we call _distillation_. A distilled batch is a set of messages that are fast to authenticate and deduplicate, as well as order. Batches are distilled using a novel interactive mechanism involving _brokers_, an untrusted layer of facilitating processes between clients and servers. In a geo-distributed deployment of 64 medium-sized servers, with clients situated cross-cloud, Chop Chop processes 43,600,000 messages per second with an average latency of 3.6 seconds. Under the same conditions, state-of-the-art alternatives offer two orders of magnitude less throughput for the same latency. We showcase three simple Chop Chop applications: a Payment system, an Auction house and a "Pixel war" game, respectively achieving 32, 2.3 and 35 million operations per second. ## 1 Introduction Will the Internet eventually achieve its foundational vision of a highly-available, decentralized, secure, and universal computer shared by all? Theory says yes: state machine replication (SMR) [26, 64] enables decentralized universal computation in the face of arbitrary failures [48, 66]. In practice, however, the road ahead is challenging. At the heart of SMR lies Atomic Broadcast, a powerful consensus-equivalent primitive that comes with fundamental bounds [27] and constraints [30], hindering its real-world performance despite decades of extensive research [5, 14, 17, 19, 51, 77, 78, 80] and attention from industry, where SMR powers a myriad of blockchains and ledgers [4, 33, 44, 46, 50, 70, 71, 72, 75, 76]. When deployed globally, seminal Atomic Broadcast implementations, such as BFT-SMaRt [9] and HotStuff [78], fail to stretch past a few thousand messages per second, three orders of magnitude short of the millions of requests per second collectively handled by the Internet's largest, centralized services (Fig. 1). Pushing Atomic Broadcast into the tens of millions of messages per second seems a necessary stepping stone towards planetary-scale, decentralized computation. Towards line rate. While slow and expensive, ordering messages in Atomic Broadcast is amenable to _batching_ [17]: order once, deliver in bulk. This observation motivated the development of _memory pool_ (mempool) protocols [24, 32, 67], as initiated by Narwhal [24], designed to amortize ordering. This strategy proved effective, e.g., Bullshark [67] delivers in the order of 380,000 messages per second when accelerated by Narwhal. Despite this improvement, however, state-of-the-art batching still falls short of achieving _line rate_, i.e., matching the communication complexity of a protocol that does not ensure any ordering, authentication, or Byzantine resilience. 
In such a simplified setting, a server could simply deliver a sequence of application messages as it receives them from the network: \(b\) bits received, \(b\) bits delivered. Modern connections have enough bandwidth to receive tens of millions of application messages per second:1 2.5 orders of magnitude of gap still exist between Atomic Broadcast and unordered, unauthenticated dissemination. It is natural to ask if such a large gap is inherent to atomicity's unavoidable cost of ordering, authenticating and deduplicating messages. Figure 1: **Throughput of Internet-scale services.** This paper answers in the negative, accelerating Atomic Broadcast by a further two orders of magnitude with a system that performs within 8% of line rate, even when handling 40 million requests per second. Chop Chop. We present Chop Chop, a Byzantine Atomic Broadcast system. As a mempool, Chop Chop uses an underlying instance of Atomic Broadcast to order batches, aggregating messages to amortize costs. Classic methods of batching, however, fail to also amortize authentication and deduplication: each payload in a batch still has to carry an individual public key, signature and sequence number. Chop Chop addresses this shortcoming with a new form of batches: _distilled batches_. Unlike a classic batch, a distilled batch contains condensed information that allows servers to authenticate and deduplicate its messages in bulk, much faster than in existing schemes. Distilled batches leverage the strong ordering of Atomic Broadcast to minimize redundant information. Trustless brokers. Chop Chop produces distilled batches using a novel interactive protocol involving _brokers_, a layer of facilitating processes between clients and servers. Counterintuitively, distilled batches are faster for servers to receive and process, but expensive for brokers to produce: distillation is interactive and relies on expensive cryptographic operations. Importantly, however, incorrectly distilled batches are visibly malformed. As such, brokers are _untrusted_: good brokers take load off the servers; bad ones cannot compromise the system's safety. Servers are exposed to every message in the system, bottleneck easily, and only a threshold of them can be compromised before the system loses safety. Brokers, instead, can be spun up by anyone, outside of Chop Chop's security perimeter, to meet the load produced by clients. Evaluation. We evaluate Chop Chop in a cross-cloud, geo-distributed environment including 320 medium-sized AWS EC2 machines and 64 OVH machines. We simulate up to 257 million clients and consider 12 experimental environments. Setting up each environment requires the installation of 13 TB of synthetic workload. A naive installation using scp from a single machine would take 68 hours. We designed silk, a one-to-many peer-to-peer file transfer tool optimized for high-latency connections, to install the files in 30 minutes instead. We compare Chop Chop's throughput and end-to-end latency against its baselines in multiple real-world scenarios including server failures, adverse network conditions, and applications running. In all scenarios, Chop Chop's throughput outperforms its closest competitor by up to two orders of magnitude, with no penalty in terms of latency. When put under stress, Chop Chop orders, authenticates and deduplicates upwards of 43,600,000 messages per second with a mean latency of 3.6 seconds. 
Except under the most adverse network conditions and proportions of faulty clients, Chop Chop still achieves millions of operations per second. Applications. Unlike most Atomic Broadcast implementations [9, 24, 67, 78], Chop Chop does not offload authentication and deduplication to the application. This allows Chop Chop-based applications to focus entirely on their core logic without ever engaging in expensive, and easy to get wrong, cryptography. To showcase this, we implement three simple applications to evaluate on top of Chop Chop: a Payment system, an Auction house and an instance of the game "Pixel war". These three simple applications (300 lines of logic) work effectively with messages as small as 8 bytes, further underlining the communication overhead represented by public keys, signatures and sequence numbers in non-distilled systems. Both Payments and Pixel war inherit Chop Chop's throughput, respectively processing over 32 and 35 million operations per second. Even the Auction house, which is single-threaded, achieves 2.3 million operations per second. (These applications are meant as examples, and further optimization is beyond the scope of this paper.) Contributions. We identify authentication and deduplication as the main bottlenecks of batched Atomic Broadcast; we introduce distilled batches to extend the amortizing properties of batching to authentication and deduplication; we present distillation, an interactive protocol to produce distilled batches, and identify the opportunity to offload it to an untrusted set of brokers; we implement Chop Chop, an Atomic Broadcast system taking advantage of distillation; we thoroughly evaluate Chop Chop, improving state-of-the-art Atomic Broadcast throughput by two orders of magnitude, maintaining near line-rate performance up to 40 million requests per second; we showcase Chop Chop through a Payment system, an Auction house and an instance of the "Pixel war" game, respectively achieving 32, 2.3 and 35 million operations per second. Roadmap. §2 introduces Atomic Broadcast, discusses classic batching mechanisms and highlights the cost of authenticating and deduplicating messages in the resulting batches. §3 presents distilled batches and, for pedagogical purposes, introduces a simplified failure-free version of Chop Chop's protocol. §4 describes Chop Chop's fault-tolerant protocol in further detail. §5 discusses Chop Chop's implementation. §6 discusses Chop Chop's empirical evaluation, highlighting the challenges of such large-scale experiments. We summarize related work in §7 and future work in §8. ## 2 Atomic Broadcast In an Atomic Broadcast system [18], _clients_ broadcast messages that are delivered by _servers_. Properties [13]. Correct servers deliver the same messages in the same order (_agreement_). Messages from correct clients are eventually delivered (_validity_). Spurious messages cannot be attributed to correct clients (_integrity_). No message is delivered more than once (_no duplication_). ### Cost of Atomic Broadcast Informally, Atomic Broadcast's most distinctive property, agreement, is also the most challenging to satisfy. Correct servers must coordinate to _order_ messages without compromising liveness. A great deal of research effort has been put into developing ordering techniques, optimizing for latency [45, 54] or communication complexity [53, 61]. Integrity and no duplication, instead, allow for simple solutions. 
Clients can ensure integrity by _authenticating_ their messages using digital signatures: servers simply ignore incorrectly authenticated messages. For no duplication, clients can tag each message with a strictly increasing _sequence number_: after ordering, servers discard old messages as replays. Both techniques--we call them _classic authentication_ and _classic sequencing_--are non-interactive, easy to implement, and agnostic of the protocol employed to order messages. Arguably due to the simplicity and effectiveness of classic authentication and sequencing, most Atomic Broadcast implementations overlook integrity and no duplication entirely: they offload authentication and sequencing to the application, focusing on the more challenging task of ordering. Batching for ordering. Lacking an efficient technique to minimize its complexity, ordering could be Atomic Broadcast's main bottleneck.2 The well-known strategy of _batching_, however, is both general and effective at amortizing the agreement cost of an Atomic Broadcast implementation [17, 66]. Footnote 2: Byzantine Atomic Broadcast among \(n\) participants cannot be achieved with a bit complexity smaller than \(\Theta(n^{2})\) [27]. Broadly speaking, batching is orchestrated by a _broker_ as follows [24].3 Over a small window of time, the broker collects multiple client-issued messages in a batch, which it disseminates to the servers; the broker then submits to an underlying instance of Atomic Broadcast a cryptographic hash of the batch it collected; upon delivering the hash of a batch from Atomic Broadcast, a server retrieves the batch, and delivers to the application all the messages it contains. Because the size of a hash is constant, the cost of ordering a batch does not depend on its size: as batches become larger, the cost of ordering each message goes to zero. In practice, batching can effectively eliminate the cost of ordering in any real-world implementation of Atomic Broadcast. Footnote 3: In the literature, servers usually play the role of brokers. As we discuss in §4, however, Chop Chop minimizes its load on the servers by offloading brokerage to a separate, trustless set of processes. Cost of integrity and no duplication. Batching does not efficiently uphold integrity and no duplication. Regardless of how many messages are batched together, the cost of classic authentication and sequencing stays constant: one public key, one signature and one sequence number for each message. In practice, these costs dominate the computation and communication budget of a batched Atomic Broadcast system (see §3.2). On the one hand, signatures are among the most CPU-intensive items in the standard cryptographic toolbox, dwarfing in particular symmetric primitives such as hashes and ciphers. On the other, public keys, signatures and sequence numbers can easily account for the majority of a batch's size. To illustrate these costs, consider the example of a payment system. A payment operation requires three fields: sender, recipient, and amount. Sender and recipient fit in 4 B each if the system serves less than 4 billion users. Amount needs 4 B for payments between 1 cent and 40 million. Hence, a payment can be encoded in just 12 B. Using public keys to identify sender and recipient (2\(\times\)32 B using Ed25519 [8, 38]) and attaching a signature (64 B) and a sequence number (8 B) to each message inflates payloads to 140 B. For payments, _91% of the bandwidth is spent on integrity and no duplication_. 
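These byte counts are easy to sanity-check in a few lines of Python (the field values are made up):

```python
import struct

# A payment needs only a sender id, a recipient id and an amount (in cents).
payment = struct.pack("!III", 17, 42, 199)   # 4 B + 4 B + 4 B
assert len(payment) == 12

# Classic authentication and sequencing: two 32 B Ed25519 public keys
# instead of the 4 B identifiers, plus an 8 B sequence number and a
# 64 B signature on top of the 4 B amount.
classic = 32 + 32 + 4 + 8 + 64
assert classic == 140
assert round((classic - 12) / classic, 2) == 0.91   # 91% overhead
```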
### Existing Mitigations Chop Chop integrates the following two techniques to reduce the bandwidth and CPU cost of authentication. Short identifiers. Repeated public keys consume a significant slice of a server's communication budget. A workaround is to have servers store public keys in an indexed _directory_ [2]. Upon first joining the system, a client announces its public key via Atomic Broadcast to _sign up_. Upon delivering a sign-up message, a server appends the new public key to its directory. The same public key appears at the same position in the directory of all correct servers thanks to Atomic Broadcast's agreement. Having signed up, a client uses its position in the directory as its identifier instead of its public key. In the previous example of a payment system, using such identifiers reduces a payment's size by 40%, from 140 B to 84 B. However, a signature per payment must still be transmitted. Pooled signature verification. Authenticating a batch by verifying its signatures is a computationally intensive task for a server [17, 69]. However, Red Belly [22] and Mir [69] showed that not all servers need to authenticate all batches. Indeed, assuming at most \(f\) faulty servers, a broker optimistically asks only \(f+1\) servers to authenticate a batch to be certain to reach at least one correct server. If \(f+1\) servers do not reply by a timeout, the broker extends its request to \(f\) additional servers, thus reaching at least \(f+1\) correct servers. A correct server that authenticates a batch sends back to the broker a _witness shard_, i.e., a signed statement that the batch is correctly signed. The broker aggregates \(f+1\) identical shards into a _witness_, which it sends to the other \(2f\) servers. Because every witness contains at least one correct shard, the servers can trust the witness instead of verifying the batch. Assuming \(3f+1\) servers, this technique shaves up to two-thirds off the system's authentication complexity.
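Returning to the short-identifier mitigation above, a minimal sketch of such a directory follows; the class and method names are illustrative. Index agreement comes for free because sign-up messages are delivered in the same order at every correct server:

```python
class Directory:
    """Indexed directory of client public keys kept by every server."""

    def __init__(self):
        self._keys = []

    def sign_up(self, public_key):
        """Called upon delivering a sign-up message from Atomic Broadcast."""
        self._keys.append(public_key)
        return len(self._keys) - 1   # the client's short numerical identifier

    def lookup(self, client_id):
        """Resolve a short identifier back to the full public key."""
        return self._keys[client_id]
```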
## 3 Distilled Batches Chop Chop's main contribution is _distillation_, a set of techniques aimed at extending the amortizing properties of batches to authentication and sequencing. Background: multi-signatures. Chop Chop makes use of multi-signature schemes [37] to authenticate batches. Secret keys produce signatures that can be verified against the corresponding public keys. Public keys and signatures, however, can be _aggregated_. Let \((p_{1},r_{1}),\ldots,(p_{n},r_{n})\) be distinct key pairs, and \(s_{1},\ldots,s_{n}\) be signatures produced by \(r_{1},\ldots,r_{n}\) on the _same_ message \(m\): \(p_{1},\ldots,p_{n}\) (resp., \(s_{1},\ldots,s_{n}\)) can be aggregated into a constant-sized aggregate public key \(p\) (resp., aggregate signature \(s\)). Remarkably, \(s\) can be verified in constant time against \(p\) and \(m\) [55, 10]. Chop Chop uses BLS multi-signatures [10] which can be aggregated cheaply and non-interactively: even a non-signing process can compute \(p\) (resp., \(s\)) once provided with \(p_{1},\ldots,p_{n}\) (resp., \(s_{1},\ldots,s_{n}\)) by computing a single multiplication over an elliptic curve. ### Distillation at a Glance In brief, distillation aims to produce _distilled batches_. A distilled batch has some of its signatures (resp., sequence numbers) replaced by an _aggregate signature_ (resp., _aggregate sequence number_). When maximally successful, distillation produces a _fully distilled batch_, where all signatures (resp., sequence numbers) have been replaced by a _single_ aggregate signature (resp., sequence number). As we discuss below, distilled batches are vastly cheaper for servers to receive and process. Fig. 2 depicts the effect of distillation on a batch. Full distillation (failure-free). For pedagogical purposes, we introduce distillation under the assumption that all processes are correct. We detail Chop Chop's fault-tolerant distillation techniques in §4.2, optimized and adapted to the Byzantine setting. As in the classic batching case, a set \(\chi_{1},\ldots,\chi_{b}\) of clients submit their messages \(m_{1},\ldots,m_{b}\) to a broker \(\beta\). Each \(\chi_{i}\) selects for its message \(m_{i}\) a sequence number \(k_{i}\) (greater than any sequence number it previously used), then sends \((k_{i},m_{i})\) to \(\beta\). Upon receiving all \((k_{i},m_{i})\)-s, \(\beta\) computes the aggregate sequence number \[k=\max_{i}k_{i}\] then builds the _batch proposal_ \[B=[(x_{1},k,m_{1})\,,\ldots,(x_{b},k,m_{b})]\] where \(x_{i}\) is \(\chi_{i}\)'s numerical identifier in the system (see §2.2). \(\beta\) then sends \(B\) back to every \(\chi_{i}\). Upon receiving \(B\), \(\chi_{i}\) produces a multi-signature \(s_{i}\) for the hash \(H(B)\) of \(B\), which it sends back to \(\beta\). Having collected all multi-signatures, \(\beta\) computes the aggregate signature \[s=\prod_{i}s_{i}\] In doing so, \(\beta\) obtains the fully distilled batch \[\tilde{B}=[s,k,((x_{1},m_{1})\,,\ldots,(x_{b},m_{b}))]\] Upon receiving \(\tilde{B}\), any server can now: compute \(B\) by inserting \(k\) between each \((x_{i},m_{i})\); compute \(H(B)\); use each \(x_{i}\) to retrieve \(\chi_{i}\)'s public key \(p_{i}\) from its directory; compute the aggregate public key \[p=\prod_{i}p_{i}\] and finally verify \(s\) against \(p\) and \(H(B)\). Distillation outcome. Having engaged with \(\beta\) to distill the batch, every \(\chi_{i}\) multi-signs the _same_ message \(H(B)\) and updates its sequence number to the _same_ \(k\). This allows \(\beta\) to authenticate and sequence all of \(\tilde{B}\) using \(s\) and \(k\) only. Distillation safety. The proposed distillation protocol has no safety drawback. First, because \((x_{i},k,m_{i})\) appears in \(B\), \(\chi_{i}\) still gets to authenticate \(m_{i}\). Intuitively, \(\chi_{i}\)'s multi-signature on \(H(B)\) publicly authenticates _whatever message in \(B\) is attributed to \(\chi_{i}\)_, \(m_{i}\) in this case. Second, because \(k\geq k_{i}\), \(k\) is still a valid sequence number for \(m_{i}\). Sequence number distillation might cause \(\chi_{i}\) to skip some sequence numbers whenever any \(\chi_{j}\) issues some \(k_{j}>k_{i}\). Contiguity of sequence numbers, however, is not a requirement for deduplication. As with classic sequencing, \(\chi_{i}\) produces--and servers deliver--messages with strictly increasing sequence numbers; servers disregard all other messages as replays.
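The failure-free flow just described can be condensed into a short structural sketch. Hashing via `repr` stands in for a canonical encoding, and the BLS multi-signing and aggregation steps are deliberately elided, as they depend on the curve library used:

```python
import hashlib

def distill(submissions):
    """Failure-free distillation of {client_id: (k_i, m_i)} into (B~, H(B)).

    Each client would multi-sign H(B), and the broker would aggregate the
    shares into the single signature s carried by the distilled batch.
    """
    # Aggregate sequence number: k = max_i k_i.
    k = max(k_i for k_i, _ in submissions.values())
    # Batch proposal B = [(x_1, k, m_1), ..., (x_b, k, m_b)].
    proposal = sorted((x, k, m) for x, (_, m) in submissions.items())
    root = hashlib.sha256(repr(proposal).encode()).digest()   # H(B)
    # Fully distilled batch: one k for everyone, payloads shrink to (x_i, m_i).
    batch = (k, [(x, m) for x, _, m in proposal])
    return batch, root

batch, root = distill({7: (3, b"pay"), 12: (9, b"vote")})   # k becomes 9
```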
### Distillation Microbenchmark Having discussed how distilled batches are produced, we now estimate the significance of their effect by means of a back-of-the-envelope calculation and a simple microbenchmark on AWS. Consider a setting where 100 million clients broadcast 8-byte messages, e.g., to issue payments (see §2.1). We compare _classic authentication and sequencing_, where clients are identified by their public keys and messages are individually signed and sequenced, against _fully distilled batches_, where clients are identified by a numerical identifier and each batch contains only one aggregated signature and sequence number. Figure 2: **Full distillation in action. With classic authentication and sequencing, each payload \(q_{i}\) contains a public key \(pk_{i}\), a sequence number \(sn_{i}\), a message \(msg_{i}\) and a signature \(sig_{i}\). In the fully distilled case, each \(q_{i}\) reduces to just \(id_{i}\) and \(msg_{i}\): one header \(H\), composed of one aggregate sequence number \(SN\) and one aggregate signature \(SIG\), is sufficient for the entire batch. Bars are to scale if small messages are broadcast using Ed25519 for signatures and BLS12-381 for uncompressed multi-signatures: \(sn_{i}\) and \(SN\) are 8 B, \(msg_{i}\) is 8 B, \(pk_{i}\) is 32 B, \(sig_{i}\) is 64 B, \(SIG\) is 192 B.** We use Ed25519 [38] for signatures (32 B public keys, 64 B signatures) and BLS12-381 [12] for multi-signatures (192 B uncompressed signatures). We use uncompressed BLS multi-signatures to save computation time at the cost of storage space (96 B compressed vs. 192 B uncompressed). Communication complexity. Payloads are 112 B per message in the classic case (32 B of public key, 8 B of sequence number, 8 B of message, 64 B of signature) vs. 11.5 B in the fully distilled case (28 bits = 3.5 B of identifier to represent 257M clients, 8 B of message). Assuming batches of 65,536 messages (Fig. 3), classic batches are exactly 7 MB long, while fully distilled batches are 736 KB long including aggregate signature and sequence number. Computation complexity. Running at maximum load, an Amazon EC2 c6i.8xlarge instance authenticates \(16.2\pm 0.4\) classic batches per second using Ed25519's batch verification for 65,536 signatures. The same machine authenticates \(457.1\pm 0.3\) fully distilled batches per second: each authentication requires the aggregation of 65,536 BLS12-381 public keys and the verification of one BLS12-381 multi-signature. Summary. By the order-of-magnitude calculations above, fully distilled batches hold the promise to reduce the costs of authentication and sequencing by a factor of 9.7 for network bandwidth, and 28.2 for CPU. Chop Chop aims to deliver on that promise for a real-world fault-tolerant system.
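The arithmetic behind these figures is straightforward to reproduce; the 200 B added below approximates the 192 B aggregate signature plus the 8 B aggregate sequence number:

```python
classic_payload = 32 + 8 + 8 + 64   # public key, seq. number, message, signature
distilled_payload = 3.5 + 8         # 28-bit identifier, message
batch = 65_536

print(batch * classic_payload / 2**20)             # 7.0    -> 7 MB classic batch
print((batch * distilled_payload + 200) / 2**10)   # ~736.2 -> 736 KB distilled
print(classic_payload / distilled_payload)         # ~9.74  -> bandwidth factor
print(457.1 / 16.2)                                # ~28.2  -> CPU factor
```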
## 4 Chop Chop This section overviews Chop Chop's architecture and protocol, and provides arguments for its correctness. Overview. Chop Chop involves three types of processes (Fig. 4): broadcasting clients, delivering servers and a layer of broadcast-facilitating brokers between them. Servers run an Atomic Broadcast instance among themselves, to which brokers submit messages. Chop Chop is agnostic to the implementation of Atomic Broadcast used by the servers. On top of the provided broker-to-server Atomic Broadcast, Chop Chop implements a much faster client-to-server Atomic Broadcast: clients submit messages to the servers, aided by brokers. Chop Chop's protocol unfolds in two phases: _distillation_ (§4.2) and _submission_ (§4.3). In the distillation phase, clients interact with a broker to gather their messages in a distilled batch (see §3). In the submission phase, the broker disseminates the distilled batch to the servers and submits the batch's hash to the server-run instance of Atomic Broadcast. Upon delivering its hash from Atomic Broadcast, servers retrieve the batch and deliver its messages. Chop Chop's contributions mainly focus on the distillation phase. Chop Chop's submission strategy closely resembles prior batch-based Atomic Broadcast implementations [24, 67, 32]. ### Architecture and Model Chop Chop augments the architecture of a classic Atomic Broadcast, as described in §2, with novel brokers. Clients and servers. _Clients_ broadcast messages to a (distinct) set of _servers_. We assume that less than one third of servers can be faulty and behave in an arbitrary manner, i.e., be Byzantine [48], while all clients can be faulty. For simplicity, servers form a fixed set that is known by all correct processes at system startup. Chop Chop can be extended for reconfiguration thanks to its modular use of Atomic Broadcast [47, 9] (Fig. 4). Clients issue messages after broadcasting their public keys to the system (see §2.2). Brokers. We discussed in §3 how both classic and distilled batches are assembled by a broker. The role of brokers is traditionally taken by servers. Given the additional strain put on brokers by Chop Chop's interactive distillation protocol, however, having servers be brokers would result in a waste of scarce, trusted resources. Importantly, however, distillation is _trustless_. On the one hand, agreement rests entirely on Chop Chop's underlying Atomic Broadcast instance, for which brokers are only clients. On the other hand, as we argue in §§4.2 and 4.4.1, a faulty broker cannot compromise integrity or no duplication: distilled batches are publicly authenticated, and correct clients cannot be tricked into using stale sequence numbers. Hence, _brokers need no trust_: a broker either does its job correctly or produces distilled batches that are visibly malformed, and easily discarded by all correct servers. This observation is of paramount importance to the performance of Chop Chop: _because distillation is heavy but trustless, brokers should be distinct from servers_. Figure 4: **Chop Chop architecture.** Figure 3: **Full distillation of a batch of 65,536 payloads (sizes to scale).** The aggregate signature and aggregate sequence number do not appear as a result of their small size. Along with clients and servers, we thus assume a third, _independent set of brokers_, sitting between clients and servers, to accelerate Atomic Broadcast by assembling client messages in distilled batches. We assume that at least one broker is correct; the system loses liveness but not safety if all brokers are faulty. Network. Chop Chop guarantees that the batches collected and submitted to servers by correct brokers are well-formed even in asynchrony, but achieves full distillation when the network is synchronous (see §4.2). Chop Chop inherits the network requirements of its underlying Atomic Broadcast. ### Distillation Phase We introduced in §3 a simplified, failure-free distillation protocol. This section describes how Chop Chop renders distillation tolerant to arbitrary failures and improves its performance via a sequence of improvements, each addressing a shortcoming of the simplified protocol. The complete fault-tolerant protocol of Chop Chop is depicted in Fig. 5. In the failure-free distillation protocol: clients \(\chi_{1},\ldots,\chi_{b}\) send their messages \(m_{1},\ldots,m_{b}\), with sequence numbers \(k_{1},\ldots,k_{b}\) (#2), to a broker \(\beta\) (Fig. 5, #1);
\(\beta\) identifies the maximum submitted sequence number \(k\) and builds a batch proposal \(B=[(x_{1},k,m_{1}),\ldots,(x_{b},k,m_{b})]\) (#3); \(\beta\) disseminates \(B\) to \(\chi_{1},\ldots,\chi_{b}\) (#4); each \(\chi_{i}\) produces a multi-signature \(s_{i}\) on \(H(B)\) (#5), which it sends back to \(\beta\) (#6); \(\beta\) aggregates \(s_{1},\ldots,s_{b}\) into an aggregate \(s\), thus producing a fully distilled batch \(\tilde{B}=[s,k,((x_{1},m_{1}),\ldots,(x_{b},m_{b}))]\) (#7). Background: Merkle trees. Chop Chop uses Merkle trees [57] to hash batches. An \(l\)-element vector \(z_{1},\ldots,z_{l}\) is hashed into a _root_ \(r\), used as a commitment. For each \(i\), \(z_{i}\)'s value can be proved by means of a proof of inclusion \(p_{i}\), verifiable against \(r\) and \(z_{i}\). Proofs of inclusion are \(O(\log l)\) in size and are verified in \(O(\log l)\) time. What if a broker forges messages? A faulty \(\beta\) could try to falsely attribute to some \(\chi_{i}\) a message \(m^{\prime}_{i}\neq m_{i}\). \(\beta\) could do so by replacing \(m_{i}\) with \(m^{\prime}_{i}\) in \(B\), then having \(\chi_{i}\) sign \(H(B)\), thus implicitly authenticating \(m^{\prime}_{i}\). This is easily fixed by having \(\chi_{i}\) check that \(m_{i}\) correctly appears in \(B\) before signing \(H(B)\). Can a broker avoid sending the entire batch? A clear inefficiency of the simplified protocol is that \(\beta\) has to convey all of \(B\) back to each \(\chi_{i}\). This is fixed using Merkle trees. Upon assembling \(B\), \(\beta\) computes the Merkle root \(r\) of \(B\), along with the Merkle proof \(p_{i}\) for each \((x_{i},k,m_{i})\) in \(B\). Instead of sending \(B\) to all clients, \(\beta\) just sends \(r\), \(k\) and \(p_{i}\) to each \(\chi_{i}\). Upon receiving \(r\), \(k\) and \(p_{i}\), \(\chi_{i}\) checks \(p_{i}\) against \(r\) and \((x_{i},k,m_{i})\), producing \(s_{i}\) on \(r\) only if the check succeeds. If \(\chi_{i}\) signs \(r\), then \((x_{i},k,m_{i})\) is necessarily an element of \(B\). Importantly, however, \(\beta\) could inject \((x_{i},k,m^{\prime}_{i}\neq m_{i})\) somewhere else in \(B\), while still providing \(\chi_{i}\) only with the proof for \((x_{i},k,m_{i})\). This is solved by having servers ignore every distilled batch where two or more messages are attributed to the same client. This way, if \(\chi_{i}\) signs \(r\), then either \(m_{i}\) is the only message in \(B\) attributed to \(\chi_{i}\), or \(\tilde{B}\) is rejected by all servers as malformed: either way, integrity is upheld.
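For reference, here is a self-contained sketch of the Merkle machinery used above, assuming a power-of-two number of already-hashed leaves; Chop Chop's actual Rust implementation differs in encoding details:

```python
import hashlib

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle(leaves):
    """Return (root, proofs); proofs[i] is the sibling path for leaves[i]."""
    proofs = [[] for _ in leaves]
    level, owners = list(leaves), [[i] for i in range(len(leaves))]
    while len(level) > 1:
        next_level, next_owners = [], []
        for j in range(0, len(level), 2):
            left, right = level[j], level[j + 1]
            for i in owners[j]:          # leaves under the left child
                proofs[i].append(right)  # need the right sibling to climb
            for i in owners[j + 1]:      # leaves under the right child
                proofs[i].append(left)
            next_level.append(_h(left + right))
            next_owners.append(owners[j] + owners[j + 1])
        level, owners = next_level, next_owners
    return level[0], proofs

def verify(root, leaf, index, proof):
    """Check a proof of inclusion in O(log l) hashes."""
    node = leaf
    for sibling in proof:
        node = _h(node + sibling) if index % 2 == 0 else _h(sibling + node)
        index //= 2
    return node == root

leaves = [_h(bytes([i])) for i in range(8)]
root, proofs = merkle(leaves)
assert verify(root, leaves[5], 5, proofs[5])
```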
What if a client does not multi-sign? Under the assumption that \(\chi_{1},\ldots,\chi_{b}\) are correct, \(\beta\) can safely wait until it collects all \(s_{1},\ldots,s_{b}\). This policy is clearly flawed in the Byzantine setting: a single crashed client can prevent \(\beta\) from ever aggregating \(s\). Furthermore, lacking an assumption of synchrony, \(\beta\) cannot exclude from \(\tilde{B}\) those clients that do not sign \(r\) by some timeout: consistently slow clients would always be excluded, and validity would be lost. This issue is fixed by the fallback mechanism introduced in the following. Fault-tolerant distillation. Upon first sending \((k_{i},m_{i})\) to \(\beta\) (#2), \(\chi_{i}\) also sends an individual, non-aggregable signature \(t_{i}\) for \((x_{i},k_{i},m_{i})\), which \(\beta\) stores. \(\beta\) then waits for \(s_{i}\)-s on \(r\) until either all \(s_{i}\)-s are collected, or a timeout expires. For every \(s_{i}\) that ends up missing, due to \(\chi_{i}\) being crashed or delayed, \(\beta\) attaches \((k_{i},t_{i})\) to \(\tilde{B}\). Upon receiving \(\tilde{B}\), a server first checks each individual signature \(t_{i}\) against the corresponding \((x_{i},k_{i},m_{i})\). The server then checks \(s\) against the public keys of the clients for which an individual signature \(t_{i}\) was not given, i.e., the public keys of all clients that signed \(r\) in time. In summary: fast, correct clients who successfully produce their \(s_{i}\)-s in time authenticate their message by multi-signing \(r\); slow or crashed clients still get their messages through, individually authenticated by the \(t_{i}\)-s that they originally produced. Full distillation is achieved whenever the network is synchronous and all clients are correct, which we argue is the case in practice for the majority of a system's lifetime. When the network is asynchronous, however, a fraction of clients might fail to produce their \(s_{i}\) in time, resulting in a _partially distilled batch_. At the limit where all clients fail to sign \(r\) in time, \(\tilde{B}\) reduces to a classic batch, degrading server-side performance to pre-distillation levels. We underline that safety and liveness are preserved regardless of synchrony. What if a broker replays messages? A problem introduced by the last fix is that \(\chi_{i}\) authenticates both \(k_{i}\) and \(k\) as sequence numbers for \(m_{i}\), allowing a faulty \(\beta\) to play \(m_{i}\) twice, hence breaking Atomic Broadcast's no duplication. This is fixed by having each client engage in the broadcast of only one message at a time. This way, while \(\beta\) can replay \(m_{i}\), it can only do so consecutively: all sequence numbers \(\chi_{i}\) authenticates for \(m_{i}\) belong to a range that does not contain sequence numbers for any other message \(m_{i^{\prime}\neq i}\) issued by \(\chi_{i}\). This observation is key to the following fix: along with the last sequence number \(\tilde{k}_{\chi}\) each client \(\chi\) used, a correct server \(\sigma\) stores the last message \(\bar{m}_{\chi}\) that \(\chi\) broadcast; upon ordering a message \(m\) with sequence number \(k\) from \(\chi\), \(\sigma\) delivers \(m\) if and only if \(k>\tilde{k}_{\chi}\) and \(m\neq\bar{m}_{\chi}\). In doing so, \(\sigma\) discards all consecutive replays of \(\bar{m}_{\chi}\), thus preventing replays in general.
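The resulting server-side filter is tiny. A sketch with illustrative names follows; real servers apply it after ordering, per client:

```python
class ClientState:
    """Last delivered (sequence number, message) pair for one client."""
    def __init__(self):
        self.last_k, self.last_m = -1, None

def should_deliver(states, x, k, m):
    """Deliver m from client x iff k > last_k and m differs from last_m.

    Consecutive replays of the same message are discarded even when a
    faulty broker re-plays it under a larger aggregate sequence number.
    """
    state = states.setdefault(x, ClientState())
    if k > state.last_k and m != state.last_m:
        state.last_k, state.last_m = k, m
        return True
    return False

states = {}
assert should_deliver(states, 7, 3, b"pay")        # first delivery
assert not should_deliver(states, 7, 9, b"pay")    # replay, higher k: dropped
assert should_deliver(states, 7, 10, b"vote")      # next message goes through
```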
What if a client broadcasts too frequently? The last fix relies on clients broadcasting one message at a time. Depending on latency, a client broadcasting too frequently might accrue an ever-growing queue of pending messages. This issue is fixed by flushing application messages in bursts, akin to Nagle's buffering algorithm for TCP. What if a client submits the largest possible sequence number? Assuming that a finite number of bits (e.g., 64) are allocated to representing sequence numbers, a faulty client \(\chi_{m}\) could set its \(k_{m}\) to the largest possible sequence number \(k_{max}\) (e.g., \(2^{64}-1\)). In doing so, \(\chi_{m}\) would force all other \(\chi_{i}\)-s to update their sequence number to \(k_{max}\). Since correct clients only use strictly increasing sequence numbers, no \(\chi_{i}\) could ever broadcast again: sequence numbers would run out. Proving the _legitimacy_ of sequence numbers fixes this issue. Legitimate sequence numbers. By the rule that we established, no more than one message from the same client can appear in the same batch. Moreover, correct clients always tag their messages with the smallest sequence number they have not yet used, i.e., the largest they have used plus one. By induction, we then have that unless some client misbehaves, no client ever needs to use a sequence number larger than the number of batches ever delivered by the servers: the largest sequence number any client submits to the very first batch is 0, therefore no client submits a sequence number larger than 1 to the second batch, and so on. This observation allows us to define as _legitimate_ any sequence number smaller than the number of batches servers have delivered at any given time. Legitimacy proofs. This definition of legitimacy allows for the generation of _legitimacy proofs_: upon delivering the \(n\)-th batch, a server publicly states so with a signature. By collecting \(f+1\) server signatures stating that the \(n\)-th batch was delivered into a certificate \(l_{n}\), any process can publicly prove that any sequence number smaller than \(n\) is legitimate. Upon initially submitting \(k_{i}\) (#2), \(\chi_{i}\) also sends to \(\beta\) a certificate \(l_{n}\), for any \(n>k_{i}\); \(\beta\) ignores client submissions that lack such a certificate, except when \(k_{i}=0\), since no certificate is needed. Upon sending \(k\) back to all \(\chi_{i}\)-s (#4), \(\beta\) attaches the highest \(l_{\hat{n}}\) it collected: \(l_{\hat{n}}\) proves that \(k\) is legitimate since \(\hat{n}>k\). \(\chi_{i}\) signs \(r\) (#5) only if \(k\) is proved legitimate by \(l_{\hat{n}}\). This technique ensures correct clients always use legitimate sequence numbers. Because legitimate sequence numbers grow only with the number of batches delivered by the servers, no correct client is forced to skip too far ahead, compromising its own liveness. What if a broker crashes? If \(\beta\) fails to engage in the protocol, each \(\chi_{i}\) can submit its message to any other broker. ### Submission Phase The submission phase ensures that all servers efficiently deliver a distilled batch, and that all broadcasting clients receive a proof that their messages were delivered. Witness. Having gathered a distilled batch \(\tilde{B}\) (#7), \(\beta\) moves on to have \(f+1\) servers sign a _witness shard_ for \(\tilde{B}\). In signing a witness shard for \(\tilde{B}\), a server \(\sigma\) simultaneously makes two statements. First, \(\tilde{B}\) is _well-formed_: \(\sigma\) successfully verified \(\tilde{B}\)'s signatures and found all messages in \(\tilde{B}\) to have a different sender. Second, \(\tilde{B}\) is _retrievable_: \(\sigma\) stores \(\tilde{B}\) and makes it available for retrieval, should any other server need it. We call a _witness_ for \(\tilde{B}\) the aggregation of \(f+1\) witness shards for \(\tilde{B}\). Because any set of \(f+1\) processes includes a correct process, when presented with a witness for \(\tilde{B}\) any server can trust \(\tilde{B}\) to be well-formed and retrievable. As discussed in §2.2, witnesses optimize server-side computation. Only \(f+1\) servers need to engage in the expensive checks required to safely witness \(\tilde{B}\). All other servers can trust \(\tilde{B}\)'s witness, saving trusted CPU resources. In order to collect a witness for \(\tilde{B}\), \(\beta\) sends \(\tilde{B}\) to all servers (#8). Optimistically, \(\beta\) asks only \(f+1\) servers to sign a witness shard for \(\tilde{B}\), progressively extending its request to \(2f+1\) servers upon expiration of suitable timeouts.
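A sketch of the broker-side shard collection just described follows; shard verification, aggregation details, and the timeout-driven extension to \(f\) additional servers are elided:

```python
def collect_witness(shard_stream, f):
    """Aggregate f + 1 witness shards into a witness.

    `shard_stream` yields (server_id, shard) pairs as responses arrive.
    Any f + 1 shards include one from a correct server, so the witness
    publicly proves the batch is well-formed and retrievable.
    """
    shards = {}
    for server_id, shard in shard_stream:
        shards[server_id] = shard
        if len(shards) == f + 1:
            return shards   # the witness, submitted alongside the batch hash
    raise RuntimeError("not enough shards: extend the request to f more servers")
```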
Upon receiving \(\tilde{B}\) (#9), a correct server \(\sigma\) stores \(\tilde{B}\). If asked to witness \(\tilde{B}\), \(\sigma\) checks that \(\tilde{B}\) is well-formed and sends back to \(\beta\) its witness shard for \(\tilde{B}\) (#10). \(\beta\) collects and aggregates \(f+1\) shards into a witness for \(\tilde{B}\) (#11), then submits \(\tilde{B}\)'s hash and witness to the server-run Atomic Broadcast (#12).

Figure 5: **Overview of the Chop Chop protocol between two clients (\(\chi_{1},\chi_{2}\)), a broker (\(\beta\)) and four servers (\(\sigma_{1}\)–\(\sigma_{4}\)).** The protocol is comprised of 19 steps (#1–#19) and of an underlying instance of Atomic Broadcast such as BFT-SMaRt or HotStuff.

**Delivery.** Upon delivering \(\tilde{B}\)'s hash and witness from Atomic Broadcast (#13), a correct server \(\sigma\) retrieves \(\tilde{B}\), either from its local storage (if it directly received \(\tilde{B}\) from \(\beta\) at #8) or from another server (#14). Because \(\tilde{B}\) is retrievable, \(\sigma\) is guaranteed to eventually find a server to pull \(\tilde{B}\) from. Having retrieved \(\tilde{B}\) (#15), \(\sigma\) delivers all non-duplicate messages in \(\tilde{B}\) (see §4.2 for how \(\sigma\) detects duplicates).

**Response.** Finally, \(\sigma\) signs a _delivery certificate_, listing the messages in \(\tilde{B}\) that \(\sigma\) delivered. \(\sigma\) sends its signature back to \(\beta\) (#16). By agreement of Atomic Broadcast, all correct servers deliver the same subset of messages in \(\tilde{B}\). As such, \(\beta\) is guaranteed to eventually collect \(f+1\) signatures on the same delivery certificate (#17). Upon doing so, \(\beta\) distributes a copy of \(\tilde{B}\)'s delivery certificate to \(\chi_{1},\dots,\chi_{b}\) (#18). Armed with \(\tilde{B}\)'s delivery certificate, a correct \(\chi_{i}\) can publicly prove the delivery of \(m_{i}\) (#19) and safely broadcast its next message.

### Correctness

This section summarizes Chop Chop's correctness analysis.

#### 4.4.1 Safety

The safety of Chop Chop is given by its agreement, integrity and no duplication properties (see §2).

**Agreement.** Chop Chop inherits agreement from its underlying, server-run instance of Atomic Broadcast. A correct server delivers messages only upon delivering the hash of a batch from the server-run Atomic Broadcast. Upon doing so, a correct server retrieves the full batch, checks its hash, and delivers all its messages in order of appearance. All correct servers deliver the same messages in the same order, assuming cryptographic hashes are collision-resistant.

**Integrity.** A correct server only delivers messages included in a batch witnessed by \(f+1\) servers, i.e., by at least one correct server. A correct server witnesses a batch only if: no more than one message in the batch is attributed to the same client; every client in the batch authenticates its message with a signature or the root of the batch's Merkle tree with a multi-signature. A correct client multi-signs the root of a batch's Merkle tree only upon receiving a proof of the inclusion of its message in the batch. As such, if a correct client multi-signs the root of a batch's Merkle tree, either the batch contains only the client's intended message or it is not witnessed. In summary, a correct server delivers a message \(m\) from a correct client \(\chi\) only if \(\chi\) broadcast \(m\).
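The witnessing checks underpinning this integrity argument can be sketched as follows; `verify_sig` and `verify_multisig` stand in for the Ed25519 and BLS verification Chop Chop actually performs, and the strictly-increasing-identifier check anticipates the implementation detail discussed in §5.2:

```python
def batch_well_formed(entries, root, multisig, verify_sig, verify_multisig):
    """Check a distilled batch before witnessing it: identifiers strictly
    increase (so every sender is distinct), and each message is authenticated
    either by its individual signature or by the root's multi-signature."""
    multi_signers = []
    last_id = -1
    for entry in entries:  # each entry: (sender, seqno, message, signature|None)
        sender, seqno, message, signature = entry
        if sender <= last_id:
            return False              # duplicate or unsorted sender
        last_id = sender
        if signature is not None:     # slow client: individual signature
            if not verify_sig(sender, (seqno, message), signature):
                return False
        else:                         # fast client: multi-signed the root
            multi_signers.append(sender)
    return verify_multisig(multi_signers, root, multisig)
```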
**No duplication.** A correct client only broadcasts one message at a time. As such, while the client might attach multiple sequence numbers to the same message (different brokers may propose different aggregate sequence numbers for the client to authenticate), the sequence numbers the client attaches to each message belong to distinct ranges. A correct server delivers client messages only in increasing order of sequence number, and ignores repeated messages. This means that a correct server delivers at most one message from each sequence number range. In summary, no server delivers a correct client's message more than once.

#### 4.4.2 Liveness

The liveness of Chop Chop is given by its validity property.

**Validity.** If a correct client submits its message to a correct broker, the message is guaranteed to eventually be delivered by all correct servers: even if the client fails to engage in distillation in a timely manner, its message is still included in a batch which gets disseminated, witnessed and delivered by all correct servers. Faulty brokers can clearly refuse to service (specific) clients. Upon expiration of a suitable timeout, however, a correct client submits its message to a different broker. As we assume that at least one broker is correct, all correct clients are eventually guaranteed to find a correct broker and get their messages delivered by all correct servers.

#### 4.4.3 Other Attacks

As we outlined in §§4.4.1 and 4.4.2, Chop Chop satisfies all properties of Atomic Broadcast. In this section, we consider other attacks an adversary might mount to impair Atomic Broadcast's performance and fairness [41] in Chop Chop.

**Denial of service.** A faulty broker may refuse to service clients, thus forcing them to fall back on other brokers, increasing latency. A faulty broker may also submit deliberately non-distilled batches to servers to force them to waste trusted resources to receive and verify individual signatures. While handling DoS is beyond the scope of this paper, Chop Chop is amenable to accountability mechanisms [34]. Brokers could be asked to stake resources to join the system. Correct, high-performance brokers could be rewarded, akin to gas fees in Ethereum [76]. Brokers that accrue a reputation of misbehavior or slowness could be banned and lose their initial stake.

**Front-running.** A faulty broker might impact fairness by front-running messages of interest [23, 81]. While front-running resistance is beyond the scope of this paper, Chop Chop is compatible as-is with existing mechanisms to mitigate or prevent front-running, most notably schemes that have clients submit encrypted messages whose content is revealed only after delivery [58, 79]. Importantly, these encrypt-order-reveal schemes could be selectively employed only for those messages that are vulnerable to front-runs, e.g., messages used for stock trading [63]. Maintaining Chop Chop's throughput while providing quorum-enforced fairness for every message [80] opens a valuable future avenue of research.

## 5 Implementation Details

A straightforward implementation of the protocol we presented in §4 would not achieve the throughput and latency we observe in §6. In this section, we discuss some of the techniques and optimizations required on the way to practically achieving Chop Chop's full potential. (Many optimizations are however left out due to space constraints.)

**Code.** Chop Chop is implemented in Rust, totaling 8,900 lines of code.
The main libraries Chop Chop depends on are: tokio 1.12 for an asynchronous, event-based runtime; rayon 1.5 for worker-based parallel computation; serde 1.0 for serialization and deserialization; blake3 1.0 for cryptographic hashes; ed25519-dalek 1.2 for EdDSA signatures on Curve25519 [38]; blst 0.3.5 for multi-signatures on the BLS12-381 curve [12]. Chop Chop also depends on in-house libraries: talk (9,800 lines of code) for basic distributed computing and high-level networking and cryptography; zebra (7,100 lines of code) for Merkle-tree based data structures.

### Broker

The goal of a Chop Chop broker is to produce batches as distilled as possible (to minimize server load), as large as possible (to amortize ordering), and as quickly as possible (to minimize latency). Our target is for a broker to assemble one fully distilled batch of 65,536 messages (736 KB, see Fig. 3) per second, with a 1 second distillation timeout.

**Reliable UDP.** Short-lived TCP connections between broker and clients are easier to work with, but unfeasible for the broker to handle. Assuming an end-to-end broadcast time of up to 10 seconds, the broker would need to maintain upwards of 600,000 simultaneous TCP connections, which preliminary tests immediately proved unfeasible on the hardware we have access to. This makes UDP the only option for client-broker communication. However, UDP lacks the reliability properties of TCP, and tests showed non-negligible packet loss even within the same AWS EC2 availability zone. As we discussed in §4.2, message loss immediately translates to partial distillation. We address this issue by means of an in-house, ACK-based, message retransmission protocol based on UDP that also smoothens the rate of outgoing packets.

**EdDSA batch verification.** To avoid spoofing, all client messages are authenticated with signatures. At the target rate, however, individually verifying each signature is unfeasible for a broker. Luckily, ed25519-dalek allows for more efficient batched verification. A broker buffers the client messages it receives and authenticates them in batches.

**Tree-search invalid multi-signatures.** Clients contributing to the same batch produce matching multi-signatures for the batch's root. At the target rate, the broker cannot independently verify each multi-signature. We tackle this problem by gathering multiple matching multi-signatures on the leaves of a binary tree: internal nodes aggregate their children. For each tree, the broker verifies the root multi-signature, recurring only on the children of an invalid parent. This allows the broker to identify invalid multi-signatures in logarithmic time while enabling batched verification in the good case.

**Caching legitimacy proofs.** Clients justify their sequence numbers with legitimacy proofs. Again, the broker cannot verify each proof in time. We address this problem by having the broker verify a legitimacy proof only if it is higher than the highest it previously observed. As a result, a faulty client might get away with submitting an invalid legitimacy proof but, importantly, not an illegitimate sequence number.
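As an illustration of the tree-search above, a minimal sketch (with `verify` and `aggregate` standing in for blst's BLS operations):

```python
def find_invalid(sigs, verify, aggregate):
    """Return indices of invalid multi-signatures among `sigs`, all allegedly
    on the same batch root. A whole range is verified with one aggregated
    check; we recurse into halves only when an aggregate fails, so each
    invalid signature is located in a logarithmic number of verifications
    while the good case costs a single verification per tree."""
    if not sigs:
        return []

    def check(lo, hi):
        agg = sigs[lo]
        for s in sigs[lo + 1: hi]:   # internal node: aggregate the children
            agg = aggregate(agg, s)
        if verify(agg):
            return []                # whole subtree valid
        if hi - lo == 1:
            return [lo]              # a single invalid signature, found
        mid = (lo + hi) // 2
        return check(lo, mid) + check(mid, hi)

    return check(0, len(sigs))
```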
### Server

The goal of a Chop Chop server is to process distilled batches as quickly as possible without overflowing its memory.

**Batch garbage collection.** Servers update each other on which batches they delivered. A server garbage-collects a batch, both messages and metadata, as soon as it is delivered by all other servers. We underline that, even if a single server fails to deliver a batch, the others cannot garbage-collect it, as the slow server might be correct. This is an inherent limitation of Atomic Broadcast: agreement without synchrony can be ensured only in the infinite-memory model.

**Identifier-sorted batching.** No two messages from the same client may appear in the same batch. To simplify processing, brokers sort the messages in a batch by client identifier. Servers reject batches whose identifiers are not strictly increasing, thus verifying that all identifiers are distinct in constant space and linear time. Sorting messages by identifier also enables parallel deduplication: messages are split by identifier range, and chunks are deduplicated independently.

## 6 Evaluation

We evaluate Chop Chop focusing on the following research questions (RQs): What workload can Chop Chop sustain (§6.3)? What are the benefits of Chop Chop's distillation (§6.4)? How does Chop Chop scale to different numbers of servers (§6.5)? How efficiently does Chop Chop use resources overall (§6.6)? How does Chop Chop perform under adverse conditions, such as server failures (§6.7)? What performance can applications achieve using Chop Chop (§6.8)?

### Baselines

We compare Chop Chop against four baselines:

* HotStuff [78]: an Atomic Broadcast protocol designed for high throughput (written in C++);
* BFT-SMaRt [9]: an Atomic Broadcast protocol, similar to PBFT [17], designed for low latency (written in Java);
* Narwhal-Bullshark: the DAG-based Atomic Broadcast protocol Bullshark [67] with the state-of-the-art high-throughput mempool Narwhal [24] (written in Rust);
* Narwhal-Bullshark-sig: akin to Narwhal-Bullshark but with Narwhal modified to authenticate messages, thus matching Chop Chop's guarantees.

We deploy Chop Chop with two distinct underlying Atomic Broadcast protocols (Fig. 5): HotStuff and BFT-SMaRt.

**HotStuff and BFT-SMaRt.** Evaluating HotStuff and BFT-SMaRt allows us to assess the base performance of an Atomic Broadcast protocol and determine how much acceleration Chop Chop provides. We evaluate Chop Chop on top of the same implementations of HotStuff [21] and BFT-SMaRt [20] we benchmark against. These implementations are production-ready and do not use state-of-the-art mempool protocols, only some basic form of batching. When evaluated stand-alone, each message in these systems includes 80 B of header composed of a client identifier (8 B), a sequence number (8 B), and a signature (64 B) verified by the servers. Both systems use batches of 400 messages, i.e., of 34.4 KB.

**Narwhal-Bullshark.** As a state-of-the-art mempool, Narwhal is a close point of comparison for Chop Chop. Servers in Narwhal scale out following a primary-workers model: each server is paired with one or several workers into a server group. Similarly to Chop Chop, Narwhal greatly accelerates its underlying Atomic Broadcast (here, Bullshark). Unlike Chop Chop, however, Narwhal leaves the responsibility of authenticating and deduplicating messages to the application.

**Narwhal-Bullshark-sig.** For a better comparison, we also benchmark Narwhal-Bullshark-sig: Narwhal-Bullshark where messages are authenticated by Narwhal in a state-of-the-art way, i.e., using batched, multi-core Ed25519 signature verification. Each message includes an 80 B header as for HotStuff and BFT-SMaRt. As for Narwhal-Bullshark, the remaining parameters are the default ones, e.g., 500 KB batches.
### Setup

Unless otherwise specified--in §§6.5 and 6.6--the Chop Chop benchmarks involve 64 c6i.8xlarge AWS servers, of 32 Intel vCPUs each, geo-distributed across 14 regions. Brokers assemble, and servers process, batches of 65,536 messages. Each message is 8 B in length, resulting in 736 KB batches (Fig. 3). Baselines always use the same set of server machines as their Chop Chop counterpart. All experiments run with maximum resilience, e.g., the system survives 21 faulty servers out of 64. Fig. 6 overviews the deployment used.

**Matching trusted vs. total resources.** Unlike its baselines, Chop Chop leverages _untrusted_ resources, brokers, to boost its performance. Lacking a well-defined conversion between trusted and untrusted resources, two extremes can be taken to compare Chop Chop with its baselines: we can either match trusted resources, e.g., same number of Chop Chop servers as Narwhal workers, or match total resources, e.g., same number of servers and brokers in Chop Chop as workers in Narwhal. Intuitively, the first approach considers untrusted resources to be free while the second considers untrusted resources to be as costly as trusted resources. We use the first approach in §§6.3 to 6.5, 6.7 and 6.8 to stress Chop Chop, provisioning the system with enough brokers to bottleneck servers. We use the second approach in §6.6 to assess how efficiently Chop Chop uses its hardware resources, trusted or not.

**Load clients and load brokers.** We show in §6.3 that Chop Chop servers handle up to 43.6 million operations per second with an average latency of 3.6 seconds. To produce this level of workload, a real-world deployment would require over 700 brokers, each handling around 200,000 clients broadcasting back-to-back, thus totaling hundreds of millions of machines. As we cannot experiment at such a scale, we introduce two new actors: _load clients_ and _load brokers_. (Throughout the remainder of this section, "brokers" and "clients" denote real brokers and real clients; the term "load" is always used explicitly.)

Load clients connect to brokers and simulate thousands of concurrent client requests. Most system evaluations typically use this approach to stress the system and measure latency. However, we explicitly separate clients from load clients in this evaluation. Clients run on very small machines--less powerful than most smartphones--to provide more accurate end-to-end latency measurements. We similarly split clients from load clients in all baseline runs.

Load brokers are unique to Chop Chop. Even using load clients, we could not deploy enough brokers to bottleneck Chop Chop's servers. Load brokers work around this limitation, submitting batches of pre-generated messages directly to the servers. Free from interactions with clients and expensive cryptography, a load broker puts on the servers a load equivalent to that of tens of brokers working at full capacity. Using load clients and load brokers, we manage to show that brokers can quickly generate large batches of messages, and servers can process large numbers of batches.

**Cross-cloud deployment.** All servers are deployed on AWS, balanced across 14 regions: Cape Town, Sao Paulo, Bahrain, Canada, Frankfurt, Northern Virginia, Northern California, Stockholm, Ohio, Milan, Oregon, Ireland, London, and Paris. For system sizes of 8 in §6.5, we distribute servers across the first 8 regions from the list, which constitutes the most adversarial setup with the highest pairwise latency.
Load brokers are placed in a separate cloud provider, OVH, for two purposes. First, it provides a better representation of Internet load than a single-cloud deployment. AWS operates under its own AS, so any AS peering bottlenecks would be bypassed by an AWS-only deployment. Second, OVH is one of the few cloud providers with enough peering with AWS to stress Chop Chop without charging for egress bandwidth, saving us from using AWS' costly bandwidth. The final cost amounted to 25,000 USD in AWS credits. Using OVH saved us more than 70,000 USD, since each of Chop Chop's data points on a figure would have cost 1,700 USD in AWS egress bandwidth--21 TB at 0.08 USD per GB \(\approx\) 1,700 USD.

Figure 6: **Cross-cloud deployment summary.**

For all experiments, we deploy one broker in each continent (Cape Town, Sao Paulo, Tokyo, Sydney, Frankfurt, and Northern Virginia) and one client in each of the 14 regions above, plus Tokyo and Sydney. Clients connect to their nearest broker. We configure the network for geo-distribution and high load, e.g., TCP buffer sizes [35] and UDP parameters. All baselines run on the same parameters. For Narwhal-Bullshark, we collocate each server with one of the workers in its server group. We reproduced Narwhal-Bullshark's original experiments [67] and matched the results.

**Hardware.** All servers, brokers and load clients run on c6i.8xlarge machines with an Intel Xeon Platinum 8375C (32 virtual CPUs, 16 physical cores, 2.9 GHz baseline, 3.5 GHz turbo), 64 GB of memory and 12.5 Gb/s of bandwidth. We selected these machines since they provide good performance and are in the same "commodity" price range as those chosen initially for Chop Chop's main baseline: Narwhal-Bullshark. Clients run on t3.small machines: 2 vCPUs, 1 physical core, 2 GB of memory, and up to 5 Gb/s bandwidth--of which they use less than 1 KB/s. All machines run Linux Ubuntu 20.04 LTS on the AWS patched version of the Linux kernel 5.15.0, except for the load brokers on OVH which run on Linux kernel 5.4.0--the same kernel was not available.

**Challenges.** The most significant evaluation challenges arose from the scale of the targeted deployment. The setup and orchestration alone required simultaneous handling of up to 320 machines across two different cloud providers and 25 regions, as well as transferring 13 TB of files--mostly public keys and pre-generated batches--for each of the 12 setups. To handle this, we developed a new command-line tool to efficiently deploy distributed systems: silk. Among other things, we use silk for peer-to-peer-style file transfer over aggregated TCP connections, and for grouped process control. With silk, transferring all files from a single machine takes around 30 minutes, compared to 68 hours with scp.

Additional challenges came from the real-world nature of the targeted deployment. First, the connection between OVH and AWS's Asia and Pacific regions was particularly unstable at certain times of day, especially when close to saturation. For example, Tokyo's connection was frequently degraded between 3pm and 5pm UTC. Second, the performance of some machines sometimes deviated from their specifications. As an example, in a setup size of 64, we observed around 2 machines operating with a 10% lower CPU turbo clock rate than specified. Considering these variations, we increased the number of servers a broker initially asks for witness shards (see §4.3) by a margin, e.g., \(f+5\) instead of \(f+1\).
This improves system stability--i.e., lower latency variability--while slightly reducing maximum throughput. Unless otherwise specified, we set the margin to 4 in all experiments, i.e., \(f+5\).

**Plots.** Every data point is the mean of 5 runs of 2 minutes each (after excluding warmup and cooldown, the relevant cross-section is at least 1 minute). All plots further depict one standard deviation from the mean using either colored shaded areas or black error bars (which may be too small to notice).

### RQ1 - Load Handling

Fig. 7 shows the latency and throughput of Chop Chop and all its baselines for various input rates of 8 B messages. The variability is represented using shaded areas.

**Baselines.** Both BFT-SMaRt and HotStuff showcase stable performance under low loads, respectively achieving around 1,400 and 1,600 operations per second. BFT-SMaRt's latency is consistently better than HotStuff's up to its inflection point (0.45-0.53 s vs. 1.2-1.6 s). We measure up to 3.8M op/s for Narwhal-Bullshark and up to 382k op/s for Narwhal-Bullshark-sig. The difference in respective throughput highlights the cost of authentication for servers: verifying signatures reduces the throughput of Narwhal-Bullshark by one order of magnitude. We observe a latency of around 3.6 s for both Narwhal-Bullshark and Narwhal-Bullshark-sig.

**Chop Chop.** Chop Chop achieves close to 44M op/s while running on top of both HotStuff and BFT-SMaRt. Chop Chop's latency range is 3.0-3.6 s with BFT-SMaRt and 5.8-6.5 s with HotStuff. Notably, the latency of Chop Chop-HotStuff decreases under high load. This is due to the internal batching mechanism of the HotStuff implementation: buffers fill faster under higher load, thus avoiding timeouts. This has an immediate impact on Chop Chop, which feeds HotStuff at a low rate: HotStuff alone accounts for over 60% of Chop Chop-HotStuff's overall latency. BFT-SMaRt makes a better fit for Chop Chop, as its throughput is sufficient for Chop Chop's needs, and its latency is lower than HotStuff's.

**Mempools' trade-off.** In comparison to BFT-SMaRt and HotStuff, Chop Chop trades latency in favor of throughput. This trade-off is mostly explained by batching and distillation. When assembling a batch, a broker has to wait twice: once to collect enough messages to fill a batch, and once to collect all multi-signatures from clients engaging in distillation. We set both waits' timeout to 1 second. Notably, Narwhal-Bullshark seems to incur a similar latency cost, as Chop Chop's latency approximately matches that of Narwhal-Bullshark, even though Chop Chop needs an extra round trip between clients and broker (Fig. 5, #4-#6).

### RQ2 - Distillation Benefits

We showcase the benefits of distillation by: evaluating throughput with and without distillation, evaluating distillation for messages of different sizes, and observing the impact of distillation on network bandwidth to achieve line rate.

**Distillation vs. mitigations.** Along with distillation, Chop Chop makes use of two techniques available in the literature to mitigate the cost of Atomic Broadcast's authentication: short identifiers and pooled signature verification (see §2.2). Fig. 8a breaks down Chop Chop's throughput, measuring how significantly distillation alone contributes to Chop Chop's performance. When no message is distilled, Chop Chop's servers bottleneck at 1.5M op/s, 3.9\(\times\) higher than Narwhal-Bullshark-sig.
This result is in line with both systems bottlenecking on server CPU, as the technique employed by Chop Chop to mitigate authentication complexity has only one third of the servers verify each client signature. (We conjecture that the additional 1.3\(\times\) factor may be owed to engineering differences.) When batches are fully distilled, Chop Chop's throughput grows to 44M op/s, accounting for the additional 29-fold boost to Chop Chop's performance.

**Distillation for larger messages.** Fig. 8b illustrates Chop Chop's maximum throughput for message sizes of 8 B to 512 B, which may be relevant to applications that cannot work around smaller message sizes, e.g., many smart contracts. Chop Chop's throughput is similar with BFT-SMaRt and HotStuff, decreasing at an approximately 1-to-1 ratio as the message size increases: 44.3M op/s for 8 B, 17.6M op/s for 32 B, 3.5M op/s for 128 B and 890k op/s for 512 B. This is in line with expectations. As we discuss in §3.2, a server should receive \(\sim b\) bytes in order to deliver a \(b\)-byte message in a large, fully distilled batch, as full distillation amortizes to zero the communication cost of authenticating and sequencing each message. For 8 B messages, servers encounter a CPU bottleneck slightly before the link between load brokers and servers is saturated. This explains why the throughput decreases only 2.52\(\times\) when messages grow to 32 B: all remaining server-bound bandwidth is used to convey messages (as messages are larger) while the load on server CPUs is reduced (as fewer messages are delivered overall). The system remains communication-bottlenecked as the size of the messages increases, and throughput starts decreasing linearly with message size, e.g., Chop Chop's throughput for 512 B messages is 4.00\(\times\) smaller than for 128 B. By contrast, Narwhal-Bullshark-sig bottlenecks on server CPUs longer, due to signature verification, maintaining a stable throughput until 512 B messages finally fill server links. Overall, Narwhal-Bullshark-sig's throughput only decreases from 382k op/s for 8 B messages to 142k op/s for 512 B messages, which matches their non-authenticated evaluation with 512 B messages. The gap between Chop Chop and Narwhal-Bullshark-sig at 512 B messages can be mostly attributed to Chop Chop's more efficient use of server bandwidth: unlike Narwhal, Chop Chop offloads the dissemination of batches to external brokers. Narwhal's use of worker-to-worker communication in its common path also makes it more prone to be affected by AWS's various upload limitations, e.g., AWS upload bandwidth is half the stated download bandwidth, and there are network credit limits for "burst" uploading.
**Line rate.** Fig. 9 illustrates Chop Chop's near line-rate network use by depicting its input, network and output rates:

* Input rate measures the total bytes of useful information--i.e., client identifiers and messages--that clients, load clients and load brokers all broadcast per time unit;
* Network rate measures the ingress bandwidth of servers at their network interface, i.e., useful information captured by the input rate as well as the Atomic Broadcast's overhead for ordering, authentication and deduplication;
* Output rate, or "goodput", measures the total bytes of useful information that each server delivers per time unit.

Figure 7: **Throughput-latency of Chop Chop and of notable Atomic Broadcast systems under various input rates.**

Figure 8: **Throughput of Chop Chop and authenticated Narwhal with Bullshark (log scale) when (a) Chop Chop has no distillation and with (b) varying message size.**

Figure 9: **Throughput efficiency of authenticated Narwhal with Bullshark (left, log scale) and Chop Chop with BFT-SMaRt (right, linear scale) with various input rates.**

A system with perfect line rate would match all three rates: input rate would match output rate as messages can be delivered in a timely fashion with no backlogging, and output rate would match network rate as a server would only receive useful information, with no overhead due to Atomic Broadcast. The gray-shaded areas in Fig. 9 highlight this overhead, i.e., the difference between network and input rates. Network and output rates are averaged over all servers. In this experiment, each of the 257M simulated clients broadcast 8 B messages. This results in 11.5 B of useful information per broadcast, as 28 bits = 3.5 B are sufficient to represent every identifier. This conversion is captured by the dotted line which converts the input rate from op/s, represented on the x-axis, to B/s, represented on the y-axis.

For authenticated Narwhal-Bullshark, the output rate closely matches the input rate until signature verification becomes the bottleneck at 378k op/s, shown by the plateauing output rate. The gap between Narwhal-Bullshark-sig's network and input rates is evident, differing by one order of magnitude (notably in line with our back-of-the-envelope calculation in §3.2). In contrast, thanks to distillation, Chop Chop practically achieves line rate up to its maximum throughput. Before its inflection point at 40M op/s, the overhead of Chop Chop is less than 8%. The drop in output and network rates at 60M op/s is due to servers surpassing their computational capacity: broadcasts stall, server witness verification gets backlogged and brokers, suspecting server faults, ask for more batch witnesses, further stressing servers' CPUs.

### RQ3 - Number of Servers

Fig. 10a illustrates the maximum throughput for systems of 8 (\(f=2\)), 16 (\(f=5\)), 32 (\(f=10\)) and 64 (\(f=21\)) servers. For Chop Chop, we adjust the witnessing margin as the system grows by 0, 1, 2, and 4 for 8, 16, 32 and 64 servers respectively (see §6.2). Both Chop Chop and authenticated Narwhal-Bullshark scale well to 64 servers. Note that, unless trust assumptions are modified, Narwhal-Bullshark-sig only scales vertically: if a Narwhal server or any of its workers are faulty, the entire server group is compromised. Chop Chop, instead, scales horizontally with the number of brokers.

### RQ4 - Overall Efficiency

The center cluster of bars in Fig. 10b compares Chop Chop's throughput with that of authenticated Narwhal-Bullshark when overall hardware resources are matched.
In this setting, both systems have 128 machines at their disposal. Chop Chop is provided with 64 servers, 64 brokers and 0 load brokers. Since a load broker uses pre-generated synthetic data to simulate tens of brokers (see §6.2), involving load brokers in this experiment would give an unfair advantage to Chop Chop. Narwhal-Bullshark-sig is provided with 128 workers, to match Chop Chop's total machines, balanced across 64 server groups, to match Chop Chop's servers. The left and right clusters of bars depict Chop Chop using load brokers and Narwhal-Bullshark-sig with 64 server groups containing 1 worker each, respectively, as in the other experiments. We observe 4.6M op/s for Chop Chop, with servers reporting around 5% CPU usage. We observe 679k op/s for Narwhal-Bullshark-sig. Chop Chop's higher throughput is in line with expectations. In Narwhal-Bullshark-sig, workers are trusted, and as such a worker can only contribute to its own server group. Instead, since Chop Chop brokers are untrusted, a broker's work is useful to all servers.

### RQ5 - Chop Chop Under Failures

Fig. 11a depicts Chop Chop's throughput when some servers crash 30 seconds into the run. Performance drops marginally (from 44M op/s to 43M op/s) with one crash and by 66% (down to 15M op/s) when one-third of the servers crash, resulting in less CPU globally available to witness batches.

Figure 10: **Throughput of Chop Chop and authenticated Narwhal with Bullshark (log scale) when (a) varying system size, and when (b) varying the number of overall machines ("m") with 64 servers ("s").** Load brokers in Chop Chop simulate tens of brokers, hence are noted "\(\infty\) m".

Figure 11: **Throughput of Chop Chop (log scale) with (a) various server failures and for (b) different applications.**

Fig. 8a captures Chop Chop's performance hit when clients fail to engage in distillation. This could be caused by clients being slow or crashed, or brokers being malicious. Under the most extreme conditions, where no client engages in distillation, the throughput drops from 44M op/s to 1.5M op/s.

### RQ6 - Application Use Cases

Fig. 11b depicts the maximal stable throughput for various application use cases. In the Auction app, a client can bid an amount on a token it does not own, or take the highest offer it received for an item it owns. The highest amount bid on each token is locked and cannot be used to bid elsewhere. The money bid is transferred when the owner of the token takes the offer, or refunded when the bid is raised by another client. The Auction app is single-threaded, and many clients bid on the same token to approximate a real auction. In the Payments app, clients choose a recipient and an amount to transfer. In Pixel war, clients choose a pixel and an RGB color to paint on a 2,048 by 2,048 board. Operations are generated at random. We observe 2.3M op/s for the Auction, 32M op/s for Payments and 35M op/s for Pixel war. The bottleneck is the application in all cases, thus Chop Chop has sufficient capacity for high, single-application throughput. Chop Chop can also support many separate high-throughput applications simultaneously, making it a fitting Atomic Broadcast candidate to power a universal SMR system (or "Internet computer").

## 7 Related Work

We overview below the state-of-the-art most relevant to Chop Chop, namely high-throughput Atomic Broadcast systems and efficient signature aggregation schemes.
**High-throughput Atomic Broadcast.** Narwhal [24] is a mempool protocol that separates the reliable distribution of payloads from the communication-expensive ordering in order to accelerate DAG-based Atomic Broadcast [67, 31, 40]. Narwhal utilizes trusted workers to increase throughput, while Chop Chop relies on _trustless_ brokers, for the same effect, and scales out more efficiently. To circumvent the bottleneck associated with the broadcast leader, approaches using multiple leaders have been developed--both for crash [59, 29] and arbitrary [68, 69, 3, 6] faults--to scale the broadcast throughput linearly with the number of leaders. Dissemination trees [61, 42] have also been employed to reduce communication cost and maximize network bandwidth utility, while sharded [74, 44] and federated [52] approaches reduce communication cost by promoting local communication in geo-distributed setups. In comparison, Chop Chop shows that an optimal distillation mechanism for batches achieves better performance without adding complexity to the Atomic Broadcast protocol itself. Other approaches have shown that the underlying hardware of servers can also be exploited for higher throughput, such as FPGAs [39, 36] and Intel SGX enclaves [7]. In comparison, Chop Chop uniquely boosts throughput by exploiting _trustless hardware_ via brokers. Atomic Broadcast can also be accelerated in data centers by using the topology of the network [49, 62] or even by running within the network itself using P4-programmable switches [43, 25]. In such low-latency environments, the processing overhead incurred by the operating system kernel can be bypassed to further increase the throughput of Atomic Broadcast [1, 43, 73].

**Signature aggregation.** Aggregate signatures were first proposed to save space by compacting a large number of signatures into just one [11, 65]. Up until recently, aggregation could also save verification time but only in certain cases: either when the signatures are generated by the same signer [16, §5.1], or when the signatures are on the same message, i.e., multi-signatures [37]. In the latter case, aggregation mechanisms have been proposed to achieve constant-time verification of aggregated multi-signatures for both BLS [10] and Schnorr [55] signature schemes. In particular, multi-signatures are used in cryptocurrencies to have many servers sign the same batch of payloads [42, 28]. Servers in Chop Chop use rapidly-verifiable BLS multi-signatures [10] for that very purpose. In addition to aggregating server signatures on batches, Chop Chop's distillation mechanism also aggregates all client signatures in a batch in a way that provides constant-time verification. The theoretical scheme Draft [15] proposed signature aggregation with similar verification performance but is tailored to Reliable Broadcast. It is however unclear how Draft could be implemented as a real-world system without compromising liveness. Indeed, Draft assumes infinite memory to prevent message replay attacks, which would rapidly exhaust servers' memory if Draft were deployed to match Chop Chop's target throughput in our evaluation (see §6.2). Chop Chop also aggregates client sequence numbers to significantly reduce bandwidth consumption when small messages are broadcast (Fig. 2). Aggregating sequence numbers is made possible thanks to the ordering of Atomic Broadcast and thanks to novel legitimacy proofs (see §4.2).

## 8 Concluding Remarks

Chop Chop's performance comes with two limitations.
First, Chop Chop's high throughput makes memory management a challenge: servers fill their memory quickly if unable to garbage-collect under heavy load. Second, all servers in Chop Chop are known at startup, and it is unclear whether its performance would be maintained when deployed on thousands of servers. Interesting avenues of future research include sharding to achieve even higher throughput by running multiple, independent, coordinated instances of Chop Chop, and offloading more tasks to the brokers, such as public key aggregation.

## Acknowledgments

This work has been supported in part by AWS Cloud Credit for Research, the Hasler Foundation (#21084), and Innosuisse (46752.1 IP-ICT).
2306.01289
nnMobileNet: Rethinking CNN for Retinopathy Research
Over the past few decades, convolutional neural networks (CNNs) have been at the forefront of the detection and tracking of various retinal diseases (RD). Despite their success, the emergence of vision transformers (ViT) in the 2020s has shifted the trajectory of RD model development. The leading-edge performance of ViT-based models in RD can be largely credited to their scalability: their ability to improve as more parameters are added. As a result, ViT-based models tend to outshine traditional CNNs in RD applications, albeit at the cost of increased data and computational demands. ViTs also differ from CNNs in their approach to processing images, working with patches rather than local regions, which can complicate the precise localization of small, variably presented lesions in RD. In our study, we revisited and updated the architecture of a CNN model, specifically MobileNet, to enhance its utility in RD diagnostics. We found that an optimized MobileNet, through selective modifications, can surpass ViT-based models in various RD benchmarks, including diabetic retinopathy grading, detection of multiple fundus diseases, and classification of diabetic macular edema. The code is available at https://github.com/Retinal-Research/NN-MOBILENET
Wenhui Zhu, Peijie Qiu, Xiwen Chen, Xin Li, Natasha Lepore, Oana M. Dumitrascu, Yalin Wang
2023-06-02T06:15:36Z
http://arxiv.org/abs/2306.01289v4
# nnMobile-Net: Rethinking CNN Design for Deep Learning-Based Retinopathy Research

###### Abstract

Retinal diseases (RD) are the leading cause of severe vision loss or blindness. Deep learning-based automated tools play an indispensable role in assisting clinicians in diagnosing and monitoring RD in modern medicine. Recently, an increasing number of works in this field have taken advantage of Vision Transformers to achieve state-of-the-art performance with more parameters and higher model complexity compared to Convolutional Neural Networks (CNNs). Such sophisticated and task-specific model designs, however, are prone to overfitting and hinder their generalizability. In this work, we argue that a channel-aware and well-calibrated CNN model may overcome these problems. To this end, we empirically studied CNN's macro and micro designs and its training strategies. Based on the investigation, we proposed a no-new-MobileNet (nn-MobileNet) developed for retinal diseases. In our experiments, our generic, simple and efficient model surpassed most current state-of-the-art methods on four public datasets for multiple tasks, including diabetic retinopathy grading, fundus multi-disease detection, and diabetic macular edema classification. Our work may provide novel insights into deep learning architecture design and advance retinopathy research.

Retinal Diseases \(\cdot\) Fundus image \(\cdot\) Classification \(\cdot\) CNN \(\cdot\) Diabetic Retinopathy grading \(\cdot\) Training tricks \(\cdot\) MobileNet \(\cdot\) Benchmark

## 1 Introduction

Retinal diseases (RD), including diabetic retinopathy (DR), age-related macular degeneration, inherited retinal conditions, and retinopathy of prematurity, are leading causes of blindness globally [1]. Automated RD diagnosis frameworks are crucial in modern medicine to guide the proper treatment of patients. In the past decade, deep learning has achieved state-of-the-art performance in automating the diagnosis of various RD. Following that trend, convolutional neural networks (CNNs) [2; 3; 4; 5; 6; 7; 8] dominated the early stages of development. Many studies have incorporated prior knowledge of the retinal lesion or the clinician-provided diagnosis into CNNs. Zoom-in-Net [3] took a biomimetic approach that used image magnification to locate lesions in the diagnosis of RD. Zhou et al. proposed a semi-supervised learning framework [4], which coordinated lesion segmentation and classification tasks by feature integration. CANet [2] integrated two attention modules to jointly generate disease-specific and disease-dependent features for grading DR and diabetic macular edema (DME). Che et al. [7] achieved good performance via robust disentangled features of DR/DME. While these methods have demonstrated promising results, their complex and task-specific model designs were prone to overfitting and required specific datasets (e.g., multi-task datasets). Vision Transformers (ViT) have recently gained much attention in various visual tasks by leveraging the self-attention mechanism to capture long-term feature dependencies. Along this direction, MIL-VT [9] proposed using multiple-instance pooling to aggregate the features extracted by a ViT. Sun et al. [10] proposed a lesion-aware transformer (LAT) to learn the diabetic lesion-specific features via a cross-attention mechanism.
To reduce the model complexity of transformer-based methods, Jiang et al. [11] proposed an efficient transformer design (SatFormer) by taking advantage of an efficient abnormality-aware attention mechanism and a saliency enhancement module for DR grading. Although those methods achieved state-of-the-art performance, most of them heavily relied on pretraining on large-scale datasets due to the data-hungry nature of ViT, whose complexity grows quadratically with the input size. In addition, RD features are localized in nature, which is challenging for pure transformer-based feature extractors that focus more on global representations. To mitigate this issue, recent ViT advances [12; 13] have converged toward CNNs by bringing back convolutional operations. This motivated us to rethink the role of CNN in designing RD diagnostic models.

In this work, we revisited the issue of overfitting and the frequently overlooked channel-wise information in RD classification tasks. For example, the radiation physics of fundus images has been studied by Preece and Delori et al. [14; 15]. Light entering the inner eye is reflected, absorbed, and transmitted depending on tissue properties and pigment concentration. As a result, primary light colors, such as red, blue, and green, behave differently in fundus images. We hypothesized that radiation physics-aware channel-wise designs might benefit RD research. Our study also explored other key components of CNN architecture and its training strategies, including activation functions, spatial dropout, data augmentations, and optimizers. Based on our investigation, we proposed a fine-tuned lightweight CNN equipped with channel information and studied its performance on a variety of retinal image datasets.

The contribution of this paper can be summarized into four aspects. Firstly, we provided a new perspective on applying deep learning to fundus images. The high sensitivity of color fundus images to the channel was considered prior knowledge that can be incorporated into the model design. Secondly, we addressed a common overfitting issue in fundus image classification tasks by introducing dropout modules based on channel-wise information and combining them with heavy data augmentation. In particular, our study presented a counterintuitive result that heavy data augmentation helped improve performance, contrary to the prevailing belief that heavy data augmentation disrupts the structure of medical images. Thirdly, based on our investigation, we proposed a series of modifications to MobileNetv2 [16] that demonstrated remarkable improvements in the classification of retinal diseases. Our method did not rely on complex multitasking but instead inherited MobileNet's simple and efficient characteristics. Fourthly, our extensive experiments demonstrated that our work outperformed most state-of-the-art methods for various retinal disease tasks on multiple benchmark datasets. Overall, our study did not propose a new network structure but rather fine-tuned an existing model based on the physical model and overfitting-reduction schemes. We hope the new results presented in this study will provide novel perspectives on CNN design and encourage rethinking the importance of data essence and overfitting.

## 2 Methods

Guided by the hypothesis that channel-wise information plays a crucial role in improving diagnostic performance for RD, our CNN architecture, built upon MobileNetv2 [16], used depthwise convolution and channel attention in each residual building block.
We also empirically investigated optimizing the number of channels and incorporating Dropout in each residual building block to enhance generalizability and prevent overfitting. To further address the overfitting problem, we adopted a heavy data-augmentation approach. Additionally, we explored the activation function and optimizer to further improve the performance.

### Network Design

**Inverted linear residual block (ILRB):** The original residual building block followed a trend of sequentially widening, narrowing, and finally widening the number of channels. Many modern designs of CNNs [17; 18; 16], however, inverted this order with a narrow-to-wide-to-narrow channel configuration by taking advantage of depthwise convolution, namely the inverted residual block (IRB). By further discarding the activation function after the last convolutional layer to prevent information loss during the rectified linear unit (ReLU) activation, the IRB turns into an inverted linear residual block (ILRB). The depthwise convolution performs a separate 2D convolution on a per-channel basis and then weights the feature map of each channel by a \(1\times 1\) convolution to aggregate the channel dimension, which resembles the Convolutional Block Attention Module in [19]. To further capture the channel-wise information, the Squeeze-and-Excitation block in [20] was added to each residual block. We kept the same stem setting (i.e., channel configuration and the number of filters) of each ILRB as that in [17], with an expansion rate of 1 for the first ILRB and 6 otherwise. The expansion rate denotes the ratio of channel dimensions between the hidden layers and the input to each ILRB module. Fig. 1 shows the detailed structure of the ILRB module. The effectiveness of this module was also demonstrated in subsequent ablation experiments.

**Activation Function (AF):** Recently, Han et al. [17] uncovered the effect of complicated nonlinear activation functions (e.g., ELU or SiLU) in visual tasks by leveraging the fact that the matrix rank of the output feature can estimate the expressiveness of a layer. Although surprising performance has been achieved when applied to natural images, where the region of interest is always well-defined, such functions are likely to be problematic when directly translated to retinal fundus images, where tiny and hardly distinguishable lesions (e.g., microaneurysms) are of most interest. Inspired by this observation, we conducted empirical studies of the impact of different activation functions (i.e., SiLU, ReLU, PReLU, ReLU6) on RD tasks. As shown in Fig. 2(B), the ReLU6 activation was the best among all options.

**Dropout (D):** Most RD diagnostic models suffer from the issue of overfitting, mainly due to the heterogeneous appearance of pathological biomarkers in terms of size, shape, and location. For example, DR diagnostic models always struggle to classify microaneurysms. Dropout is widely used to mitigate overfitting and improve generalizability, but where and how to place Dropout remains an open question. In this study, we tried to answer this question by investigating two common dropout modes and their positions in the network. The first one was regular random dropout, which randomly zeroes out entries in the feature map following a Bernoulli distribution. The second mode was spatial dropout [24] (channel-wise dropout), which randomly zeroes out entire channels in the feature map and matched our previous hypothesis on channel-wise information.

Figure 1: Inverted linear residual block (ILRB) architecture design.
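The following is a hedged PyTorch sketch of the ILRB in Fig. 1. The exact channel configuration follows [17] and the precise dropout location follows Fig. 1; here we assume dropout location 3 sits after the projection, and the squeeze-and-excitation reduction ratio is illustrative:

```python
import torch.nn as nn

class ILRB(nn.Module):
    """Sketch of the inverted linear residual block: 1x1 expand -> depthwise
    3x3 -> squeeze-and-excitation -> 1x1 project (linear, no activation),
    with channel-wise (spatial) dropout."""

    def __init__(self, c_in, c_out, expansion=6, stride=1, p_drop=0.2):
        super().__init__()
        hidden = c_in * expansion
        self.use_residual = stride == 1 and c_in == c_out
        self.expand = nn.Sequential(
            nn.Conv2d(c_in, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True))
        self.depthwise = nn.Sequential(  # per-channel 3x3 convolution
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True))
        self.se = nn.Sequential(         # squeeze-and-excitation [20]
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(hidden, hidden // 4, 1), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden // 4, hidden, 1), nn.Sigmoid())
        self.project = nn.Sequential(    # linear bottleneck: no activation
            nn.Conv2d(hidden, c_out, 1, bias=False), nn.BatchNorm2d(c_out))
        self.drop = nn.Dropout2d(p_drop) # spatial dropout, assumed location 3

    def forward(self, x):
        y = self.depthwise(self.expand(x))
        y = y * self.se(y)               # channel attention re-weighting
        y = self.drop(self.project(y))
        return x + y if self.use_residual else y
```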
Figure 2: Empirical parameter study results based on the Messidor-2 dataset [21]. Subpanel pictures (a), (b), (c), and (d) represent different experimental groups, each of which was independent of the others. We kept the other parameters consistent in each experiment, but this does not mean the other parameters were optimal. MC denotes Mixup [22] and CutMix [23], and SDropout-[x] denotes SpatialDropout placed at position x, as shown in Fig. 1.

We investigated three possible locations of Dropout placement in the ILRB (Fig. 1), where location 3 showed the best performance (Fig. 2(A)).

### Training Techniques

**Data Augmentations (DA):** Most previous work [10; 11; 2; 9] held the view that excessive data augmentation could potentially compromise the integrity of fundus data. Therefore, data augmentations including spatial transformations and brightness adjustments were always recommended for retinal fundus images. Nevertheless, these data augmentations could not eliminate overfitting in the RD tasks based on our empirical studies. Building upon the aforementioned observations, exploratory experiments were conducted to optimize the data augmentation strategies that better prevent overfitting. We presented three data augmentation combinations: (I) customized data augmentations from methods [10; 11]; (II) customized data augmentations from methods [10; 11] with Mixup [22] and CutMix [23]; (III) the official ImageNet data augmentation techniques [18] (e.g., RandAugment [25] and Random Erasing [26]) with Mixup and CutMix. Our empirical studies showed that the heaviest data augmentations (III) led to the best performance (Fig. 2(C)). Our software package is available at [https://github.com/Retinal-Research/NNMOBILE-NET](https://github.com/Retinal-Research/NNMOBILE-NET).

**Optimizer (O):** Recent empirical studies demonstrated that performance improvements were gained by training neural networks with more advanced optimizers (e.g., AdamW [27], AdamP [28]) that better adapt the step size. As shown in Fig. 2(D), our empirical studies showed that training the network with the AdamP optimizer significantly increased the performance.

## 3 Experiments and Results

An optimal set of network structures and training strategies is summarized in Section 2. We used cross-entropy loss for training all the models in this work. The initial learning rate was set to 0.001 and decayed according to a cosine decay learning rate scheduler with 20 epochs of linear warm-up. A weight decay rate of 0.05 was applied to prevent further overfitting. All experiments were performed on three GeForce RTX 3090 GPUs with a batch size of 32 and 1000 epochs. All models were implemented in PyTorch. Experimental results are summarized in Tables 1 to 5.
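For concreteness, the following is a hedged sketch of this training setup. The specific library calls (the `adamp` and `timm` packages) and the augmentation magnitudes are our assumptions, not values reported above:

```python
import torch
from torchvision.models import mobilenet_v2  # stand-in for nn-MobileNet
from adamp import AdamP                      # AdamP optimizer [28]
from timm.data import create_transform, Mixup
from timm.scheduler import CosineLRScheduler

model = mobilenet_v2(num_classes=5)          # 5 DR grades

optimizer = AdamP(model.parameters(), lr=1e-3, weight_decay=0.05)
scheduler = CosineLRScheduler(optimizer, t_initial=1000,   # 1000 epochs
                              warmup_t=20, warmup_lr_init=1e-5)

# Heavy augmentation (strategy III): RandAugment + Random Erasing ...
train_transform = create_transform(input_size=224, is_training=True,
                                   auto_augment="rand-m9-mstd0.5",
                                   re_prob=0.25)
# ... plus Mixup and CutMix applied per mini-batch (alphas are assumptions).
mixup_fn = Mixup(mixup_alpha=0.8, cutmix_alpha=1.0, num_classes=5)
# Note: Mixup yields soft targets; CrossEntropyLoss accepts class
# probabilities in torch >= 1.10 (or use timm's SoftTargetCrossEntropy).
criterion = torch.nn.CrossEntropyLoss()
```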
### Datasets and Evaluation Metrics

**Messidor-1 dataset** [21] contains 1200 fundus images with four DR grades. We conducted referral and normal DR classification on this dataset. In the referral and non-referral DR classification, Grades 0 and 1 are considered non-referable, while Grades 2 and 3 are considered referable DR (rDR). For normal and abnormal classification, only Grade 0 is labeled as normal, and the other grades are recognized as abnormal. We followed the experimental settings in [2] by using 10-fold cross-validation on the entire dataset. The area under the curve (AUC) was used as the evaluation metric.

**Messidor-2 dataset** [21] contains 1748 fundus images with five DR grades. As no official split of the training and testing dataset was provided, we used this dataset to conduct ablation studies to demonstrate the effectiveness of each component of our proposed method on DR grading, evaluated by the AUC and quadratic Cohen's kappa (Kappa).

**RFMiD dataset** [29] contains 1920 training, 640 validation, and 640 testing images with 45 different types of pathologies (central serous retinopathy, central retinal vein occlusion, asteroid hyalosis, etc.). Following the protocol in [11; 9], we performed normal and abnormal binary classification on this dataset, with performance measured by accuracy (ACC), AUC, and F1.

**APTOS dataset** [30] contains 3662 fundus images for DR grading, with severity on a grade of 0 to 4 (no DR, mild, moderate, severe, proliferative DR). Following the experimental setting of 5-fold cross-validation in [9], we evaluated the performance of DR grading in terms of ACC, AUC, weighted F1, and Kappa.

**IDRiD dataset** [31] contains 413 training and 103 testing images for both DR grading and DME severity grading tasks. We used the training and testing data provided by the official split. Unlike [2], which re-labeled DR grading into two categories, we trained on the multi-class DR grading task and reported the evaluation metrics of ACC, AUC, and F1. Both grading experiments followed the protocol in [7].

### Ablation Studies

To quantify the importance of each proposed module in Section 2, we performed a series of ablation studies on the DR grading task on the Messidor-2 dataset. Table 1 reports the results of all the ablation studies. We found that applying heavy data augmentations (DA) during training had the highest performance gain, increasing the AUC and Kappa by 3.06% and 2.71%, respectively. Meanwhile, the ILRB module also significantly improved the performance of DR grading, by 2.22% in the AUC and 1.53% in the Kappa. Besides, the ReLU6 activation function, spatial dropout, and AdamP optimizer together contributed to a performance gain of \(0.48\%\) in AUC and \(2.2\%\) in the Kappa. Even though there was only a minor increase in the AUC, the Kappa, which measures the grading accuracy, increased dramatically, especially after applying the spatial dropout (Table 1). With all proposed modifications, we drove the AUC from 91.19 to 96.52 (\(6.15\%\)) and the Kappa from 85.76 to 91.37 (\(6.5\%\)). The ablation results thus revealed the factors that led to significant performance improvements: the common overfitting-reduction schemes (data augmentation and spatial dropout) contributed the most, followed by the ILRB module guided by channel-wise information.

### Comparison to State-Of-The-Art Methods

**DR task performance.** We compared the proposed method to a variety of existing state-of-the-art (SOTA) methods on three DR datasets (i.e., Messidor-1, APTOS, and IDRiD). The proposed method achieved the best performance on the IDRiD dataset (AUC=91.6, ACC=73.1). Remarkably, the proposed method outperformed the best model (DETACH-DAW [7]) by \(8\%\) and \(23.5\%\) in AUC and ACC (Table 5), respectively, on the IDRiD dataset. We also found that the proposed method had the highest ACC and Kappa on the APTOS dataset, with an AUC similar to that of the best-performing model (Table 4). The proposed method achieved performance equal to the best model (LAT [10]) on the referral DR classification task and the best performance on the normal DR classification task (Table 2). It is worth noting that most works (i.e., MIL-VT [9], LAT [10], Zoom-in-Net [3], Semi + Adv [4], and CKML [33]) were pre-trained on large-scale external datasets, whereas the proposed method was trained from scratch on the same benchmark datasets.
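For reference, the quadratic Cohen's kappa used in these grading benchmarks can be computed as follows (a self-contained sketch, equivalent to sklearn's `cohen_kappa_score` with `weights="quadratic"`):

```python
import numpy as np

def quadratic_kappa(y_true, y_pred, n_classes=5):
    """Quadratic-weighted agreement between integer grades in [0, n_classes)."""
    O = np.zeros((n_classes, n_classes))        # observed grade-pair counts
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    W = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float)
    W /= (n_classes - 1) ** 2                   # quadratic penalty weights
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()  # chance agreement
    return 1.0 - (W * O).sum() / (W * E).sum()
```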
It is worth noting that most works (i.e., MIL-VT [9], LAT [10], Zoom-in-Net [3], Semi + Adv [4], and CKML [33]) were pre-trained on large-scale external datasets, whereas the proposed method was trained from scratch using only the benchmark datasets.

**Multi-disease abnormal detection performance.** We also conducted experiments and comparisons to current SOTA methods on the multi-disease detection task. The proposed method achieved the best performance in terms of ACC and AUC, while SatFormer-B [11] achieved the best performance in F1 (Table 3). However, our model (Param=34M) has fewer than half the parameters of SatFormer-B [11] (Param=78M). Even though the proposed model had a stem architecture similar to ReXNet, our heavy data augmentations and spatial dropout improved the ACC by \(3.4\%\) and the AUC by \(4.4\%\).

\begin{table}
\begin{tabular}{c c c c c|c c} \hline \hline ILRB & DA & D & O & AF & AUC & Kappa \\ \hline ✗ & ✗ & ✗ & ✗ & ✗ & 91.19 & 85.76 \\ ✓ & ✗ & ✗ & ✗ & ✗ & 93.22 & 87.06 \\ ✓ & ✓ & ✗ & ✗ & ✗ & 96.08 & 89.42 \\ ✓ & ✓ & ✓ & ✗ & ✗ & 96.14 & 90.58 \\ ✓ & ✓ & ✓ & ✓ & ✗ & 96.13 & 90.72 \\ ✓ & ✓ & ✓ & ✓ & ✓ & 96.52 & 91.37 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation studies on the DR grading task on the Messidor-2 dataset [21]. ResNet-50 [32] served as the baseline model in the first row.

\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Annotations & Referral & Normal \\ & & AUC & AUC \\ \hline VNXK [33] & - & 88.7 & 87.0 \\ CKML [33] & - & 89.1 & 86.2 \\ Comp. CAD [6] & - & 91.0 & 87.6 \\ Expert A [6] & - & 94.0 & 92.2 \\ Expert B [6] & - & 92.0 & 86.5 \\ Zoom-in-Net [3] & - & 95.7 & 92.1 \\ AFN [5] & patch & 96.8 & - \\ Semi + Adv [4] & pixel & 97.6 & 94.3 \\ CANet [2] & - & 96.3 & - \\ LAT [10] & - & 98.7 & 96.3 \\ \hline Ours & - & **98.7** & **97.5** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of rDR and normal classification on the Messidor-1 dataset [21]. Annotations denote whether pixel-level or patch-level supervision was applied.

**DME classification performance.** The DME classification task was evaluated on the IDRiD dataset following the protocol in [7]. Table 5 demonstrates that the proposed method surpassed the previously best-performing model (DETACH+DAW [7]) by \(17.3\%\) on F1, \(6.5\%\) on AUC, and \(4.8\%\) on ACC. Compared to other SOTA methods (i.e., CANet [2], Multi-task net [39], MTMR-net [40], and DETACH+DAW [7]) that were jointly trained on multiple tasks, our proposed model was trained from scratch on the DME task only.

## 4 Conclusion

In this paper, we provided novel insights into RD diagnostic model design informed by the channel-wise information in retinal fundus images. A generic and efficient CNN-based architecture design and its training strategies were proposed for general RD tasks. Our comprehensive experimental results demonstrated that a properly tuned CNN, rather than a model with a huge number of parameters and a complex structure, can compete favorably with current SOTA methods across multiple RD tasks on benchmark datasets.

Acknowledgement. This work was partially supported by grants from NIH (R21AG065942, R01EY032125, and R01DE030286), and the State of Arizona via the Arizona Alzheimer's Consortium.
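As a companion to the training recipe described in Section 3, the following is a minimal PyTorch sketch of one plausible implementation; it is not the released code. The backbone stand-in (torchvision's `mobilenet_v2` in place of the modified MobileNet of Section 2), the Mixup/CutMix strengths, and the `train_loader` object are our own assumptions, while the optimizer (AdamP), learning rate, weight decay, warm-up length, and cosine schedule follow the values reported above.

```python
import math
import torch
import torchvision
from adamp import AdamP                      # pip install adamp
from timm.data import Mixup
from timm.loss import SoftTargetCrossEntropy

EPOCHS, WARMUP_EPOCHS = 1000, 20

# Stand-in backbone; the paper's modified MobileNet (Section 2) would go here.
model = torchvision.models.mobilenet_v2(num_classes=5)

optimizer = AdamP(model.parameters(), lr=1e-3, weight_decay=0.05)

# 20 epochs of linear warm-up followed by cosine decay, per Section 3.
def lr_lambda(epoch):
    if epoch < WARMUP_EPOCHS:
        return (epoch + 1) / WARMUP_EPOCHS
    progress = (epoch - WARMUP_EPOCHS) / (EPOCHS - WARMUP_EPOCHS)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Mixup + CutMix produce soft targets, so a soft-target cross-entropy is used;
# the alpha values below are assumptions, not reported hyperparameters.
mixup_fn = Mixup(mixup_alpha=0.8, cutmix_alpha=1.0, num_classes=5)
criterion = SoftTargetCrossEntropy()

def train_one_epoch(train_loader):
    model.train()
    for images, labels in train_loader:      # train_loader: batch size 32
        images, targets = mixup_fn(images, labels)
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()

# for epoch in range(EPOCHS):
#     train_one_epoch(train_loader)
#     scheduler.step()
```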
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{DR Grading} \\ \cline{2-5} Method & AUC & ACC & F1 & Kappa \\ \hline DLI [37] & - & 82.5 & 80.3 & 89.0 \\ CANet [2] & - & 83.2 & 81.3 & 90.0 \\ GREEN-ResNet50 [38] & - & 84.4 & 83.6 & 90.8 \\ GREEN-SE-ResNet50 [38] & - & 85.7 & 85.2 & 91.2 \\ MIL-VT [9] & **97.9** & 85.5 & 85.3 & 92.0 \\ \hline Ours & 97.8 & **89.1** & **88.9** & **93.4** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison of DR grading on the APTOS dataset [30].

\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Normal/Abnormal} \\ \cline{2-5} Method & Param(M) & ACC & AUC & F1 \\ \hline CANet [2] & 29 & 88.3 & 91.0 & 90.4 \\ EffNet-B7 [34] & 66 & 88.2 & 91.0 & 90.7 \\ ReXNet [17] & 34 & 91.3 & 94.5 & 93.3 \\ CrossFormer-L [35] & 92 & 90.6 & 94.3 & 92.0 \\ Swin-L [36] & 197 & 89.5 & 93.8 & 91.6 \\ MIL-VT [9] & 98 & 91.1 & 95.9 & 94.4 \\ SatFormer-B [11] & 78 & 93.8 & 96.5 & **95.8** \\ \hline Ours & 34 & **94.4** & **98.7** & 94.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison of multi-disease abnormal detection on the RFMiD dataset [29]. Param denotes the number of parameters, indicating model complexity.

\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline & \multicolumn{3}{c|}{DME} & \multicolumn{3}{c}{DR} \\ \cline{2-7} Method & AUC & F1 & ACC & AUC & F1 & ACC \\ \hline CANet [2] & 87.9 & 66.1 & 78.6 & 78.9 & 42.3 & 57.3 \\ Multi-task net [39] & 86.1 & 60.3 & 74.8 & 78.0 & 43.9 & 59.2 \\ MTMR-net [40] & 84.2 & 61.1 & 79.6 & 79.7 & 45.3 & 60.2 \\ DETACH + DAW [7] & 89.5 & 72.3 & 82.5 & 84.8 & 49.4 & 59.2 \\ \hline Ours & **95.3** & **84.8** & **86.5** & **91.6** & **72.6** & **73.1** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance comparison of DR and DME grading on the IDRiD dataset [31].
2301.12534
Vicarious Offense and Noise Audit of Offensive Speech Classifiers: Unifying Human and Machine Disagreement on What is Offensive
Offensive speech detection is a key component of content moderation. However, what is offensive can be highly subjective. This paper investigates how machine and human moderators disagree on what is offensive when it comes to real-world social web political discourse. We show that (1) there is extensive disagreement among the moderators (humans and machines); and (2) human and large-language-model classifiers are unable to predict how other human raters will respond, based on their political leanings. For (1), we conduct a noise audit at an unprecedented scale that combines both machine and human responses. For (2), we introduce a first-of-its-kind dataset of vicarious offense. Our noise audit reveals that moderation outcomes vary wildly across different machine moderators. Our experiments with human moderators suggest that political leanings combined with sensitive issues affect both first-person and vicarious offense. The dataset is available through https://github.com/Homan-Lab/voiced.
Tharindu Cyril Weerasooriya, Sujan Dutta, Tharindu Ranasinghe, Marcos Zampieri, Christopher M. Homan, Ashiqur R. KhudaBukhsh
2023-01-29T20:39:21Z
http://arxiv.org/abs/2301.12534v4
# Vicarious Offense and _Noise Audit_ of Offensive Speech Classifiers

###### Abstract

This paper discusses and contains content that is offensive or disturbing. This paper examines social web content moderation from two key perspectives: automated methods (machine moderators) and human evaluators (human moderators). We conduct a _noise audit_ at an unprecedented scale using nine machine moderators trained on well-known offensive speech data sets, evaluated on a corpus sampled from 92 million YouTube comments discussing a multitude of issues relevant to US politics. We introduce a first-of-its-kind data set of _vicarious offense_. We ask annotators: (1) if they find a given social media post offensive; and (2) how offensive annotators sharing different political beliefs would find the same content. Our experiments with machine moderators reveal that moderation outcomes vary wildly across different machine moderators. Our experiments with human moderators suggest that (1) political leanings considerably affect the first-person offense perspective; (2) Republicans are the worst predictors of vicarious offense; (3) predicting vicarious offense for the Republicans is more challenging than predicting vicarious offense for the Independents and the Democrats; and (4) disagreement across political identity groups considerably increases when sensitive issues such as reproductive rights or gun control/rights are discussed. Both experiments suggest that offense is, indeed, highly subjective and raise important questions concerning content moderation practices.

Polarization Machine Translation US News Networks Human Annotation

## 1 Introduction

Offensive speech on web platforms is a persistent problem with wide-ranging impacts [1, 2]. Not only can it impact those directly targeted [3, 4, 5], but it can also create a climate of negative behavior, where such behavior becomes normalized. Offensive speech can have significant economic impacts, as the sponsors of platforms where offensive speech is particularly prevalent can become associated with the offending language and lose business, or withdraw their support from the platform on which it occurred [6]. There are many reasons why moderation fails to eliminate the problem. Among them is the reality that people often disagree on what is offensive [7, 8]. Disagreement can be caused by a lack of understanding of the reasons why something is offensive, or of how the offensive language can impact those targeted [9], or simply by the intrinsic subjectivity [10] of the phenomenon. This is reflected in wildly diverging annotations in publicly available datasets, which result in diverging performances of machine learning models trained to detect offensive speech. And yet, thus far, there have been only a few studies of the nature of this disagreement, from the perspective of either human or machine moderators [11]. Both camps of moderators are related, as human annotators play a critical role in training machine moderators. In this paper, we tackle the problem of disagreement in offensive speech classification from both a human and a machine perspective, and explore the relationship between the two. We focus on the particular problem of offensive speech in political discourse because it is a timely and important one. It is also one on which we expected to find a relatively strong and clear disagreement signal, as it is a topic that has become especially polarized in recent years [12, 13, 14, 15].
### Contributions and Novelties

Our contributions and novelties are the following:

\(\bullet\) _Noise Audit:_ While limited literature exists investigating the generalizability of offensive speech detection systems across data sets [16] and their vulnerability to adversarial attacks [17], unseen use cases [18], or geographic biases [19], to the best of our knowledge, no work exists on a comprehensive, in-the-wild evaluation of offensive speech filtering outcomes on large-scale, real-world political discussions with documented political dissonance. Much in the spirit of the celebrated book "Noise: A Flaw in Human Judgment" [20] by Kahneman et al., we conduct a comprehensive _noise audit_ of several well-known offensive speech classifiers on a massive political data set consisting of 92,242,547 comments on 241,112 videos hosted by the official YouTube handles of three prominent US cable news networks. Our data set spans the time period 2014 - 2022, which has seen two presidential elections, a raging pandemic, an ongoing war, and a far-from-peaceful transfer of power. A noise audit, as described in [20], is essentially an audit of outcome variability across multiple (competent) decision systems. [20] outline several scenarios, such as judicial sentencing or insurance settlements, that indicate widespread noise in diverse real-world settings. In this paper, we seek to understand how content moderation outcomes vary across different offensive speech classifiers (dubbed machine moderators) in political discourse at the granularity of individual social media posts. One key impediment to performing in-the-wild analysis of content moderation systems is a lack of ground truth. It is resource-intensive to annotate a representative data set of social media posts to test content moderation at scale. We bypass this requirement as we study disagreement among machine moderators on a large pool of unlabeled instances.

Figure 1: Illustrative examples highlighting nuanced inconsistencies between machine moderators and human moderators with different political leanings. For every comment, majority vote is used to aggregate individual machine and human moderators' verdicts. An angry emoji and a green checkbox indicate _offensive_ and _notOffensive_ labels, respectively. These real-world examples are drawn from comments on YouTube news videos of three major US cable news networks: Fox News, CNN, and MSNBC. Each example is annotated by 20 human moderators, with at least six Republicans, Democrats, and Independents. Nine well-known offensive speech data sets are used to create nine machine moderators. The grid summarizes vicarious offense, where annotators belonging to the row political identity are asked to predict the vicarious offense perspectives of the two other political identities mentioned in the columns.

\(\bullet\) _Vicarious Offense:_ We introduce the notion of vicarious offense. With the 2022 US midterm election being months away and the current US political climate showing no signs of reconciliation between the politically divergent left and the right, we ask a timely and important question: _how well does a Democratic-leaning user perceive what content would be deemed offensive by her Republican-leaning counterpart, or vice-versa?_ Documented evidence indicates that negative views towards the opposite political party have affected outcomes in settings as diverse as allocating scholarship funds [21], mate selection [22], and employment decisions [23].
Is it possible that such negative views also affect our ability to perceive what we dub _vicarious offense_? We conduct a detailed annotation study to understand the phenomenon of vicarious offense and release a first-of-its-kind data set where an annotator labels both (1) if she finds a social media post offensive or not, and (2) if she feels a person with different political beliefs would find the same social media post offensive or not. Unlike most studies on political biases, which proffer a binarized world-view of US politics, in our study, we consider all three key players in US democracy: the Democrats, the Republicans, and the Independents. In an era of growing political polarization in the US, where a sectarian _us vs them_ often dominates the political discourse [24], our study marks one of the early attempts to quantify how well the political opposites understand each other when asked to put themselves in their opposites' shoes. Figure 1 presents a few illustrative examples from this rich data set. Note that, while Republicans feel that the comment _Potentially criminal? It WAS fucking criminal and Trump and all associated with the insurrection should be in prison._ will invite equal ire from the Independents, the Independents actually do not find it offensive. Our data set reveals how little different political groups understand each other and presents startling findings when sensitive issues such as reproductive rights or gun control/rights are in the mix.

\(\bullet\) _Resource:_ We will release a data set of both first-person and vicarious offense2. Our data set consists of 2,310 YouTube comments drawn from comments on news videos of three major US cable news networks: Fox News, CNN, and MSNBC. Each example is annotated by 20 annotators, including at least six each of Republicans, Democrats, and Independents. We present all 20 labels with detailed demographic information (race, gender, age, political identity) of the individual annotators. In addition to presenting a standard offensive speech data set with fine-grained labels and rich annotator information, we present the first-of-its-kind dataset of vicarious offense introduced above.

Footnote 2: The dataset will be available soon on https://github.com/Homan-Lab/noise-audit-dataset/

\(\bullet\) _Social:_ Ensuring civil discourse while discussing politics remains a long-standing concern [1, 13, 25]. Over the last few years, both sides of the political aisle have accused web platforms of bias [26]. Our study presents critical insights into modern-day moderation challenges for political content and reveals that humans often overlook offensive style when the content aligns with their political views. Our focused analysis of sensitive issues such as reproductive rights and gun control/rights suggests that human political biases may pose significant challenges to moderating sensitive political content.

### Our Approach and Paper Road-map

Section 3 investigates how consistent machine moderators are when evaluated at scale. To this end, we evaluate nine classifiers (described in Section 2.1) trained on well-known offensive speech data sets on a political corpus (described in Section 2.2). Once we establish that moderation outcomes wildly vary across machine moderators in the wild, we turn our focus to human moderators.
We carefully select a data set for human annotation (described in Section 4) that contains instances that machine moderators (1) near-unanimously marked as not offensive; (2) near-unanimously marked as offensive; and (3) maximally disagreed on, with no clear consensus. We then summarize our annotator demographics (Section 5.1), conduct a thorough annotation study, and examine (1) the extent to which human moderators are aligned with machine moderators; and (2) how political identities affect moderation outcomes. Finally, we explore vicarious offense. We ask annotators to predict out-group labels, i.e., for a Republican annotator, we would ask how she thinks a Democrat and an Independent would label a given instance. We then analyze how well annotators belonging to different political identities _understand_ out-group perception of offense.

## 2 Design Considerations

### Machine Moderators

Research on automatic content moderation has focused on a broad taxonomy of inappropriate content that includes toxic, harmful, abusive, hateful, and offensive content. For calibration purposes, following [27], we transform the labels of instances present in eight well-known data sets into two broad labels: offensive and not offensive. The labels correspond to level A of the popular OLID taxonomy [28], widely used in offensive speech classification. The offensive class represents posts containing any form of non-acceptable language (e.g., profanity, swear words) or a targeted offense, including insults and threats. We investigate nine offensive language identification models. Of these, we trained eight models on the following offensive speech data sets: (1) AHSD [29]; (2) HASOC [30]; (3) HatEval [3]; (4) HateXplain [31]; (5) OLID [28]; (6) TRAC [32]; (7) OHS [33]; (8) TCC3. These data sets are well-known in the web-toxicity domain and consist of social media posts obtained from Twitter, Facebook, Gab, Reddit, and YouTube. We trained bert-large-cased models on each of the training sets in these datasets following a text classification objective. We employed a batch size of 16, the Adam optimizer with learning rate \(2\mathrm{e}{-5}\), and a linear learning rate warm-up over 10% of the training data. During the training process, the parameters of the bert-large-cased model, as well as the parameters of the subsequent layers, were updated. The models were evaluated during training using an evaluation set containing one-fifth of the rows of the training data. We performed early stopping if the evaluation loss did not improve over three evaluation steps. All the models were trained for three epochs. As the ninth and final model, we used the publicly available Detoxify [34], a roberta-base model trained on data from the Unintended Bias in Toxicity Classification challenge.

Footnote 3: Available at https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge
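One plausible way to reproduce the fine-tuning recipe above is sketched below using the HuggingFace transformers Trainer; this is our reconstruction, not the authors' code. The dataset objects `train_ds` and `eval_ds` are placeholders for any of the eight tokenized corpora with binary offensive/notOffensive labels, and note that Trainer defaults to AdamW rather than plain Adam.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer, EarlyStoppingCallback)

# train_ds / eval_ds: tokenized datasets with a binary "labels" column
# (placeholders; eval_ds holds one-fifth of the training rows).
tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-large-cased", num_labels=2)

args = TrainingArguments(
    output_dir="machine-moderator",
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,                 # linear warm-up over 10% of training
    num_train_epochs=3,
    evaluation_strategy="steps",
    save_strategy="steps",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
trainer = Trainer(
    model=model, args=args, tokenizer=tokenizer,
    train_dataset=train_ds, eval_dataset=eval_ds,
    # stop if eval loss fails to improve over three evaluation steps
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```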
### In-the-wild Evaluation Data Set

We consider a data set of more than 92 million comments on 241,112 news videos hosted between January 1, 2014 and August 27, 2022 by the official YouTube handles of three prominent US cable news networks: CNN, Fox News, and MSNBC. We select this data set for our noise audit for the following reasons. First, all three YouTube channels have millions of subscribers and substantial user engagement, indicating broad participation from a diverse audience. Second, these mainstream news networks cover a broad range of topics. This data set spans eight years that have seen a wide range of highly debated sociopolitical events, including a raging pandemic, two bitterly fought presidential elections, a global protest for racial justice and equality, the longest government shutdown, four Supreme Court confirmations, the overturning of Roe v. Wade, and a major war, to name a few. Hence, the YouTube comments data set is rich in topical diversity. Third, previous studies have reported substantial partisan and ideological divergence in both the content and the audience of these news networks [35, 36, 37, 38], and prior work indicates considerable political dissonance in similar data sets [14, 25]. Compared with the years 2014-2018, YouTube engagement has grown for these three YouTube channels [14], so uniform sampling would bias the data toward more recent years. In order to ensure that our data set is well-distributed over time, for each channel, we sample 200K comments from each of five bins: 2014-2018, 2019, 2020, 2021, and 2022. We club the first five years together into a single bin due to sparse user engagement in the initial years. Overall, our data set comprises three million randomly sampled comments, one million from each of the three YouTube channels, denoted by \(\mathcal{D}_{cnn}\), \(\mathcal{D}_{fox}\), and \(\mathcal{D}_{msnbc}\), respectively.

### Vicarious Offense

We break down our study of how political leanings affect perception of offense into two parts: (1) annotators answer questions relevant to direct, first-person perception of offense; (2) annotators answer questions about vicarious offense, a novel research direction never explored heretofore to our knowledge. The survey design is described in detail in Section 5.1. The first part of the annotation study resembles a standard annotation scheme grounded in existing literature [7, 39]. We first collect demographic information that includes race, gender, age, and political identity, and then present example YouTube comments and ask about first-person perception of offense. These questions are drawn from existing well-known annotation protocols for offensive speech (see Appendix for details) [28], where one or more first-person perceptions of offense are collected for a data point. Suppose the annotator is a registered Independent. For each instance, in the second part of the annotation study, we ask her two additional questions: (1) _do you think a Republican will find this post offensive?_; and (2) _do you think a Democrat will find this post offensive?_ For each annotator, we adjust these questions to reflect the two political identities that the annotator does not belong to. A registered Democrat will be asked about her vicarious offense perception for a Republican and an Independent, while a registered Republican will be asked about her vicarious offense perception for a Democrat and an Independent. Existing literature reveals that an annotator's gender, race, and political leanings may affect how offensive she finds a given piece of content [40, 41, 42, 43, 44]. However, to our knowledge, vicarious offense, i.e., an annotator's perspective on how someone else with a political belief different from the annotator's would feel about the said content, has never been explored heretofore.
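The adjustment of the vicarious questions to each annotator's identity can be summarized in a few lines; the following sketch mirrors the question wording quoted above, while the function and variable names are ours.

```python
IDENTITIES = ("Democrat", "Republican", "Independent")

def vicarious_questions(annotator_identity):
    """Return the two out-group questions shown to one annotator."""
    others = [p for p in IDENTITIES if p != annotator_identity]
    return [f"Do you think a {p} will find this post offensive?" for p in others]

# A registered Independent is asked about the other two identities:
print(vicarious_questions("Independent"))
# ['Do you think a Democrat will find this post offensive?',
#  'Do you think a Republican will find this post offensive?']
```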
Notice that, for any given social media post \(d\), we have nine vantage points of offense perception (see Figure 1): (1) how offensive a Republican finds \(d\); (2) how offensive a Democrat finds \(d\); (3) how offensive an Independent finds \(d\); (4) how offensive a Republican _thinks_ a Democrat would find \(d\); (5) how offensive a Republican _thinks_ an Independent would find \(d\); (6) how offensive a Democrat _thinks_ a Republican would find \(d\); (7) how offensive a Democrat _thinks_ an Independent would find \(d\); (8) how offensive an Independent _thinks_ a Republican would find \(d\); and (9) how offensive an Independent _thinks_ a Democrat would find \(d\).

The first three vantage points examine the first-person perspective of offense. Recent lines of work examining how political leanings affect perception of offense limit their analyses to the first two vantage points, primarily considering a binarized view of US politics [44]. However, Figure 2 demonstrates that the oversimplified political dichotomy most papers adhere to [13, 45] disregards the political majority - the Independents. The next six vantage points present insights into vicarious offense. Our study is the first of its kind to explore how well we can predict offense for others who do not share the same political beliefs. In the era of growing political polarization in the US, our study marks the first attempt to quantify how well the political opposites understand each other when asked to put themselves in _their_ shoes.

## 3 Noise Audit of Machine Moderators

Figure 3 summarizes the pairwise agreement results (Cohen's \(\kappa\)) on the overall corpus that combines \(\mathcal{D}_{cnn}\), \(\mathcal{D}_{msnbc}\), and \(\mathcal{D}_{fox}\). Results for the individual news networks are qualitatively similar. Our results indicate that no machine moderator pair exhibits substantial agreement (\(\kappa\) score \(\geq 0.81\)); only a handful exhibit moderate agreement (\(0.41\leq\kappa\) score \(\leq 0.60\)); and several pairs exhibit fair, slight, or no agreement. When we quantify agreement across all machine moderators, the overall Fleiss' \(\kappa\) values across \(\mathcal{D}_{cnn}\), \(\mathcal{D}_{fox}\), and \(\mathcal{D}_{msnbc}\) are 0.27, 0.25, and 0.22, respectively. We next examine the distribution of machine moderators' aggregate verdicts on individual comments. Let \(\textit{Outcome}(d,\mathcal{M})\) take a YouTube comment \(d\) and a model \(\mathcal{M}\) as input and output \(o\in\{\textit{offensive},\textit{notOffensive}\}\). We define _offenseScore_(\(d\)) as the number of machine moderators that mark \(d\) as offensive, i.e.,

\[\textit{offenseScore}(d)=\sum_{i=1}^{N}\mathbb{I}(\textit{Outcome}(d,\mathcal{M}_{i})=\textit{offensive}).\]

Figure 4 summarizes the distribution of _offenseScore_ in our evaluation data set. We note that the plot exhibits a power-law distribution. A large fraction (nearly half) of the content is not flagged by any of the MMs, whereas a minuscule proportion (0.03%) is flagged as offensive by all. The bin with _offenseScore_ 1 is particularly interesting: it indicates that only one of the nine MMs marks these comments as offensive. Therefore, the moderation fate of every comment in this bin is highly volatile; if any MM other than the one that flags it is deployed, the comment will not be censored. We also observe that 10.1% of the content belongs to bin 4 and bin 5. These two bins represent the comments on which the MMs have maximal disagreement.

Figure 2: Distribution of political identities as reported in historical Gallup surveys [46] over the last 19 years.
To sum up, Figure 4 emphasizes that a large fraction of the social web represents a disputed moderation zone and possibly requires human moderators' assistance. We thus investigate in the following sections how human moderators fare when they are tasked with the difficult job of determining offense in political discourse.

## 4 Stratified Sampling

In order to compare and contrast machine moderation and human moderation, we first construct a representative set of easy and challenging examples from the machine moderators' perspective. For each corpus \(\mathcal{D}\), we conduct stratified sampling from three subsets: (1) a subset where most MMs agree that the content is not offensive (denoted by \(\mathcal{D}_{\textit{notOffensive}}\)); (2) a subset where most MMs agree that the content is offensive (denoted by \(\mathcal{D}_{\textit{offensive}}\)); and (3) a subset in the twilight zone where nearly half of the models agree that the content is offensive with the other half agreeing that it is not (denoted by \(\mathcal{D}_{\textit{debated}}\)). Formally,

* \(d\in\mathcal{D}_{\textit{notOffensive}}\) if \(0\leq\sum_{i=1}^{N}\mathbb{I}(\textit{Outcome}(d,\mathcal{M}_{i})=\textit{offensive})\leq 1\)
* \(d\in\mathcal{D}_{\textit{debated}}\) if \(\left\lfloor\frac{N}{2}\right\rfloor\leq\sum_{i=1}^{N}\mathbb{I}(\textit{Outcome}(d,\mathcal{M}_{i})=\textit{offensive})\leq\left\lceil\frac{N}{2}\right\rceil\)
* \(d\in\mathcal{D}_{\textit{offensive}}\) if \(N-1\leq\sum_{i=1}^{N}\mathbb{I}(\textit{Outcome}(d,\mathcal{M}_{i})=\textit{offensive})\leq N\),

where \(N\) denotes the total number of offensive speech classifiers considered (in our case, \(N=9\)), and \(\mathbb{I}\) is the indicator function. We have three news outlets, three sub-corpora defined based on MM disagreement, and five time periods, yielding 45 different combinations of news network, temporal bin, and MM disagreement. We weigh each of these combinations equally and sample 1,110 comments (\(\mathcal{D}_{\textit{general}}\)). In addition, we sample 600 comments with the keyword gun (\(\mathcal{D}_{\textit{gun}}\)) and 600 more with the keyword abortion (\(\mathcal{D}_{\textit{abortion}}\)) to shed light on human-machine disagreement on hot-button issues. Filtering relevant content by a single, general keyword has been previously used in the computational social science literature [47, 45], and we argue that it is a high-recall approach to obtain discussions relevant to reproductive rights and gun control/rights without biasing the selection towards event-specific keywords (e.g., Uvalde or Roe v. Wade).

Figure 4: The \(j\)-th bar represents the percentage of overall comments that are flagged by \(j\) machine moderators.

Figure 3: Agreement between machine moderators. A cell \(\langle i,j\rangle\) presents the Cohen's \(\kappa\) agreement between machine moderators \(\mathcal{M}_{i}\) and \(\mathcal{M}_{j}\). Majority is a machine moderator that takes the majority vote of the nine individual machine moderators.
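The _offenseScore_ of Section 3 and the three strata defined above reduce to a few lines of array arithmetic. The following NumPy sketch uses synthetic verdicts as a stand-in for the real machine-moderator outputs; only the thresholds come from the definitions above.

```python
import numpy as np

N = 9                                   # number of machine moderators

# verdicts[d, i] = 1 iff machine moderator M_i marks comment d as offensive
# (random stand-in for the real moderation outcomes).
rng = np.random.default_rng(0)
verdicts = rng.integers(0, 2, size=(100_000, N))

offense_score = verdicts.sum(axis=1)    # offenseScore(d), in {0, ..., N}

not_offensive = np.flatnonzero(offense_score <= 1)
debated = np.flatnonzero((offense_score >= N // 2) &        # floor(N/2) = 4
                         (offense_score <= (N + 1) // 2))   # ceil(N/2)  = 5
offensive = np.flatnonzero(offense_score >= N - 1)

# Share of comments flagged by j = 0..N moderators, as in Figure 4.
share = np.bincount(offense_score, minlength=N + 1) / len(offense_score)
```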
## 5 Human Moderators

### Annotation Study Design

We design our survey instrument based on prior human annotation studies on offensive and subjective content analysis [48, 49]4. We host the survey on Qualtrics; for attention and completion tracking, we generate a verification code for the annotator to copy from the survey to MTurk. We release our study in batches of 30 data items in total, with 10 items from each news outlet but with varying levels of MM disagreement. Each batch consists of 10 instances each from \(\mathcal{D}_{\textit{offensive}}\), \(\mathcal{D}_{\textit{debated}}\), and \(\mathcal{D}_{\textit{notOffensive}}\). Each instance is designed to be annotated by 20 annotators on MTurk. Due to the nature of the study, we set restrictions on MTurk to show the study only to users registered as living in the USA. We not only asked if each item was offensive to the annotator, but also how someone from an opposite political identity would find it. Our study was reviewed by our Institutional Review Board and was deemed exempt.

Footnote 4: An anonymized version of this survey can be found at https://anonymous.4open.science/r/www23-667D

#### 5.1.1 Pilot Study and Annotator Demographics

Since MTurk has a documented liberal bias [44], we take a cautious first step and conduct a pilot study with 270 examples (nine batches) to estimate political representation. 117 unique annotators participated in this pilot. As shown in Figure 5, we observe that the annotator pool has a strong Democrat bias (66% Democrat, 23% Republican, and 11% Independent). Overall, we observe the following annotator demographics:

\(\bullet\) _Political Leaning:_ 66% (76) registered as Democrats, 23% (27) as Republicans, and 11% (14) as Independents. See Figure 5.

\(\bullet\) _Gender:_ An equal split between female (55% Dem, 30% Rep, and 15% Ind) and male (40% Dem, 35% Rep, and 25% Ind).

\(\bullet\) _Race:_ Predominantly White or Caucasian, with limited representation from the Asian, Black or African American, and American Indian communities.

In order to ensure comparable political representation, we set restrictions for the subsequent annotation batches to have at least six annotators from each political identity (18 annotators in total). The remaining two spots are given to the annotators who first accept the jobs, regardless of their political identity. We also re-run batches from our pilot study to ensure they all contain at least six annotators from each political identity.

#### 5.1.2 Final Study

Figure 5 shows the distribution of the annotators based on their political leaning for both the pilot and the final study. Adding the political-identity-based restrictions, implemented as a quota on the survey instrument, aided in building a representative dataset for this work. We conducted a total of 37 batches of our survey, with 30 items each, following the same survey structure as the pilot study. The demographics of the final study annotator pool are as follows:

\(\bullet\) _Political Leaning:_ 35% (267) registered as Democrats, 35% (266) as Republicans, and 30% (220) as Independents. See Figure 5.

\(\bullet\) _Gender:_ 47% female, 53% male, and one non-binary annotator.

\(\bullet\) _Race:_ Similar to the pilot study, the majority of the annotators were White or Caucasian, with limited representation from the Asian, Black or African American, and American Indian communities (in line with [48]).

\(\bullet\) _Age:_ The study had annotators from all age groups above 18 years; the majority of the annotators were from the age group 25-34. We include a detailed demographic analysis in our appendix.

#### 5.1.3 Annotator Compensation:

We compensate the annotators 0.1 USD for each instance. Each batch with 30 instances would thus fetch 3 USD.
We allow the annotators to leave a comment on our study at the end. We did not receive any complaints that the pay was low. Rather, many annotators left comments saying that they found our study interesting.

### Human Moderators and Vicarious Offense

We now contrast machine moderators and human moderators with different political identities, along with their first-person and vicarious perspectives of offense. All disagreement analyses are based on aggregate majority labels from the relevant moderator groups. We first analyze \(\mathcal{D}_{general}\) in depth. Contrastive findings from \(\mathcal{D}_{abortion}\) and \(\mathcal{D}_{gun}\) are summarized in Section 5.3.

**RQ1: _With which political party are machine moderators most closely aligned?_** Recent reports indicate increasing scrutiny of big-tech platforms' content moderation policies [50]. The discussions center around two diametrically opposite positions: these platforms are censoring too much, or they are not doing enough. On one hand, certain camps believe that web platforms unjustly censor adherents of a particular political ideology more [26]. On the other hand, other groups believe that poor platform censorship led to some of the recent political crises [51]. Table 1 examines with which political party machine moderators are most aligned. We observe that while all three political identities have comparable agreement with machines on what is offensive, Republicans align slightly more with the machines on what is not offensive. Existing literature hypothesized that conservatives may focus more on linguistic purity than liberals while determining toxic content [44]. We note that of all the instances in \(\mathcal{D}_{general}\) that contain the keyword \(\mathtt{fuck}\), 94% were marked as offensive by the Republicans, whereas the Democrats marked 88% of them as offensive. Consider the following examples without any profanity; yet the Democrats marked them as _offensive_ but the Republicans did not:

* _More fear-mongering. only.oo6% of people die of covid. Russia has no reason to lie about the death total._
* _Diversity\(====\)Less White people, White shaming. I see everyone as Americans not by their Skin color, The real Racist say black, white, brown pride._

It is possible that the Democrats found these examples offensive because they did not align with their liberal views. We notice that some obfuscated profanity escaped the machine moderators while the human moderator groups caught it (e.g., cocksuxxcer, or 3-letter company). We also observe that humans' access to deeper context allows them to respond differently: a dismissive comment about Caitlyn Jenner, a transgender Olympic gold medalist, is unanimously marked as _offensive_ by all groups of human moderators, while the machine moderators marked it as _notOffensive_ (see Figure 1).

**RQ2: _How aligned are different political identities on the perception of offense in the first person?_** Table 2 summarizes the confusion matrices between human annotators of different political identities. We first observe that for any pair of
Within the political identity pairs, the Democrats and the Independents are most aligned on their perception of offense. This result is not surprising. Historically, Independents lean more toward the Democrats than the Republicans as evidenced by the Gallup survey where 47.7 \(\pm\) 2.98% of the Independents report that they lean Democrat as opposed to 42.3 \(\pm\) 3.08% Independents reporting leaning Republican [46]. **RQ3: _Which political identity best predicts vicarious offense?_** In our vicarious offense study, we request the annotators to predict out-group offense labels. Hence, we have information about say, what Democrats believe Republicans find as offensive. Since we also have first-person perspectives from the Republicans, we can tally this information with the first-person perspective to find out how well annotators understand the _political others_. For notational clarity, we indicate the vicarious offense predictor in superscripts. Republicans\({}^{Dom}\) means Democrats are predicting what Republicans would find offensive. Table 3 indicates that Republicans are the worst predictors of vicarious offense. On both cases of predicting vicarious offense for the Democrats and the Independents, they do worse than the Independents and the Democrats, respectively. We further note that while Independents and Democrats can predict vicarious offense for each other reasonably well, they fare poorly in predicting what Republicans would find offensive. Hence, in terms of vicarious offense, Republicans are the least understood political group while they also struggle the most to understand others. Finally, we present a compelling result that shows why inter-rater agreement could be misleading. Table 4 suggests that the Democrats and Independents have the highest agreement on what Republicans would find as offensive. However, Table 3 already shows that neither the Democrats nor the Independents understand well what Republicans actually find offensive. Hence, if we have a pool of web moderators comprising only Democrats and Independents, their evaluations of what Republicans find as offensive will be reasonably consistent; however, it will not reflect what Republicans truly find as offensive. ### Issue focused analysis We observe considerable disagreement among human moderators across different political identities while annotating \(\mathcal{D}_{general}\). When we consider sensitive issues, the disagreement worsens. Table 5 contrasts the pairwise disagreement between human-human moderators and human-machine moderators across \(\mathcal{D}_{general}\), \(\mathcal{D}_{abtor}\), and \(\mathcal{D}_{gun}\). We first observe that machine-human agreement is substantially lower on the issue-specific corpora across all political identities. We \begin{table} \end{table} Table 4: Contrasting vicarious offense predictions \begin{table} \end{table} Table 2: Confusion matrices between humans with different political identities next note that some of the moderator pairs achieved negative Cohen's \(\kappa\) values on \(\mathcal{D}_{\textit{gam}}\) even on first-person offense perspective indicating considerable disagreement [52]. The pairwise group dynamics behave differently on different issues. While Independents exhibit considerable agreement with Republicans on \(\mathcal{D}_{\textit{abortion}}\), they show little or no agreement on \(\mathcal{D}_{\textit{gam}}\). 
### Issue focused analysis

We observe considerable disagreement among human moderators across different political identities while annotating \(\mathcal{D}_{general}\). When we consider sensitive issues, the disagreement worsens. Table 5 contrasts the pairwise disagreement between human-human moderators and human-machine moderators across \(\mathcal{D}_{general}\), \(\mathcal{D}_{abortion}\), and \(\mathcal{D}_{gun}\). We first observe that machine-human agreement is substantially lower on the issue-specific corpora across all political identities. We next note that some of the moderator pairs achieved negative Cohen's \(\kappa\) values on \(\mathcal{D}_{gun}\), even on the first-person offense perspective, indicating considerable disagreement [52]. The pairwise group dynamics behave differently on different issues. While Independents exhibit considerable agreement with Republicans on \(\mathcal{D}_{abortion}\), they show little or no agreement on \(\mathcal{D}_{gun}\). Interestingly, while neither the Republicans nor the Independents agree much with the Democrats on \(\mathcal{D}_{abortion}\), these two groups (Independents and Republicans) are well-aligned on what Democrats would find offensive in \(\mathcal{D}_{abortion}\). However, once again, when we tally that with what the Democrats actually find offensive, we see that the agreement on the pairs \(\langle\)Democrats, Democrats\({}^{Rep}\rangle\) and \(\langle\)Democrats, Democrats\({}^{Ind}\rangle\) is substantially lower.

### On Censorship and Trust in Democracy

Beyond first-person and vicarious offense, we ask the annotators a range of questions on censorship and election fairness. We explicitly ask every annotator if she believes the comment should be allowed on social media. We find that of the comments individual political groups marked as offensive on \(\mathcal{D}_{general}\), the Democrats, Republicans, and Independents believe 23%, 23%, and 17%, respectively, should not be allowed on social media. This implies that in general political discussions, Independents are more tolerant than the two political extremes about what should be allowed on social media. On \(\mathcal{D}_{gun}\), the Republicans exhibit slightly more intolerance than the Democrats and Independents and want to remove 26% of the offensive content, as opposed to 23% and 14% for the Democrats and the Independents, respectively. However, on \(\mathcal{D}_{abortion}\), the Democrats exhibit more intolerance, seeking to remove 23% of the offensive content as opposed to 21% for both Independents and Republicans. We note that Independents are much more sensitive to reproductive rights than to gun control/rights or general political discussions. Our study thus suggests that content moderation is a highly nuanced topic where different political groups can exhibit different levels of tolerance to offensive content depending on the issue.

_What is offensive_ and _should this offensive post be removed from social media_ can be subjective, as our study indicates. However, when we ask the annotators about the fairness of the 2016 and 2020 elections, we notice a more worrisome issue: eroding trust in democracy. Figure 6 reveals that 5% and 10% of the annotators believed that the 2016 and 2020 elections, respectively, were not conducted in a fair and democratic manner, with Democrats doubting the fairness of the 2016 election more and the Republicans doubting the 2020 election. This result sums up the deep, divergent political divide between the left and the right in the US, and asks all the stakeholders - social media platforms, social web users, media, academic and industry researchers, and of course the politicians - to think about how to improve political discourse and bring back trust in democracy.

## 6 Discussions

In this paper, we analyze two under-explored aspects of moderating social media discussions of politics: disagreement between machine moderators, and disagreement between human moderators. Our key contributions are (1) a comprehensive noise audit of machine moderators; (2) an offensive speech data set with transparent annotator details; (3) a novel framework of vicarious offense; and (4) a focused analysis of the moderation challenges present in dealing with sensitive social issues such as reproductive rights and gun control/rights. Our study raises the following points to ponder upon.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Moderators & Moderators & \(\mathcal{D}_{\textit{general}}\) & \(\mathcal{D}_{\textit{abortion}}\) & \(\mathcal{D}_{\textit{gun}}\) \\ \hline Machines & Republicans & 0.23 & 0.04 & 0.07 \\ \hline Machines & Democrats & 0.19 & 0.06 & 0.11 \\ \hline Machines & Independents & 0.17 & 0.02 & -0.02 \\ \hline Republicans & Democrats & 0.34 & 0.05 & -0.01 \\ \hline Democrats & Independents & 0.43 & 0.03 & -0.04 \\ \hline Independents & Republicans & 0.39 & 0.36 & -0.03 \\ \hline Democrats & Democrats\({}^{\textit{Rep}}\) & 0.38 & -0.05 & -0.02 \\ \hline Democrats & Democrats\({}^{\textit{Ind}}\) & 0.46 & 0.00 & -0.03 \\ \hline Republicans & Republicans\({}^{\textit{Dem}}\) & 0.37 & -0.01 & 0.00 \\ \hline Republicans & Republicans\({}^{\textit{Ind}}\) & 0.35 & 0.15 & -0.03 \\ \hline Independents & Independents\({}^{\textit{Rep}}\) & 0.37 & 0.18 & 0.04 \\ \hline Independents & Independents\({}^{\textit{Dem}}\) & 0.43 & -0.04 & -0.04 \\ \hline Republicans\({}^{\textit{Dem}}\) & Republicans\({}^{\textit{Ind}}\) & 0.49 & 0.01 & -0.04 \\ \hline Democrats\({}^{\textit{Rep}}\) & Democrats\({}^{\textit{Ind}}\) & 0.44 & 0.57 & 0.10 \\ \hline Independents\({}^{\textit{Dem}}\) & Independents\({}^{\textit{Rep}}\) & 0.37 & 0.06 & -0.05 \\ \hline \end{tabular}
\end{table}
Table 5: Disagreement across different annotation data sets.

* _Revisiting the traditional supervised learning paradigm and the existence of gold standard labels:_ The traditional supervised learning paradigm assumes the existence of _gold standard_ labels. While recent lines of work have investigated disagreement among annotators that stems from the inherent subjectivity of the task [10], our analyses of political discussions on highly sensitive issues reveal that there could be practically no agreement among annotator groups and, depending on whom we ask, we can end up with wildly different _gold standard_ labels, reminiscent of _alternative facts_ [53]. Our work, in its current form, is primarily descriptive. We address the elephant in the room and quantify the challenges of offensive content moderation in political discourse from both the machine and the human moderators' perspectives. We believe our data set will open the gates for modeling ideas that consider multiple vantage points and yield more robust systems.

* _Fine-grained political identities:_ Contrary to most existing work on US politics, which sidesteps the issue of dealing with the political middle, our study considers the Independents, thus setting the stage for exploring more nuanced political identities. For example, a human moderator can be fiscally conservative but socially liberal. Understanding both first-person and vicarious perspectives of offense for more fine-grained political identities merits deeper exploration.

* _Issue focused analysis:_ In Section 5.3, our study barely scratches the surface of issue-focused analysis. Studies show that there is political disagreement between the left and the right on several other policy issues, including immigration [54], climate change [55], and racism in policing [45], to name a few. We believe our study will open the gates for follow-on research expanding to more issues.

* _Style vs content:_ We observed an important interplay between the style and the content of posts, particularly when it comes to polarizing topics and political preference. The analysis of vicarious offense reveals that the topic and targets of a potentially offensive post (e.g. a politician, a political party, etc.)
seem to be more important to human moderators than to machine moderators, as automatic methods often rely on the presence of profanity to label a post as offensive. This observation is in line with data sets and annotation taxonomies that consider the target of offensive posts as the central aspect of offensive speech identification, such as OLID [28], HASOC [30], and others. The new vicarious offense data set presented in this paper is a valuable resource that can be used for further analysis.

* _Beyond US politics and political polarization:_ Finally, the framework of vicarious offense has a broader appeal. While we apply it to US politics, there is rising political polarization in many other countries [56, 57]. Nor does it always have to be about politics: the vicarious offense framework can also be used to understand religious polarization [58, 59, 60].

## 7 Ethics Statement

Our study was reviewed by our Institutional Review Board and was deemed exempt. Our YouTube data is collected using the publicly available YouTube API. We do not collect or reveal any identifiable information about the annotators. Content moderation can be potentially gruesome and can affect the mental health of the moderators [61]. We maintain a small batch size (30 YouTube comments), one-third of which is marked as _notOffensive_ by almost all machine moderators, to minimize the stress on annotators. In fact, many of the annotators left a comment at the end of the study indicating that they enjoyed this task. While our goal is to broaden our understanding of first-person and vicarious offense perception, which has the potential to robustify machine moderators, any content filtering system can be tweaked for malicious purposes. For instance, an inverse filter can be made that filters out _notOffensive_ posts while filtering in the _offensive_ ones.

Figure 6: Distributions from annotators when the study asked if the 2016 and 2020 presidential elections were conducted in a fair and democratic manner. Percentages are computed on the entire population of annotators who participated in the study.
2310.04417
Diffusion Random Feature Model
Diffusion probabilistic models have been successfully used to generate data from noise. However, most diffusion models are computationally expensive and difficult to interpret with a lack of theoretical justification. Random feature models on the other hand have gained popularity due to their interpretability but their application to complex machine learning tasks remains limited. In this work, we present a diffusion model-inspired deep random feature model that is interpretable and gives comparable numerical results to a fully connected neural network having the same number of trainable parameters. Specifically, we extend existing results for random features and derive generalization bounds between the distribution of sampled data and the true distribution using properties of score matching. We validate our findings by generating samples on the fashion MNIST dataset and instrumental audio data.
Esha Saha, Giang Tran
2023-10-06T17:59:05Z
http://arxiv.org/abs/2310.04417v2
# Diffusion Random Feature Model

###### Abstract

Diffusion probabilistic models have been successfully used to generate data from noise. However, most diffusion models are computationally expensive and difficult to interpret, with a lack of theoretical justification. Random feature models, on the other hand, have gained popularity due to their interpretability, but their application to complex machine learning tasks remains limited. In this work, we present a diffusion model-inspired deep random feature model that is interpretable and gives numerical results comparable to those of a fully connected neural network having the same number of trainable parameters. Specifically, we extend existing results for random features and derive generalization bounds between the distribution of sampled data and the true distribution using properties of score matching. We validate our findings by generating samples on the fashion MNIST dataset and instrumental audio data.

## 1 Introduction

Generative modeling has been successfully used to generate a wide variety of data. Some well-known models are Generative Adversarial Networks [5, 6, 14], flow-based models [7, 17, 33], autoregressive models [9, 13, 18], and variational autoencoders [14, 15, 20, 31]. Remarkable results can also be obtained using energy-based modeling and score matching [10, 11, 29]. Diffusion models are one such class of generative models that give exemplary performance in terms of data generation. A diffusion probabilistic model (or "diffusion model") is a parameterized model that is trained using variational inference to generate samples matching the data from the input distribution after a finite number of timesteps. The model learns to reverse a diffusion process, which is a fixed Markov chain that adds noise to the input data until it is destroyed. If the (Gaussian) noise added at each step is very small, then the transition kernels of the reverse process are also conditional Gaussian distributions, which leads to a neural network parameterization of the mean and the variance of the transition kernels. Most of the existing diffusion models are extremely complex and very computationally expensive. In this paper, we propose a model architecture to bridge the gap between interpretable models and diffusion models. Our main idea is to build a deep random feature model inspired by semi-random features [12] and diffusion models as formulated in [8]. Our paper is organized as follows. We give a brief background on diffusion models and random features in Section 2, along with the related works. The model architecture, along with the algorithm and its theoretical results, is given in Section 3. All experimental results are summarized in Section 4, followed by Section 5, where we conclude our paper and discuss future directions.

### Contributions

We propose a diffusion model-inspired random feature model that uses semi-random-like features to learn the reverse process in diffusion models. Our main contributions are given below:

* Our proposed model is the first of its kind to combine the idea of random features with generative models. It acts as a bridge between the theoretical and generative aspects of the diffusion model by providing approximation bounds for samples generated by diffusion model-based training algorithms. We show that for each fixed timestep, our architecture can be reduced to a random feature model, preserving the properties of interpretability.
* Our numerical results on the fashion MNIST and the audio data validate our findings by generating samples from noise, as well as by denoising signals not present in the training data. We show that even with a very small training set, our proposed model can denoise data and generate samples similar to the training data.

## 2 Background and related works

We first recall some useful notations and terminologies corresponding to diffusion models. In this paper, we denote by \(\mathcal{N}(\mathbf{0},\mathbf{I})\) the \(d\)-dimensional Gaussian distribution with the zero mean vector and the identity covariance matrix. We write \(p(\mathbf{x})=\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})\) to mean that \(p(\mathbf{x})\) is the p.d.f. of a random vector \(\mathbf{x}\) following the multivariate normal distribution with mean vector \(\boldsymbol{\mu}\) and covariance matrix \(\boldsymbol{\Sigma}\). A diffusion model consists of two Markov chains: a forward process and a reverse process. The goal of the forward process is to degrade the input sample by gradually adding noise to the data over a fixed number of timesteps. The reverse process involves learning to undo the added-noise steps using a parameterized model. Knowledge of the reverse process helps to generate new data by starting with a random noisy vector and sampling through the reverse Markov chain [16, 32].

### Forward Process

The forward process degrades the input data such that \(q(\mathbf{x}_{K})\approx\mathcal{N}(\mathbf{0},\mathbf{I})\). More precisely, let \(\mathbf{x}_{0}\in\mathbb{R}^{d}\) be an input from an unknown distribution with p.d.f. \(q(\mathbf{x}_{0})\). Given a variance schedule \(0<\beta_{1}<\beta_{2}<\ldots<\beta_{K}<1\), the forward process to obtain a degraded sample at a given timestep is defined as:

\[\mathbf{x}_{k+1}=\sqrt{1-\beta_{k+1}}\,\mathbf{x}_{k}+\sqrt{\beta_{k+1}}\,\boldsymbol{\epsilon}_{k},\quad\text{where }\boldsymbol{\epsilon}_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\text{ and }k=0,\ldots,K-1. \tag{1}\]

The forward process generates a sequence of random variables \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{K}\) with conditional distributions

\[q(\mathbf{x}_{k+1}|\mathbf{x}_{k})=\mathcal{N}(\mathbf{x}_{k+1};\sqrt{1-\beta_{k+1}}\,\mathbf{x}_{k},\beta_{k+1}\mathbf{I}). \tag{2}\]

Let \(\alpha_{k}=1-\beta_{k}\) for \(k=1,\ldots,K\) and \(\overline{\alpha}_{k}=\prod_{i=1}^{k}\alpha_{i}\). Using the reparameterization trick, we can obtain \(\mathbf{x}_{k+1}\) at any given time \(k\in\{0,\ldots,K-1\}\) directly from \(\mathbf{x}_{0}\):

\[\begin{split}\mathbf{x}_{k+1}&=\sqrt{\alpha_{k+1}}\,\mathbf{x}_{k}+\sqrt{1-\alpha_{k+1}}\,\boldsymbol{\epsilon}_{k}\\ &=\sqrt{\alpha_{k+1}}\left(\sqrt{\alpha_{k}}\,\mathbf{x}_{k-1}+\sqrt{1-\alpha_{k}}\,\boldsymbol{\epsilon}_{k-1}\right)+\sqrt{1-\alpha_{k+1}}\,\boldsymbol{\epsilon}_{k}\\ &=\sqrt{\alpha_{k+1}\alpha_{k}}\,\mathbf{x}_{k-1}+\sqrt{1-\alpha_{k+1}\alpha_{k}}\,\widetilde{\boldsymbol{\epsilon}}_{k-1}\\ &\ \ \vdots\\ &=\sqrt{\prod_{i=1}^{k+1}\alpha_{i}}\,\mathbf{x}_{0}+\sqrt{1-\prod_{i=1}^{k+1}\alpha_{i}}\,\widetilde{\boldsymbol{\epsilon}}_{0},\end{split} \tag{3}\]

where \(\widetilde{\boldsymbol{\epsilon}}_{i}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) for \(i=0,\ldots,k-1\).
In other words, the conditional distribution \(q(\mathbf{x}_{k+1}|\mathbf{x}_{0})\) is \[q(\mathbf{x}_{k+1}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{k+1};\sqrt{ \overline{\alpha}_{k+1}}\,\mathbf{x}_{0},(1-\overline{\alpha}_{k+1})\mathbf{ I}). \tag{4}\] At \(k=K\), we have \[\mathbf{x}_{K}=\sqrt{\overline{\alpha}_{K}}\,\mathbf{x}_{0}+\sqrt{1-\overline {\alpha}_{K}}\,\boldsymbol{\epsilon},\] where \(\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). Since \(0<\beta_{1}<\cdots<\beta_{K}<1\), we have \(0<\overline{\alpha}_{K}<\alpha_{1}^{K}<1\), and therefore \(\lim\limits_{K\to\infty}\overline{\alpha}_{K}=0\). Hence, \(q(\mathbf{x}_{K})=\int q(\mathbf{x}_{K}|\mathbf{x}_{0})q(\mathbf{x}_{0})d \mathbf{x}_{0}\approx\mathcal{N}(\mathbf{0},\mathbf{I})\), i.e., as the number of timesteps becomes very large, the distribution \(q(\mathbf{x}_{K})\) approaches the Gaussian distribution with mean \(\mathbf{0}\) and covariance \(\mathbf{I}\). ### Reverse Process The reverse process aims to generate data from the input distribution by sampling from \(q(\mathbf{x}_{K})\) and gradually denoising, for which one needs to know the reverse distribution \(q(\mathbf{x}_{k-1}|\mathbf{x}_{k})\). In general, computing \(q(\mathbf{x}_{k-1}|\mathbf{x}_{k})\) is intractable without knowledge of \(\mathbf{x}_{0}\). Therefore, we condition the reverse distribution on \(\mathbf{x}_{0}\) in order to obtain the mean and variance for the reverse process. More precisely, using Bayes' rule, we have: \[q(\mathbf{x}_{k-1}|\mathbf{x}_{k},\mathbf{x}_{0})=q(\mathbf{x}_{k}|\mathbf{x} _{k-1},\mathbf{x}_{0})\,\frac{q(\mathbf{x}_{k-1},\mathbf{x}_{0})}{q(\mathbf{x }_{k},\mathbf{x}_{0})}\,\frac{q(\mathbf{x}_{0})}{q(\mathbf{x}_{0})}=q(\mathbf{ x}_{k}|\mathbf{x}_{k-1},\mathbf{x}_{0})\,\frac{q(\mathbf{x}_{k-1}|\mathbf{x}_{0})}{q( \mathbf{x}_{k}|\mathbf{x}_{0})}.\] Using Equations (2) and (4), we have \[\begin{split} q(\mathbf{x}_{k-1}|\mathbf{x}_{k},\mathbf{x}_{0})&=\frac{1}{\sqrt{(2\pi\beta_{k})^{d}}}\exp\left(-\frac{1}{2}\frac{(\mathbf{x}_{k}-\sqrt{\alpha_{k}}\mathbf{x}_{k-1})^{T}(\mathbf{x}_{k}-\sqrt{\alpha_{k}}\mathbf{x}_{k-1})}{\beta_{k}}\right)\\ &\quad\cdot\frac{1}{\sqrt{(2\pi(1-\overline{\alpha}_{k-1}))^{d}}}\exp\left(-\frac{1}{2}\frac{(\mathbf{x}_{k-1}-\sqrt{\overline{\alpha}_{k-1}}\mathbf{x}_{0})^{T}(\mathbf{x}_{k-1}-\sqrt{\overline{\alpha}_{k-1}}\mathbf{x}_{0})}{1-\overline{\alpha}_{k-1}}\right)\\ &\quad\cdot\sqrt{(2\pi(1-\overline{\alpha}_{k}))^{d}}\,\exp\left(\frac{1}{2}\frac{(\mathbf{x}_{k}-\sqrt{\overline{\alpha}_{k}}\mathbf{x}_{0})^{T}(\mathbf{x}_{k}-\sqrt{\overline{\alpha}_{k}}\mathbf{x}_{0})}{1-\overline{\alpha}_{k}}\right)\\ &=\frac{\sqrt{(1-\overline{\alpha}_{k})^{d}}}{\sqrt{(2\pi\beta_{k}(1-\overline{\alpha}_{k-1}))^{d}}}\exp\Bigg\{-\frac{1}{2}\Bigg[\frac{\mathbf{x}_{k}^{T}\mathbf{x}_{k}-2\sqrt{\alpha_{k}}\mathbf{x}_{k}^{T}\mathbf{x}_{k-1}+\alpha_{k}\mathbf{x}_{k-1}^{T}\mathbf{x}_{k-1}}{\beta_{k}}\\ &\qquad\qquad+\frac{\mathbf{x}_{k-1}^{T}\mathbf{x}_{k-1}-2\sqrt{\overline{\alpha}_{k-1}}\mathbf{x}_{0}^{T}\mathbf{x}_{k-1}+\overline{\alpha}_{k-1}\mathbf{x}_{0}^{T}\mathbf{x}_{0}}{1-\overline{\alpha}_{k-1}}-\frac{\mathbf{x}_{k}^{T}\mathbf{x}_{k}-2\sqrt{\overline{\alpha}_{k}}\mathbf{x}_{k}^{T}\mathbf{x}_{0}+\overline{\alpha}_{k}\mathbf{x}_{0}^{T}\mathbf{x}_{0}}{1-\overline{\alpha}_{k}}\Bigg]\Bigg\}\\ &=\frac{\sqrt{(1-\overline{\alpha}_{k})^{d}}}{\sqrt{(2\pi\beta_{k}(1-\overline{\alpha}_{k-1}))^{d}}}\exp\Bigg\{-\frac{1}{2}\frac{1-\overline{\alpha}_{k}}{\beta_{k}(1-\overline{\alpha}_{k-1})}\mathbf{x}_{k-1}^{T}\mathbf{x}_{k-1}+\left(\frac{\sqrt{\alpha_{k}}}{\beta_{k}}\mathbf{x}_{k}^{T}+\frac{\sqrt{\overline{\alpha}_{k-1}}}{1-\overline{\alpha}_{k-1}}\mathbf{x}_{0}^{T}\right)\mathbf{x}_{k-1}+\mathrm{terms}(\mathbf{x}_{k},\mathbf{x}_{0})\Bigg\}\\ &=\frac{\sqrt{(1-\overline{\alpha}_{k})^{d}}}{\sqrt{(2\pi\beta_{k}(1-\overline{\alpha}_{k-1}))^{d}}}\exp\left(-\frac{1}{2}\frac{(\mathbf{x}_{k-1}-\tilde{\boldsymbol{\mu}}_{k})^{T}(\mathbf{x}_{k-1}-\tilde{\boldsymbol{\mu}}_{k})}{\tilde{\beta}_{k}}\right)\end{split} \tag{5}\] 
Thus we see that the probability density function of the reverse distribution conditioned on \(\mathbf{x}_{0}\) is also Gaussian, with mean vector \(\tilde{\boldsymbol{\mu}}_{k}\) and covariance matrix \(\tilde{\beta}_{k}\mathbf{I}\) given by \[\tilde{\boldsymbol{\mu}}_{k}=\frac{\sqrt{\alpha_{k}}(1-\overline{\alpha}_{k-1})}{1-\overline{\alpha}_{k}}\mathbf{x}_{k}+\frac{\sqrt{\overline{\alpha}_{k-1}}\beta_{k}}{1-\overline{\alpha}_{k}}\mathbf{x}_{0}\quad\text{and}\quad\tilde{\beta}_{k}=\frac{1-\overline{\alpha}_{k-1}}{1-\overline{\alpha}_{k}}\beta_{k}. \tag{6}\] Thus the reverse distribution conditioned on \(\mathbf{x}_{0}\) is \(q(\mathbf{x}_{k-1}|\mathbf{x}_{k},\mathbf{x}_{0})=\mathcal{N}(\tilde{\boldsymbol{\mu}}_{k},\tilde{\beta}_{k}\mathbf{I})\), where \(\tilde{\boldsymbol{\mu}}_{k},\tilde{\beta}_{k}\) are obtained above. Our aim is to learn the reverse distribution from this conditional reverse distribution. From Markovian theory, if the \(\beta_{k}\)'s are small, the reverse process is also Gaussian [27]. Let \(p_{\theta}(\mathbf{x}_{k-1}|\mathbf{x}_{k})\) be the learned reverse distribution; then Markovian theory tells us that \(p_{\theta}(\mathbf{x}_{k-1}|\mathbf{x}_{k})=\mathcal{N}(\boldsymbol{\mu}_{\theta}(\mathbf{x}_{k},k),\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{k},k))\), where \(\boldsymbol{\mu}_{\theta}(\mathbf{x}_{k},k)\) and \(\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{k},k)\) are the learned mean vector and covariance matrix, respectively. Since the derived covariance matrix \(\tilde{\beta}_{k}\mathbf{I}\) for the conditional reverse distribution is constant, \(\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{k},k)\) need not be learned. In [8], the authors show that choosing \(\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{k},k)\) as \(\beta_{k}\mathbf{I}\) or \(\tilde{\beta}_{k}\mathbf{I}\) yields similar results, and thus we fix \(\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{k},k)=\beta_{k}\mathbf{I}\) for simplicity. Furthermore, since \(\mathbf{x}_{k}\) is also available as input to the model, the loss function derived in [8] as a KL divergence between \(q(\mathbf{x}_{k-1}|\mathbf{x}_{k},\mathbf{x}_{0})\) and \(p_{\theta}(\mathbf{x}_{k-1}|\mathbf{x}_{k})\) can be simplified as \[D_{KL}\big(q(\mathbf{x}_{k-1}|\mathbf{x}_{k},\mathbf{x}_{0})\,\|\,p_{\theta}(\mathbf{x}_{k-1}|\mathbf{x}_{k})\big)=\mathbb{E}_{q}\left[\frac{1}{2\beta_{k}^{2}}\|\tilde{\boldsymbol{\mu}}_{k}(\mathbf{x}_{k},\mathbf{x}_{0})-\boldsymbol{\mu}_{\theta}(\mathbf{x}_{k},k)\|^{2}\right]+\text{const} \tag{7}\] where, for each timestep \(k\), \(\tilde{\boldsymbol{\mu}}_{k}(\mathbf{x}_{k},\mathbf{x}_{0})\) denotes the mean of the reverse distribution conditioned on \(\mathbf{x}_{0}\), i.e., of \(q(\mathbf{x}_{k-1}|\mathbf{x}_{k},\mathbf{x}_{0})\), and \(\boldsymbol{\mu}_{\theta}(\mathbf{x}_{k},k)\) denotes the learned mean vector. Thus, minimizing the above objective amounts to predicting the mean of the reverse distribution conditioned on \(\mathbf{x}_{0}\). Substituting \(\mathbf{x}_{0}=\frac{1}{\sqrt{\overline{\alpha}_{k}}}(\mathbf{x}_{k}-\sqrt{1-\overline{\alpha}_{k}}\boldsymbol{\epsilon})\) in Eq. (6), we obtain \(\tilde{\boldsymbol{\mu}}_{k}(\mathbf{x}_{k},k)=\frac{1}{\sqrt{\alpha_{k}}}\left(\mathbf{x}_{k}-\frac{\beta_{k}}{\sqrt{1-\overline{\alpha}_{k}}}\boldsymbol{\epsilon}\right)\). 
Further, since \(\mathbf{x}_{k}\) is available as input, we can choose the parameterization \(\boldsymbol{\mu}_{\theta}(\mathbf{x}_{k},k)=\dfrac{1}{\sqrt{\alpha_{k}}}\left(\mathbf{x}_{k}-\dfrac{\beta_{k}}{\sqrt{1-\overline{\alpha}_{k}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{k},k)\right)\). We can then simplify Eq. (7) as: \[\mathbb{E}_{k,\mathbf{x}_{0},\boldsymbol{\epsilon}}\left[\dfrac{1}{2\alpha_{k}(1-\overline{\alpha}_{k})}\|\boldsymbol{\epsilon}-\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{k},k)\|^{2}\right], \tag{8}\] where \(\boldsymbol{\epsilon}_{\theta}\) now denotes a function approximator intended to predict the noise from \(\mathbf{x}_{k}\). The above results show that we can either train the reverse process mean function approximator \(\boldsymbol{\mu}_{\theta}\) to predict \(\tilde{\boldsymbol{\mu}}_{k}\), or modify its parameterization to predict \(\boldsymbol{\epsilon}\). In our proposed algorithm, we choose to use the loss function from Eq. (8), since it is one of the simplest forms to train and understand. This formulation of DDPM also helps us to harness the power of SDEs in diffusion models through its connection to DSMs [1]. ### DDPM and DSM We can also apply the DDPM algorithm for score matching by formulating the DDPM objective as a DSM objective: \[L_{\text{DDPM}}=\mathbb{E}_{k,\mathbf{x}_{0},\boldsymbol{\epsilon}}\left[\dfrac{1}{2\alpha_{k}(1-\overline{\alpha}_{k})}\|\boldsymbol{\epsilon}-\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{k},k)\|^{2}\right] \tag{9}\] \[=\mathbb{E}_{k,\mathbf{x}_{0},\boldsymbol{\epsilon}}\left[\dfrac{1}{2\alpha_{k}(1-\overline{\alpha}_{k})}\left\|\dfrac{\mathbf{x}_{k}-\sqrt{\overline{\alpha}_{k}}\mathbf{x}_{0}}{\sqrt{1-\overline{\alpha}_{k}}}-\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{k},k)\right\|^{2}\right] \tag{10}\] \[=\mathbb{E}_{k,\mathbf{x}_{0},\mathbf{x}_{k}}\left[\dfrac{1}{2\alpha_{k}(1-\overline{\alpha}_{k})}\left\|\sqrt{1-\overline{\alpha}_{k}}\left(\dfrac{\mathbf{x}_{k}-\sqrt{\overline{\alpha}_{k}}\mathbf{x}_{0}}{1-\overline{\alpha}_{k}}-\dfrac{1}{\sqrt{1-\overline{\alpha}_{k}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{k},k)\right)\right\|^{2}\right] \tag{11}\] \[=\mathbb{E}_{k,\mathbf{x}_{0},\mathbf{x}_{k}}\left[\dfrac{1}{2\alpha_{k}}\left\|-\nabla_{\mathbf{x}_{k}}\log q(\mathbf{x}_{k}|\mathbf{x}_{0})-\dfrac{1}{\sqrt{1-\overline{\alpha}_{k}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{k},k)\right\|^{2}\right] \tag{12}\] \[=\mathbb{E}_{k,\mathbf{x}_{0},\mathbf{x}_{k}}\left[\dfrac{1}{2\alpha_{k}}\left\|s_{\theta}(\mathbf{x}_{k},k)-\nabla_{\mathbf{x}_{k}}\log q(\mathbf{x}_{k}|\mathbf{x}_{0})\right\|^{2}\right]=L_{\text{DSM}} \tag{13}\] where \(s_{\theta}(\mathbf{x}_{k},k)=-\dfrac{1}{\sqrt{1-\overline{\alpha}_{k}}}\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{k},k)\). The right-hand side is the denoising score matching (DSM) objective, which is thus equivalent to the objective of DDPM. Furthermore, the objective of DSM is also related to the objective of score-based generative models using SDEs [28]. We briefly discuss the connection between diffusion models, SDEs, and DSM in the upcoming section [8, 28, 32]. ### Diffusion Models and SDEs The forward process can also be generalized to a stochastic differential equation (SDE) if infinitely many timesteps or noise levels are considered, as proposed in [32]. To formulate the forward process as an SDE, let \(t=\frac{k}{K}\) and define functions \(\mathbf{x}(t),\beta(t)\) and \(\boldsymbol{\epsilon}(t)\) such that \(\mathbf{x}(\frac{k}{K})=\mathbf{x}_{k}\), \(\beta(\frac{k}{K})=K\beta_{k}\) and \(\boldsymbol{\epsilon}(\frac{k}{K})=\boldsymbol{\epsilon}_{k}\). 
Note that in the limit \(K\rightarrow\infty\), we get \(t\in[0,1]\). Using the derivations from [32], the forward process can be written as an SDE of the form \[d\mathbf{x}=\frac{-\beta(t)}{2}\mathbf{x}dt+\sqrt{\beta(t)}d\mathbf{w} \tag{14}\] where \(\mathbf{w}\) is the standard Wiener process. The above equation is of the general form \[d\mathbf{x}=f(\mathbf{x},t)dt+g(t)d\mathbf{w} \tag{15}\] where \(f(\mathbf{x},t)\) and \(g(t)\) are the drift and diffusion coefficients of the SDE, respectively, and \(\mathbf{w}\) is a standard Wiener process. The above process can be reversed by solving the reverse-time SDE \[d\mathbf{x}=[f(\mathbf{x},t)-g(t)^{2}\nabla_{\mathbf{x}(t)}\log q(\mathbf{x}( t))]dt+g(t)d\overline{\mathbf{w}} \tag{16}\] where \(\overline{\mathbf{w}}\) is a standard Wiener process running backwards in time, \(dt\) denotes an infinitesimal negative time step, and \(q(\mathbf{x}(t))\) is the marginal distribution of \(\mathbf{x}(t)\). Note that in particular for Eq. (14), the reverse SDE takes the form \[d\mathbf{x}=\left[-\frac{\beta(t)}{2}\mathbf{x}-\beta(t)\nabla_{\mathbf{x}(t)}\log q(\mathbf{x}(t))\right]dt+\sqrt{\beta(t)}d\overline{\mathbf{w}} \tag{17}\] The unknown term \(\nabla_{\mathbf{x}(t)}\log q(\mathbf{x}(t))\) is called the score function and is estimated by training a parameterized model \(s_{\theta}(\mathbf{x}(t),t)\) via minimization of the loss given by \[\mathbb{E}_{q(\mathbf{x}(t))}\left[\frac{1}{2}\Big\|s_{\theta}(\mathbf{x}(t),t)-\nabla_{\mathbf{x}(t)}\log q(\mathbf{x}(t))\Big\|_{2}^{2}\right] \tag{18}\] Note that since \(q(\mathbf{x}(0))\) is unknown, the distribution \(q(\mathbf{x}(t))\) and consequently the score function \(\nabla_{\mathbf{x}(t)}\log q(\mathbf{x}(t))\) are also unknown. Referring to results from [30], we see that the loss in Eq. (18) is equivalent to the denoising score matching (DSM) objective given by \[\mathbb{E}_{q(\mathbf{x}(t),\mathbf{x}(0))}\left[\frac{1}{2}\Big\|s_{\theta}(\mathbf{x}(t),t)-\nabla_{\mathbf{x}(t)}\log q(\mathbf{x}(t)|\mathbf{x}(0))\Big\|_{2}^{2}\right]. \tag{19}\] We can see that the above objective is the same as the DSM objective in the discrete setting. Apart from the success of score-based models using SDEs, an additional advantage of formulating the diffusion model using SDEs is that it enables theoretical analysis based on results for SDEs. In our paper, we use this connection to build a theoretical understanding of our proposed model. While there are remarkable results on improving training and sampling for diffusion models, little has been explored in terms of model architectures. Since distribution learning and data generation are complex tasks, it is unsurprising that conventional diffusion models are computationally expensive. Following previous works [8, 32, 33], the U-Net (or variations of U-Net combined with ResNets, CNNs, etc.) is still the most commonly used architecture for diffusion models. U-Nets not only preserve the dimension of the input data, they also apply techniques such as downsampling via convolution, which helps to learn the features of the input data. However, all these architectures have millions of parameters, making training (and sampling) cumbersome. An alternative approach to reduce the complexity of machine learning algorithms is to use a random feature model (RFM) [21, 22], which approximates a kernel using a randomized basis. RFMs are derived from kernel-based methods, which utilize a pre-defined nonlinear function basis called a kernel \(K(\mathbf{x},\mathbf{y})\). 
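As a toy illustration of the random feature idea (a sketch only; the Gaussian kernel and unit bandwidth are assumptions for the example, not the features used by our model), the feature map below approximates a kernel Gram matrix in expectation:

```python
import numpy as np

def random_fourier_features(X, W, b):
    """Map data X (n, d) to random features (n, N) approximating
    the Gaussian kernel k(x, y) = exp(-||x - y||^2 / 2)."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
d, N = 5, 2000
W = rng.standard_normal((d, N))           # fixed random weights
b = rng.uniform(0.0, 2.0 * np.pi, N)      # fixed random phases

X = rng.standard_normal((10, d))
Z = random_fourier_features(X, W, b)
K_approx = Z @ Z.T                        # approximates the kernel Gram matrix
K_exact = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())   # small for large N
```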
From the neural network point of view, an RFM is a two-layer network with a fixed, randomly sampled single hidden layer [21, 22]. Not only do random feature-based methods give results similar to those of a shallow network, but the model itself is also interpretable, which makes it a favorable method to use. Some recent works which use random features for a variety of tasks are explored in [3, 4, 19, 24, 25, 26]. However, random features can lack expressivity due to their structure, and thus we aim to propose an architecture that is more flexible in learning while retaining the properties of random features. Inspired by semi-random features [12], DDPM [8], and the idea of building deep random features, our diffusion random feature model serves as a potential alternative to the existing diffusion model architectures. ## 3 Algorithm and Theory Our proposed model is a diffusion model-inspired random feature model. The main idea of our model is to build an interpretable deep random feature architecture for diffusion models. Our work is inspired by the _denoising diffusion probabilistic model_ (DDPM) proposed in [8] and the semi-random features proposed in [12]. Let \(\mathbf{x}_{0}\in\mathbb{R}^{d}\) be the input data belonging to an unknown distribution \(q(\mathbf{x}_{0})\). Let \(K\) denote the total number of timesteps in which the forward process is applied, and suppose \(N\) is the number of features. For each timestep \(k\), we build a noise predictor function \(p_{\theta}\) of the form \[p_{\theta}(\mathbf{x}_{k},k)=(\sin(\mathbf{x}_{k}^{T}\mathbf{W}+\mathbf{b}^{T })\odot\cos(\boldsymbol{\tau}_{k}^{T}\boldsymbol{\theta}^{(1)}))\boldsymbol{ \theta}^{(2)}, \tag{20}\] where \(\mathbf{x}_{k}\in\mathbb{R}^{d}\), \(\mathbf{W}\in\mathbb{R}^{d\times N}\), \(\mathbf{b}=\begin{bmatrix}b_{1}&\ldots&b_{N}\end{bmatrix}^{T}\in\mathbb{R}^{N}\), \(\boldsymbol{\theta}^{(1)}=(\theta_{ki}^{(1)})\in\mathbb{R}^{K\times N}\), \(\boldsymbol{\tau}_{k}\in\mathbb{R}^{K}\), \(\boldsymbol{\theta}^{(2)}=(\theta_{ij}^{(2)})\in\mathbb{R}^{N\times d}\), and \(\odot\) denotes element-wise multiplication. The vector \(\boldsymbol{\tau}_{k}\,(k\geq 1)\) is a one-hot vector whose single \(1\) sits in the position corresponding to the timestep \(k\). The motivation for using trainable weights corresponding to the time parameter is twofold: first, we want to associate importance to the timestep being used when optimizing the weights; secondly, we aim to build a deep random feature model layered through time. The inspiration for using cosine as an activation comes from the idea of positional encoding used for similar tasks. In general, positional encoding remains fixed, but for our method, we wish to make the weights associated with the timestep random and trainable. This is done so that the model learns the noise level associated with the timestep. Our aim is to train the parameters \(\boldsymbol{\theta}=\{\boldsymbol{\theta}^{(1)},\boldsymbol{\theta}^{(2)}\}\) while \(\mathbf{W}\) and \(\mathbf{b}\) are randomly sampled and fixed. The model is trained using Algorithm 1. Figure 1: Representation of DRFM. The green boxes denote the random feature layer active for the selected timestep, while the others remain fixed. 
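Algorithm 1 below specifies the training procedure. As a concrete companion, here is a minimal PyTorch sketch of the noise predictor (20), one stochastic training step, and the ancestral sampler implied by the choice \(\boldsymbol{\Sigma}_{\theta}=\beta_{k}\mathbf{I}\). The optimizer, learning rate, and initialization scales are illustrative assumptions, not choices prescribed by the paper.

```python
import torch

class DRFM(torch.nn.Module):
    """Noise predictor of Eq. (20): fixed random (W, b), trainable (theta1, theta2)."""
    def __init__(self, d, N, K):
        super().__init__()
        self.register_buffer("W", torch.randn(d, N))              # fixed random weights
        self.register_buffer("b", 2 * torch.pi * torch.rand(N))   # fixed random biases
        self.theta1 = torch.nn.Parameter(torch.randn(K, N))       # time-dependent weights
        self.theta2 = torch.nn.Parameter(torch.randn(N, d) / N)

    def forward(self, x, k):
        # x: (batch, d); k: (batch,) integer timesteps in {0, ..., K-1}
        feats = torch.sin(x @ self.W + self.b)    # fixed random features
        gate = torch.cos(self.theta1[k])          # tau_k^T theta1 selects the k-th row
        return (feats * gate) @ self.theta2       # elementwise product, then linear map

def train_step(model, x0, alpha_bar, opt):
    """One stochastic step of the loss in Algorithm 1, with per-sample random timesteps."""
    K = alpha_bar.shape[0]
    k = torch.randint(0, K, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bar[k].unsqueeze(1)
    xk = ab.sqrt() * x0 + (1 - ab).sqrt() * eps   # closed-form forward sample, Eq. (3)
    loss = ((eps - model(xk, k)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def sample(model, betas, alpha_bar, n, d):
    """Ancestral sampling: run the learned reverse chain from pure noise."""
    x = torch.randn(n, d)
    for k in reversed(range(betas.shape[0])):
        kk = torch.full((n,), k, dtype=torch.long)
        coef = betas[k] / (1 - alpha_bar[k]).sqrt()
        x = (x - coef * model(x, kk)) / (1 - betas[k]).sqrt()    # posterior mean
        if k > 0:
            x = x + betas[k].sqrt() * torch.randn_like(x)        # Sigma_theta = beta_k I
    return x

# Illustrative setup matching the schedule used in the experiments.
K = 100
betas = torch.linspace(1e-4, 2e-2, K)
alpha_bar = torch.cumprod(1 - betas, dim=0)
model = DRFM(d=784, N=2000, K=K)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
```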
```
0: Sample \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\), where \(q\) is an unknown distribution; a variance schedule \(\beta=\{\beta_{1},...,\beta_{K}\}\) such that \(0<\beta_{1}<\beta_{2}<...<\beta_{K}<1\); a random weight matrix \(\mathbf{W}=[w_{ij}]\) and bias vector \(\mathbf{b}\) sampled from a distribution \(\rho\); and the total number of epochs.
0: Training
1: Choose a random timestep \(k\in\{1,2,...,K\}\) and build the vector \(\boldsymbol{\tau}_{k}=[0,...,0,1,0,...,0]^{T}\), where the \(1\) is in the \(k\)th position.
2: Define the forward process for \(k=1,2,...,K\) as
\[\mathbf{x}_{k}=\sqrt{1-\beta_{k}}\mathbf{x}_{k-1}+\sqrt{\beta_{k}}\boldsymbol{\epsilon}_{k}\]
where \(\boldsymbol{\epsilon}_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\).
3: for \(j=1,\ldots,\) epochs do
4: \(k\sim\mathcal{U}\{1,2,...,K\}\).
5: Define \(\boldsymbol{\tau}_{k}\) as in line 1.
6: \(p_{\theta}(\mathbf{x}_{k},k)\leftarrow(\sin(\mathbf{x}_{k}^{T}\mathbf{W}+\mathbf{b}^{T})\odot\cos(\boldsymbol{\tau}_{k}^{T}\boldsymbol{\theta}^{(1)}))\boldsymbol{\theta}^{(2)}\)
7: Update \(\boldsymbol{\theta}=[\boldsymbol{\theta}^{(1)},\boldsymbol{\theta}^{(2)}]\) by minimizing the loss \(L=\dfrac{1}{K}\sum_{k=1}^{K}\Big\|\boldsymbol{\epsilon}_{k}-p_{\theta}(\mathbf{x}_{k},k)\Big\|_{2}^{2}\).
8: end for
```
**Algorithm 1** Training and sampling using DRFM

### Theoretical Results

We now provide theoretical results for our proposed model. We first formulate the model as a time-dependent layered random feature model, and then prove sample generation bounds. The obtained bounds, combined with the results from [2], show that DRFM is capable of generating samples from the distribution on which it was trained. For a fixed timestep \(k\), Eq. (20) can be written as: \[\begin{split} p_{\theta}(\mathbf{x}_{k},k)&=(\sin(\mathbf{x}_{k}^{T}\mathbf{W}+\mathbf{b}^{T})\odot\cos(\boldsymbol{\tau}_{k}^{T}\boldsymbol{\theta}^{(1)}))\boldsymbol{\theta}^{(2)} \\ &=\left(\sin\left([y_{1},...,y_{d}]\begin{bmatrix}\omega_{11}&...&\omega_{1N}\\ \vdots&...&\vdots\\ \omega_{d1}&...&\omega_{dN}\end{bmatrix}+\begin{bmatrix}b_{1}\\ \vdots\\ b_{N}\end{bmatrix}^{T}\right)\odot\begin{bmatrix}\cos(\theta_{k1}^{(1)})&...&\cos(\theta_{kN}^{(1)})\end{bmatrix}\right)\boldsymbol{\theta}^{(2)}\\ &=\sin(\mathbf{x}_{k}^{T}\mathbf{W}+\mathbf{b}^{T})\begin{bmatrix}\cos(\theta_{k1}^{(1)})\theta_{11}^{(2)}&...&\cos(\theta_{k1}^{(1)})\theta_{1d}^{(2)}\\ \vdots&...&\vdots\\ \cos(\theta_{kN}^{(1)})\theta_{N1}^{(2)}&...&\cos(\theta_{kN}^{(1)})\theta_{Nd}^{(2)}\end{bmatrix},\end{split} \tag{21}\] where we write \([y_{1},\ldots,y_{d}]=\mathbf{x}_{k}^{T}\) for brevity. For each \(j=1,...,N\), let \(a_{j}=\sin\left(\sum_{i=1}^{d}y_{i}\omega_{ij}+b_{j}\right)\). Note that the \(a_{j}\)'s are fixed. Then Eq. (21) becomes 
\[p_{\theta}(\mathbf{y},k)=\begin{bmatrix}a_{1}\cos(\theta_{k1}^{(1)})&...&a_{N}\cos(\theta_{kN}^{(1)})\end{bmatrix}\begin{bmatrix}\theta_{11}^{(2)}&...&\theta_{1d}^{(2)}\\ \vdots&...&\vdots\\ \theta_{N1}^{(2)}&...&\theta_{Nd}^{(2)}\end{bmatrix} \tag{22}\] \[=\begin{bmatrix}a_{1}&...&a_{N}\end{bmatrix}\begin{bmatrix}\cos(\theta_{k1}^{(1)})\theta_{11}^{(2)}&...&\cos(\theta_{k1}^{(1)})\theta_{1d}^{(2)}\\ \vdots&...&\vdots\\ \cos(\theta_{kN}^{(1)})\theta_{N1}^{(2)}&...&\cos(\theta_{kN}^{(1)})\theta_{Nd}^{(2)}\end{bmatrix}, \tag{23}\] where \(\mathbf{y}=\mathbf{x}_{k}\). Thus, for a fixed time \(k\), our proposed architecture is a random feature model with a fixed dictionary of \(N\) features \(\phi(\langle\mathbf{x}_{k},\boldsymbol{\omega}_{i}\rangle)=\sin(\mathbf{x}_{k}^{T}\boldsymbol{\omega}_{i}+b_{i})\), \(i=1,\ldots,N\), and learned coefficients \(\mathbf{C}=(c_{ij})\in\mathbb{R}^{N\times d}\) whose entries are \(c_{ij}=\cos(\theta_{ki}^{(1)})\theta_{ij}^{(2)}\) for \(i=1,\ldots,N\) and \(j=1,\ldots,d\). As depicted in Figure 1, DRFM can be visualized as \(K\) random feature models stacked in a column. Each random feature model has associated weights corresponding to its timestep, which are optimized implicitly during training. The reformulation of DRFM into multiple random feature models leads us to our first lemma, stated below. We show that the class of functions generated by our proposed model is the same as the class of functions approximated by random feature models. **Lemma 3.1**.: _Let \(\mathcal{G}_{k,\boldsymbol{\omega}}\) denote the set of functions that can be approximated by DRFM at each timestep \(k\), defined as_ \[\mathcal{G}_{k,\boldsymbol{\omega}}=\left\{\boldsymbol{g}(\mathbf{x})=\sum_{j=1}^{N}\cos(\theta_{kj}^{(1)})\boldsymbol{\theta}_{j}^{(2)}\phi(\mathbf{x}^{T}\boldsymbol{\omega}_{j})\ \bigg|\ \big\|\boldsymbol{\theta}_{j}^{(2)}\big\|_{\infty}\leq\frac{C}{N}\right\}. \tag{24}\] _Then for a fixed \(k\) and \(\mathcal{F}_{\boldsymbol{\omega}}\) defined in Eq. (26), \(\mathcal{G}_{k,\boldsymbol{\omega}}=\mathcal{F}_{\boldsymbol{\omega}}\)._ Proof.: Fix \(k\). Consider \(\boldsymbol{g}(\mathbf{x})\in\mathcal{G}_{k,\boldsymbol{\omega}}\), so that \(\boldsymbol{g}(\mathbf{x})=\sum_{j=1}^{N}\cos(\theta_{kj}^{(1)})\boldsymbol{\theta}_{j}^{(2)}\phi(\mathbf{x}^{T}\boldsymbol{\omega}_{j})\). Clearly, \(\boldsymbol{g}(\mathbf{x})\in\mathcal{F}_{\boldsymbol{\omega}}\), as \(\left\|\cos(\theta_{kj}^{(1)})\boldsymbol{\theta}_{j}^{(2)}\right\|_{\infty}\leq\left\|\boldsymbol{\theta}_{j}^{(2)}\right\|_{\infty}\leq\frac{C}{N}\). Thus \(\mathcal{G}_{k,\boldsymbol{\omega}}\subseteq\mathcal{F}_{\boldsymbol{\omega}}\). Conversely, let \(\boldsymbol{f}(\mathbf{x})\in\mathcal{F}_{\boldsymbol{\omega}}\), so that \(\boldsymbol{f}(\mathbf{x})=\sum_{j=1}^{N}\boldsymbol{\alpha}_{j}\,\phi(\mathbf{x}^{T}\boldsymbol{\omega}_{j})\). Choose \(\boldsymbol{\theta}_{j}^{(2)}=\boldsymbol{\alpha}_{j}\) and \(\theta_{kj}^{(1)}=0\) for all \(j\). Since \(\cos(\theta_{kj}^{(1)})\boldsymbol{\theta}_{j}^{(2)}=\boldsymbol{\alpha}_{j}\), we get \(\boldsymbol{f}(\mathbf{x})=\sum_{j=1}^{N}\cos(\theta_{kj}^{(1)})\boldsymbol{\theta}_{j}^{(2)}\phi(\mathbf{x}^{T}\boldsymbol{\omega}_{j})\) and \(\|\boldsymbol{\theta}_{j}^{(2)}\|_{\infty}=\|\boldsymbol{\alpha}_{j}\|_{\infty}\leq\frac{C}{N}\). Hence \(\boldsymbol{f}(\mathbf{x})\in\mathcal{G}_{k,\boldsymbol{\omega}}\) and \(\mathcal{F}_{\boldsymbol{\omega}}\subseteq\mathcal{G}_{k,\boldsymbol{\omega}}\). 
In the next lemma, we extend results from [21, 22, 23] to find approximation error bounds for vector-valued functions. **Lemma 3.2**.: _Let \(X\subset\mathbb{R}^{d}\) denote the training dataset, suppose \(q\) is a measure on \(X\), and let \(\boldsymbol{f}^{*}\) be a function in \(\mathcal{F}_{\rho}\), where_ \[\mathcal{F}_{\rho}=\left\{\boldsymbol{f}(\mathbf{x})=\int_{\Omega}\boldsymbol{\alpha}(\boldsymbol{\omega})\phi(\mathbf{x};\boldsymbol{\omega})\,d\boldsymbol{\omega}\ \Bigg{|}\ \|\boldsymbol{\alpha}(\boldsymbol{\omega})\|_{\infty}\leq C\rho(\boldsymbol{\omega})\right\}.\] _If \([\boldsymbol{\omega}_{j}]_{j\in[N]}\) are drawn iid from \(\rho\), then for \(\delta>0\), with probability at least \(1-\delta\) over \([\boldsymbol{\omega}_{j}]_{j\in[N]}\), there exists a function \(\boldsymbol{f}^{\sharp}\in\mathcal{F}_{\boldsymbol{\omega}}\) such that_ \[\|\boldsymbol{f}^{\sharp}-\boldsymbol{f}^{*}\|_{2}\leq\frac{C\sqrt{d}}{\sqrt{N}}\left(1+\sqrt{2\log\frac{1}{\delta}}\right), \tag{25}\] _where_ \[\mathcal{F}_{\boldsymbol{\omega}}=\left\{\boldsymbol{f}(\mathbf{x})=\sum_{j=1}^{N}\boldsymbol{\alpha}_{j}\,\phi(\mathbf{x}^{T}\boldsymbol{\omega}_{j})\Bigg{|}\|\boldsymbol{\alpha}_{j}\|_{\infty}\leq\frac{C}{N}\right\}. \tag{26}\] Proof.: We follow the proof technique described in [23]. Since \(\boldsymbol{f}^{*}\in\mathcal{F}_{\rho}\), we have \(\boldsymbol{f}^{*}(\mathbf{x})=\int_{\Omega}\boldsymbol{\alpha}(\boldsymbol{\omega})\phi(\mathbf{x};\boldsymbol{\omega})d\boldsymbol{\omega}\). Construct \(\boldsymbol{f}_{k}=\boldsymbol{\beta}_{k}\phi(\cdot;\boldsymbol{\omega}_{k})\), \(k=1,\cdots,N\), with \(\boldsymbol{\beta}_{k}=\frac{\boldsymbol{\alpha}(\boldsymbol{\omega}_{k})}{\rho(\boldsymbol{\omega}_{k})}=\frac{1}{\rho(\boldsymbol{\omega}_{k})}\begin{bmatrix}\alpha_{1}(\boldsymbol{\omega}_{k})\\ \vdots\\ \alpha_{d}(\boldsymbol{\omega}_{k})\end{bmatrix}\). Note that \(\mathbb{E}_{\boldsymbol{\omega}_{k}}(\boldsymbol{f}_{k})=\int_{\Omega}\frac{\boldsymbol{\alpha}(\boldsymbol{\omega})}{\rho(\boldsymbol{\omega})}\phi(\mathbf{x};\boldsymbol{\omega})\rho(\boldsymbol{\omega})d\boldsymbol{\omega}=\boldsymbol{f}^{*}\). Define the sample average of these functions as \(\boldsymbol{f}^{\sharp}(\mathbf{x})=\sum_{k=1}^{N}\frac{\boldsymbol{\beta}_{k}}{N}\phi(\mathbf{x};\boldsymbol{\omega}_{k})\). As \(\left\|\frac{\boldsymbol{\beta}_{k}}{N}\right\|_{\infty}\leq\frac{C}{N}\), we have \(\boldsymbol{f}^{\sharp}\in\mathcal{F}_{\boldsymbol{\omega}}\). Also note that \(\|\boldsymbol{\beta}_{k}\phi(\cdot;\boldsymbol{\omega}_{k})\|_{2}\leq\sqrt{d}\,\|\boldsymbol{\beta}_{k}\phi(\cdot;\boldsymbol{\omega}_{k})\|_{\infty}\leq\sqrt{d}\,C\). In order to get the desired result, we use McDiarmid's inequality. Define a scalar function on \(F=\{\boldsymbol{f}_{1},\cdots,\boldsymbol{f}_{N}\}\) as \(g(F)=\|\boldsymbol{f}^{\sharp}-\mathbb{E}_{F}\boldsymbol{f}^{\sharp}\|_{2}\). We claim that the function \(g\) is stable under perturbation of its \(i\)th argument. Define \(\tilde{F}=\{\boldsymbol{f}_{1},\cdots,\tilde{\boldsymbol{f}}_{i},\cdots,\boldsymbol{f}_{N}\}\), i.e., \(\tilde{F}\) differs from \(F\) only in its \(i\)th element. Then \[\left|g(F)-g(\tilde{F})\right|=\left|\left\|\boldsymbol{f}^{\sharp}-\mathbb{E}_{F}\boldsymbol{f}^{\sharp}\right\|_{2}-\left\|\tilde{\boldsymbol{f}}^{\sharp}-\mathbb{E}_{\tilde{F}}\tilde{\boldsymbol{f}}^{\sharp}\right\|_{2}\right|\leq\left\|\boldsymbol{f}^{\sharp}-\tilde{\boldsymbol{f}}^{\sharp}\right\|_{2}, \tag{27}\] where the inequality follows from the triangle inequality. 
Further, \[\left\|\boldsymbol{f}^{\sharp}-\tilde{\boldsymbol{f}}^{\sharp}\right\|_{2}=\frac{1}{N}\left\|\boldsymbol{f}_{i}-\tilde{\boldsymbol{f}}_{i}\right\|_{2}\leq\left\|(\boldsymbol{\beta}_{i}-\tilde{\boldsymbol{\beta}}_{i})\phi(\cdot;\boldsymbol{\omega})\right\|_{2}\leq\frac{\sqrt{d}C}{N}. \tag{28}\] Thus \(\mathbb{E}[g(F)^{2}]=\mathbb{E}\left[\left\|\boldsymbol{f}^{\sharp}-\mathbb{E}_{F}\boldsymbol{f}^{\sharp}\right\|_{2}^{2}\right]=\frac{1}{N^{2}}\left[\mathbb{E}\left[\left\|\sum_{k=1}^{N}\boldsymbol{f}_{k}\right\|_{2}^{2}\right]-\left\|\mathbb{E}\left[\sum_{k=1}^{N}\boldsymbol{f}_{k}\right]\right\|_{2}^{2}\right]\). Since \(\|\boldsymbol{f}_{k}\|_{2}\leq\sqrt{d}C\), using Jensen's inequality and the above result we get \[\mathbb{E}[g(F)]\leq\sqrt{\mathbb{E}(g^{2}(F))}\leq\frac{\sqrt{d}C}{\sqrt{N}}. \tag{29}\] Finally, the required bound is obtained by combining the above result with McDiarmid's inequality. Using the above-stated lemmas and the results given in [2], we derive our main theorem. Specifically, we quantify the total variation between the distribution learned by our model and the true data distribution. **Theorem 3.3**.: _For a given probability density \(q(\mathbf{x}_{0})\) on \(\mathbb{R}^{d}\), suppose the following conditions hold:_ 1. _For all_ \(t\geq 0\)_, the score_ \(\nabla\log q(\mathbf{x}(t))\) _is_ \(L\)_-Lipschitz._ 2. _The second moment of_ \(q(\mathbf{x}_{0})\) _is finite, i.e.,_ \(m_{2}^{2}=\mathbb{E}_{q(\mathbf{x}_{0})}[\|\mathbf{x}_{0}\|^{2}]<\infty\)_._ _Let \(p_{\theta}(\mathbf{x}_{0},0)\) be the sample generated by DRFM after \(K\) timesteps at terminal time \(T\). Then, for the SDE formulation of the DDPM algorithm, if the step size \(h:=T/K\) satisfies \(h\lesssim 1/L\) with \(L\geq 1\), we have_ \[TV(p_{\theta}(\mathbf{x}_{0},0),q(\mathbf{x}_{0}))\lesssim\sqrt{KL(q(\mathbf{x}_{0})\|\gamma)}\exp(-T)+(L\sqrt{dh}+Lm_{2}h)\sqrt{T}+\frac{C_{2}\sqrt{TKd}}{\sqrt{N}}\left(1+\sqrt{2\log\frac{1}{\delta}}\right), \tag{30}\] _where \(C_{2}\geq\max_{1\leq i\leq N,1\leq j\leq d}\left|\theta_{ij}^{(2)}\right|\) and \(\gamma\) is the p.d.f. of the multivariate normal distribution with mean vector \(\mathbf{0}\) and covariance matrix \(\mathbf{I}\)._ The error bound given in Eq. (30) consists of three types of errors: (i) the convergence error of the forward process; (ii) the discretization error of the associated SDE with step size \(h>0\); and (iii) the score estimation error. Note that the first two terms are independent of the model architecture. While score estimation is difficult for most models, our main contribution is quantifying that error, which gives an estimate of the number of parameters needed for the third term to become negligible. The required bound follows by combining the proofs of Lemmas 3.1 and 3.2 with the results of Theorem 1 from [2]. ## 4 Experimental Results In order to validate our findings, we train our proposed model on both image and audio data. We evaluate our model on two tasks: (i) generating data from noise; and (ii) denoising data corrupted with Gaussian noise. For audio data, we use two music samples corresponding to flute and guitar. The image experiments were done by taking one hundred images each of the classes "dress" and "shoes" from the fashion MNIST dataset. 
We compare our method with: (1) a fully connected version of DRFM, denoted NN, where we train \([\mathbf{W},\mathbf{b},\boldsymbol{\theta}^{(1)},\boldsymbol{\theta}^{(2)}]\) while preserving the total number of trainable parameters; (2) the classical random feature approach with only \(\boldsymbol{\theta}^{(2)}\) trainable, denoted RF. The details of the implementation of all the experiments and their results are described in the sections below. ### Results on Fashion-MNIST data We create the image dataset for our experiments by considering 100 images of size \(28\times 28\) taken from a particular class of the fashion MNIST dataset. The model is trained for 30000 epochs with 80000 semi-random features and 100 timesteps, with the variance schedule equally spaced between 0.0001 and 0.02. For the generation task, we generated fifteen samples from randomly sampled noise. In our second task, we take fifteen images from the same class (but not in the training set), corrupt them with noise, and test whether our model can denoise them. Results from Figure 2 demonstrate that our method learns the overall features of the input distribution. Although the model is trained with a very small amount of training data (only one hundred images) and few timesteps (one hundred), we can see that the samples generated from pure noise have already started to resemble the input distribution of dresses. For the NN model, we note that most of the generated samples are the same, with a dark shadow, while for the RF model, the generated images are very noisy and barely recognizable. Figure 2: Figures generated from random noise when trained on 100 “dress” images. Samples from: Gaussian noise (top left), DRFM (top right), fully connected network (bottom left) and random feature model (bottom right). We also test our model's ability to remove noise from images. We take fifteen random images of "dress" not in the training data and corrupt them with 20% noise. The proposed model is then used for denoising. In Figure 3 we can see that the model recovers a denoised image which is in fact better than the results obtained when sampling from pure noise. The denoised images for most of the noised data look very similar to the original ones in terms of their shape and size. The NN model performs quite well for most of the images; however, for a few cases it fails to denoise anything and the final image looks similar to the noisy input. The RF model fails to denoise, and the resulting images still contain noise. Figure 3: Figures denoised from corrupted “dress” images. Corrupted images (top left); denoised images by DRFM (top right), fully connected network (bottom left) and random feature model (bottom right). In order to check the effect of the number of timesteps on the sampling power of DRFM, we also run our model using 1000 timesteps between 0.0001 and 0.02. The images generated/denoised are given in Figure 4. Samples generated from noise seem to improve with the increase in the number of timesteps. However, for the task of denoising, the results are better with 100 timesteps. The improvement in sample quality with an increased number of timesteps when generating data is expected, as a larger number of reverse steps is required to generate a point in the input distribution than to denoise one. Figure 4: Generated and denoised images trained on 100 “dress” images with 1000 timesteps. Top row depicts samples generated from Gaussian noise. Bottom row depicts denoised images. We also conduct experiments on a different class of data with the same hyperparameters as discussed above, with 100 timesteps. This time we select the class "shoes" and test our model's performance. The conclusions support the claims we made with the previous class of data: DRFM can denoise images well while only learning to generate basic features of the class when generating from pure noise. The corresponding plots are depicted in Figure 5. Figure 5: Samples generated from noise and noisy images when trained on 100 images of “shoes”. Top row depicts samples generated from Gaussian noise. Bottom row depicts denoised images. 
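The denoising runs reported above can be sketched in the same framework, assuming the DRFM sketch given earlier: noise a held-out image up to an intermediate timestep and run only the tail of the reverse chain. The starting step `k_start` is a hypothetical knob; the paper does not specify how the 20% noise level maps to a timestep.

```python
@torch.no_grad()
def denoise(model, x, betas, alpha_bar, k_start):
    """Noise a clean batch x up to step k_start, then reverse back to step 0."""
    eps = torch.randn_like(x)
    xk = alpha_bar[k_start - 1].sqrt() * x + (1 - alpha_bar[k_start - 1]).sqrt() * eps
    for k in reversed(range(k_start)):
        kk = torch.full((x.shape[0],), k, dtype=torch.long)
        coef = betas[k] / (1 - alpha_bar[k]).sqrt()
        xk = (xk - coef * model(xk, kk)) / (1 - betas[k]).sqrt()
        if k > 0:
            xk = xk + betas[k].sqrt() * torch.randn_like(xk)
    return xk
```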
### Results on Audio Data Our second experiment involves learning to generate and denoise audio signals. We use one data sample each from two different instruments, namely guitar and flute. There are a total of 5560 points for each instrument piece. We train our model using 15000 random features and 100 timesteps taken between 0.0001 and 0.02, for 30000 epochs. Samples are generated from pure noise by using the trained model to remove noise at each reverse diffusion step. We also test whether our model is capable of denoising a signal that is not part of the training set explicitly (but similar to it). For that, we use a validation data point containing samples from a music piece in which both guitar and flute are played simultaneously. We plot the samples generated from pure Gaussian noise in Figure 6. The plots in Figure 6 demonstrate the potential of DRFM to generate signals that are not part of the original training data. Figure 6(b) shows that there is no advantage in using the NN model, since the results are much worse: the network does not learn anything, and the generated signal is just more noise. On the other hand, our proposed model DRFM generates signals similar to the two signals used to train the model. Figure 7 shows that when our trained model is applied to a validation data point, it can successfully recover and denoise the signal. This is more evident when the signal is played as an audio file. There are, however, some extra elements that get added during recovery due to the presence of noise, which is a common effect of using diffusion models for denoising. ## 5 Conclusion In this paper, we proposed a diffusion random feature model. We showed that with our network architecture the model becomes interpretable, which allowed us to derive theoretical upper bounds on the distance between the distribution of samples generated by DRFM and the input distribution. We validated our findings using numerical experiments on audio data and a small subset of the fashion MNIST dataset. Our findings indicated the power of our method to learn the process of generating data from as few as one hundred training samples and one hundred timesteps. Comparisons with a fully connected network (all layers trainable) and the random features method (all but the last layer fixed, i.e., only \(\boldsymbol{\theta}^{(2)}\) trainable) highlighted the advantages of our model, which performed better than both. Our proposed model, combining the idea of random features with diffusion models, opens up potential directions worthy of further exploration into the power of random features in developing models that are interpretable and suited to complex tasks. A further direction of this work involves extending DRFM beyond its shallow nature into a deeper architecture, to avoid the curse of dimensionality with respect to the number of features required for approximation. ## Acknowledgements E.S. and G.T. were supported in part by NSERC RGPIN 50503-10842. 
Figure 6: Results using (a) DRFM; (b) NN; (c) random feature model (RF).

Figure 7: Results using (a) DRFM; (b) NN; (c) random feature model (RF).
2308.11536
Lax Matrices & Clusters for Type A & C Q-Deformed Open Toda Chain
At the turn of the century, Etingof and Sevostyanov independently constructed a family of quantum integrable systems, quantizing the open Toda chain associated to a simple Lie group $G$. The elements of this family are parameterized by Coxeter words of the corresponding Weyl group. Twenty years later, in the works of Finkelberg, Gonin, and Tsymbaliuk, this was generalized to a family of quantum Toda chains parameterized by pairs of Coxeter words. In this paper, we show that this family is actually a single cluster integrable system written in different clusters associated to cyclic double Coxeter words. Furthermore, if we restrict the action of Hamiltonians to its positive representation, these systems become unitary equivalent.
Corey Lunsford
2023-08-22T16:04:38Z
http://arxiv.org/abs/2308.11536v1
# Lax matrices & clusters for type A & C Q-deformed open Toda chain ###### Abstract. At the turn of the century, Etingof and Sevostyanov independently constructed a family of quantum integrable systems, quantizing the open Toda chain associated to a simple Lie group \(G\). The elements of this family are parameterized by Coxeter words of the corresponding Weyl group. Twenty years later, in the works of Finkelberg, Gonin, and Tsymbaliuk, this was generalized to a family of quantum Toda chains parameterized by pairs of Coxeter words. In this paper, we show that this family is actually a single cluster integrable system written in different clusters associated to cyclic double Coxeter words. Furthermore, if we restrict the action of Hamiltonians to its positive representation, these systems become unitary equivalent. ## 1. Introduction Let \(G\) be a simple Lie group of rank \(n\), \(H\subset G\) a maximal torus, and \(W=N_{G}(H)/H\) the corresponding Weyl group. Let \(N_{\pm}\) be the positive and negative maximal unipotent subgroups of \(G\) and consider the open cell \(G_{0}=N_{-}HN_{+}\). Furthermore, consider \(\chi_{\pm}\colon N_{\pm}\to\mathbb{C}^{*}\) to be holomorphic nondegenerate characters. Then a _Whittaker function_ on \(G_{0}\) with characters \(\chi_{\pm}\) is a holomorphic function \(\psi\) on \(G_{0}\) satisfying the relation \(\psi(n_{-}hn_{+})=\chi_{-}(n_{-})\chi_{+}(n_{+})\psi(h)\) for any \(n_{\pm}\in N_{\pm},h\in H\). It was shown that the restriction of the Laplace operator on \(G\) to the space of Whittaker functions gives the second quantum Toda Hamiltonian. Thus, one gets a quantum integrable system, where the quantum integrals are restrictions to Whittaker functions of the higher Casimirs of \(G\). Etingof [1] and Sevostyanov [21] independently applied this construction to the case when \(G\) is replaced by the quantum group \(U_{q}(\mathfrak{g})\). The key difference in this case is that \(U_{q}(\mathfrak{n}_{+})\) has no non-degenerate characters. In order to deal with this issue, a choice of orientation of the Dynkin diagram (equivalently, a choice of a Coxeter word \(u\in W\)) can be made. Each choice then leads to a \(q\)-deformation of the quantum Toda system. In [1], a natural generalization of Sevostyanov's construction to a choice of two Coxeter words is given. This leads to \(3^{\text{rk}(\mathfrak{g})-1}\) quantum integrable systems which are \(q\)-deformations of the quantum Toda system. We will denote these as \(q\)-Toda systems. When \(G\) is of Dynkin type \(A\) [1] and type \(C\) [1], there is an alternative presentation by \(2\times 2\) Lax matrices, which is identified with the \(q\)-Toda systems. For each choice of a pair of Coxeter words \(u,v\in W\), the corresponding \(q\)-Toda Hamiltonians generate a commutative subalgebra inside \(D_{q}(H)\), the algebra of \(q\)-difference operators on \(H\). Since conjugation by \(H\) is a Poisson map with respect to the standard Sklyanin Poisson structure, there is an induced Poisson structure on the reduced double Bruhat cell \(G^{u,v}/H\). The conjugation invariant functions on \(G\) form a Poisson commutative subalgebra of functions on \(G\), and \(n\) algebraically independent such functions remain after restriction to \(G^{u,v}/H\). When \(u,v\in W\) are Coxeter elements, the complex dimension of the reduced double Bruhat cell is \[\dim(G^{u,v}/H)=l(u)+l(v)=2n \tag{1}\] where \(l(u)\) denotes the length (i.e., the number of simple reflections in a reduced expression) of \(u\). 
Hence, the \(n\) algebraically independent conjugation invariant functions determine an integrable system on \(G^{u,v}/H\), called the Coxeter-Toda system. A standard choice of such functions is given by the trace in the \(i\)-th fundamental representation of \(G\), \(H_{i}(g)=\text{tr}(\pi_{i}(g))\), denoted the \(i\)-th _Hamiltonian_. For more details, see [10]. There is a cluster realization of Coxeter-Toda systems through the language of planar directed networks, combinatorial tools first introduced by Postnikov in [14] to gain insight into totally nonnegative Grassmannians. In [11], a Poisson structure was assigned to the space of edge weights of a planar directed network on a cylinder. In [11], this Poisson structure was applied to special directed networks associated to a pair of Coxeter elements in \(S_{n+1}\). A directed network in this context represents an element of \(G^{u,v}/H\) when \(G\) is of type \(A_{n}\). The Poisson structure on the space of weights thus induces a Poisson structure on \(G^{u,v}/H\), the phase space for a Coxeter-Toda system. Furthermore, it is shown in [11] that there is a cluster structure compatible with the Poisson bracket on the space of weights that assigns a quiver \(Q\) to each pair \(u,v\in W\). Cluster transformations are then the so-called _generalized Bäcklund-Darboux transformations_ between solutions of Coxeter-Toda systems corresponding to different Coxeter elements. A recent thesis [13] generalized this construction to all classical Dynkin types. As in [12], there is a canonical way to quantize a cluster structure on a Poisson variety. By choosing a polarization, one obtains a \(\mathbb{C}(q)\)-algebra homomorphism \(\varphi_{Q}\colon\mathcal{X}_{Q}^{q}\to D_{q}(\mathbb{R}^{n})\). Moreover, restricting \(\varphi_{Q}(\mathcal{X}_{Q}^{q})\) onto its maximal domain in \(L^{2}(\mathbb{R}^{n})\), the corresponding _positive_ representations (see [12]) of different cluster charts are unitary equivalent, thus giving rise to a representation of the universally Laurent algebra \(\mathbb{L}_{Q}^{q}\). Thus, there are two ways to obtain a quantum Toda system related to a simple Lie group \(G\). When \(G\) is of type \(A\) or \(C\), the \(q\)-Toda Hamiltonians can be obtained through \(2\times 2\) Lax matrices. Alternatively, the cluster-Poisson structure on \(G^{u,v}/H\) can be quantized to obtain a quantum cluster algebra, in which a family of quantum Hamiltonians is a set of mutually commuting elements lying in a quantum torus algebra. In this paper, we establish an equivalence of these quantum Toda systems using the language of directed networks. Identifying \(H\cong\mathbb{R}^{n}\), we prove the following theorem: **Theorem 1.1**.: _Let \(G\) be a simple Lie group of Dynkin type \(A\) or \(C\), \(u,v\) a pair of Coxeter Weyl words, and \(Q\) the quiver assigned to \(G^{u,v}/H\). Let \(\mathbb{L}_{Q}^{q}\) be the universally Laurent algebra associated to \(Q\) as defined in [12]. Then for each family of \(q\)-Toda Hamiltonians \(\{\mathbb{H}_{i}\}_{1\leq i\leq n}\subset D_{q}(H)\) in [10], there is a quantum cluster chart \(\mathcal{X}_{Q}^{q}\) of \(\mathbb{L}_{Q}^{q}\) and a polarization \(\varphi_{Q}\colon\mathcal{X}_{Q}^{q}\to D_{q}(H)\) such that \(\varphi_{Q}(H_{i}^{Q})=\mathbb{H}_{i}\)._ This theorem shows that the \(3^{\text{rk}(\mathfrak{g})-1}\) families of \(q\)-Toda Hamiltonians for \(G\) are mutation equivalent. 
Therefore, the following corollary is immediate: **Corollary 1.2**.: _The \(3^{\text{rk}(\mathfrak{g})-1}\) \(q\)-Toda systems for \(G\) restricted to their positive representation are unitary equivalent._ The results of this paper show that two different settings, in which a choice of a pair of Coxeter words is needed to obtain a quantum integrable system, actually produce the same system. Moreover, these results can be thought of as a comparison between \(2\times 2\) and \(N\times N\) Lax matrix descriptions. For Dynkin type A, this can be drawn from the theory of Goncharov-Kenyon integrable systems related to the rotation of a Newton polygon ([1], [2]). We would like to view this paper as a first step towards an extension to other Dynkin types. The paper is organized as follows. In Section 2 we go through the Poisson structure assigned to \(G^{u,v}/H\) and the accompanying Coxeter-Toda systems from [10] and [11]. In Section 3, we recall the Poisson structure on the space of edge weights on directed networks from [12] and offer a quantization in the sense of [13]. In Section 4, we recall the presentation of the \(3^{\mathrm{rk}(\mathfrak{g})-1}\) families of \(q\)-Toda Hamiltonians by \(2\times 2\) Lax matrices found in [14] and [15]. Finally, we give proofs of the correspondence and provide explicit formulas for the Hamiltonians in Section 5. ## Acknowledgements I give my gratitude to Alexander Shapiro for his guidance, support and fruitful discussions at all stages of this project. I would also like to thank the University of Edinburgh for their hospitality and support to help complete this paper. This work was partially supported by the Royal Society University Research Fellowship, grant No. URF-R1-201530. ## 2. Double Bruhat Cells & Coxeter-Toda Systems In this section, we recall the construction of reduced double Bruhat cells and the accompanying symplectic structure from [10] and [11]. Let \(G\) be a simple complex Lie group of rank \(n\). Furthermore, let \(B_{+},B_{-}\) (\(U_{+},U_{-}\)) be a choice of positive and negative Borel (unipotent) subgroups and \(H=B_{+}\cap B_{-}\) be the maximal torus of \(G\). Recall that the Lie algebra, \(\mathfrak{g}\), of \(G\) has a decomposition \(\mathfrak{g}=\mathfrak{n}_{-}\oplus\mathfrak{h}\oplus\mathfrak{n}_{+}\) where \(\mathfrak{n}_{-},\mathfrak{h},\mathfrak{n}_{+}\) are the Lie algebras of \(U_{-},H,U_{+}\), respectively. We can fix a basis \(\alpha_{i}\in\mathfrak{h}^{*}\) of simple roots and a dual basis of simple coroots \(\alpha_{i}^{\vee}\in\mathfrak{h}\) for \(i\in[1,n]\) such that \(\alpha_{j}(\alpha_{i}^{\vee})=C_{ij}\), where \(C\) is the Cartan matrix. This allows us to fix Chevalley generators \(e_{i}\in\mathfrak{n}_{+}\) and \(e_{-i}\in\mathfrak{n}_{-}\). They give rise to the one-parameter subgroups \(E_{i}(t),E_{-i}(t)\in G\) for \(t\in\mathbb{C}^{\times}\). The group \(G\) admits two Bruhat decompositions given by \[G=\bigsqcup_{u\in W}B_{+}\dot{u}B_{+}=\bigsqcup_{v\in W}B_{-}\dot{v}B_{-} \tag{2}\] where \(\dot{u},\dot{v}\) are representatives of the Weyl group \(W=N(H)/H\) in \(G\). The _double Bruhat cell_ of \(G\) with respect to \(u,v\in W\) is denoted \[G^{u,v}=B_{+}\dot{u}B_{+}\cap B_{-}\dot{v}B_{-}, \tag{3}\] which allows us to decompose \(G\) in the following way: \[G=\bigsqcup_{u,v\in W}G^{u,v}. \tag{4}\] Let \((s_{i})_{i\in[1,n]}\) be the simple transpositions for \(W\). Then \(u\in W\) can be written as \(u=s_{i_{1}}\cdots s_{i_{m}}\) for some \(i_{1},\ldots,i_{m}\in[1,n]\). 
A _word_ corresponding to \(u\) is defined as the sequence \(\mathbf{i}=(i_{1},\ldots,i_{m})\) for \(i_{j}\in[1,n]\). Let \(l(u)\) be the _length_ of \(u\in W\), i.e. the number of simple reflections in the decomposition of \(u\). Then a word is reduced if \(l(u)\) is minimal. Furthermore, a _double reduced word_ for \(u,v\in W\) is a tuple \(\mathbf{i}\) such that the entries are a shuffling of the letters of \(-\mathbf{i}_{u}\) and \(\mathbf{i}_{v}\), where \(\mathbf{i}_{u},\mathbf{i}_{v}\) are reduced words for \(u\) and \(v\), respectively. A double reduced word is called _unmixed_ if it can be written as \(\mathbf{i}=(-\mathbf{i}_{u},\mathbf{i}_{v})\). In [10], it was proved that if \(\mathbf{i}=(i_{1},\ldots,i_{m})\) is a double reduced word for \((u,v)\in W\times W\), then the map \(H\times\mathbb{C}^{m}\to G\) such that \[(h,a_{1},\ldots,a_{m})\mapsto hE_{i_{1}}(a_{1})\cdots E_{i_{m}}(a_{m}) \tag{5}\] restricts to a biregular isomorphism on a Zariski open dense subset of \(G^{u,v}\). Effectively, this allows us to decompose any element \(g\in G^{u,v}\) as \[g=D(t_{1},\ldots,t_{n})E_{i_{1}}(a_{1})\cdots E_{i_{m}}(a_{m}) \tag{6}\] for \(t_{j},a_{k}\in\mathbb{C}^{\times}\), where \(D(t_{1},\ldots,t_{n})=\prod_{i=1}^{n}t_{i}^{\alpha_{i}^{\vee}}\). Since conjugation by the Cartan subgroup \(H\) preserves \(G^{u,v}\), it is possible to define the quotient \(G^{u,v}/H\). If \(u,v\) are Coxeter elements and \(\mathbf{i}\) unmixed, then in the notation of [10], any element \(\bar{g}\in G^{u,v}/H\) can be factorized as \[\bar{g}=E_{i_{1}}(1)\cdots E_{i_{n}}(1)D(t_{1},\ldots,t_{n})E_{i_{n+1}}(c_{i_{n+1}})\cdots E_{i_{2n}}(c_{i_{2n}}) \tag{7}\] for \(t_{j},c_{k}\in\mathbb{C}^{\times}\). ## 3. Cluster Structure of q-Toda Systems ### Directed Networks In this section, we will construct special directed networks on the disk corresponding to an element of \(G^{u,v}\). The type A case was done in [10] and [12], and this was generalized to all classical types in [11]. We will recall this construction for types A and C. Given a double reduced Coxeter word \(\mathbf{i}=(i_{1},\ldots,i_{m})\) for \((u,v)\in W\times W\), a directed network \(N_{u,v}(\mathbf{i})\) can be formed out of _elementary chips_ to represent the factorization scheme on \(G^{u,v}/H\) given by \[g=E_{i_{1}}(1)\cdots E_{i_{n}}(1)D(t_{1},\ldots,t_{n})E_{i_{n+1}}(c_{i_{n+1}})\cdots E_{i_{2n}}(c_{i_{2n}}). \tag{8}\] The elementary chips, defined explicitly below, are glued together from left to right in the order specified by the above factorization scheme. Weights are assigned to edges according to the elementary chips (see Figures 1 & 2). The matrix element \(g_{ij}\) is then given by the sum over all path weights from source \(i\) to sink \(j\). Thus, the directed networks give a clear combinatorial description of the matrix elements of \(g\), discussed further in Section 6.4. #### 3.1.1. Type A Take \(G=SL_{n+1}(\mathbb{C})\). A set of Chevalley generators for \(\mathfrak{g}=\mathfrak{sl}_{n+1}\) is given by \(e_{i}=e_{i,i+1}\), \(e_{-i}=e_{i+1,i}\), and \(h_{i}=e_{i,i}-e_{i+1,i+1}\) for \(1\leq i\leq n\), where \(e_{ij}\) is the matrix with 1 in the \((i,j)\)-th entry and 0's everywhere else. This gives us the group generators \[\begin{split} E_{i}(a_{i})&=I_{n+1}+a_{i}e_{i,i+1}\\ E_{-i}(b_{i})&=I_{n+1}+b_{i}e_{i+1,i}\\ D(t_{1},\ldots,t_{n})&=\text{diag}(t_{1},t_{1}^{-1}t_{2},\ldots,t_{n-1}^{-1}t_{n},t_{n}^{-1})\end{split} \tag{9}\] where \(I_{n+1}\) is the \((n+1)\times(n+1)\) identity matrix. 
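As a quick numerical sanity check of the generators (9) and the factorization (7), one can multiply these matrices directly; the following sketch does so in NumPy (the sample parameters \(t_{i},c_{i}\) are arbitrary illustrative values):

```python
import numpy as np

def E(i, a, n):
    """E_i(a) for i > 0 and E_{-|i|}(a) for i < 0, as elements of SL_{n+1}."""
    M = np.eye(n + 1)
    if i > 0:
        M[i - 1, i] = a      # a in entry (i, i+1), 1-indexed
    else:
        M[-i, -i - 1] = a    # a in entry (i+1, i), 1-indexed
    return M

def D(t):
    """D(t_1, ..., t_n) = diag(t_1, t_1^{-1} t_2, ..., t_{n-1}^{-1} t_n, t_n^{-1})."""
    t = np.asarray(t, dtype=float)
    diag = np.concatenate(([t[0]], t[1:] / t[:-1], [1.0 / t[-1]]))
    return np.diag(diag)

# Element of G^{u,v}/H for the unmixed word i0 = (-1, ..., -n, 1, ..., n), as in (7).
n = 3
t = [2.0, 3.0, 5.0]
c = [1.0, 0.5, 0.25]
g = np.eye(n + 1)
for i in range(1, n + 1):
    g = g @ E(-i, 1.0, n)
g = g @ D(t)
for i in range(1, n + 1):
    g = g @ E(i, c[i - 1], n)
print(np.round(np.linalg.det(g), 10))   # determinant 1: g lies in SL_{n+1}
```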
These matrices are represented by the elementary chips in **Figure 1**. Explicitly, we can see that the \((i,j)\)-th matrix element is given by the weight of the path from row \(i\) to row \(j\). As an example, the type \(A_{3}\) network diagram \(N_{u,v}(\mathbf{i})\) for the standard double Coxeter word \(\mathbf{i_{0}}=(-1,\ldots,-n,1,\ldots,n)\) is given in **Figure 2**.

Figure 1. Type \(A_{n}\) Elementary Chips

Figure 2. Type \(A_{3}\) Directed Network for \(\mathbf{i_{0}}\)

#### 3.1.2. Type C

Now take \(G=Sp_{2n}(\mathbb{C})\). The Chevalley generators for the Lie algebra \(\mathfrak{sp}_{2n}\) of type \(C_{n}\) are given by \(e_{i}=e_{i,i+1}+e_{2n-i,2n+1-i}\), \(e_{-i}=e_{i+1,i}+e_{2n+1-i,2n-i}\) for \(1\leq i\leq n-1\), \(e_{n}=e_{n,n+1}\), \(e_{-n}=e_{n+1,n}\), and \(h_{i}=e_{i,i}-e_{i+1,i+1}+e_{2n-i,2n-i}-e_{2n+1-i,2n+1-i}\) for \(1\leq i\leq n-1\) and \(h_{n}=e_{n,n}-e_{n+1,n+1}\). The corresponding Lie group elements are therefore \[\begin{split} E_{i}(a_{i})&=I_{2n}+a_{i}e_{i,i+1}+a_{i}e_{2n-i,2n+1-i},\quad E_{n}(a_{n})=I_{2n}+a_{n}e_{n,n+1}\\ E_{-i}(b_{i})&=I_{2n}+b_{i}e_{i+1,i}+b_{i}e_{2n+1-i,2n-i},\quad E_{-n}(b_{n})=I_{2n}+b_{n}e_{n+1,n}\\ D(t_{1},\ldots,t_{n})&=\text{diag}(t_{1},t_{1}^{-1}t_{2},\ldots,t_{n-1}^{-1}t_{n},t_{n-1}t_{n}^{-1},\ldots,t_{1}t_{2}^{-1},t_{1}^{-1}).\end{split} \tag{10}\] The corresponding elementary chips are given in **Figure 3**. The type \(C_{2}\) network diagram \(N_{u,v}(\mathbf{i})\) for the standard double Coxeter word \(\mathbf{i_{0}}=(-1,\ldots,-n,1,\ldots,n)\) is given in **Figure 4**.

Figure 3. Type \(C_{n}\) Elementary Chips

Figure 4. Type \(C_{2}\) Directed Network for \(\mathbf{i_{0}}\)

### Cluster Varieties

In this section, we will recall some basic facts about cluster varieties and quantum cluster varieties following [10].

#### 3.2.1. Classical Cluster Varieties

**Definition 3.1**.: A _seed_ is the datum \(\Sigma=(\Lambda,(*,*),\{e_{i}\},\{d_{i}\})\), where

1. \(\Lambda\) is a lattice;
2. \((*,*)\) is a skew-symmetric \(\mathbb{Q}\)-valued bilinear form on \(\Lambda\);
3. \(\{e_{i}\}\) is a basis of the lattice \(\Lambda\), and \(I_{0}\) is a subset called the frozen basis vectors;
4. \(\{d_{i}\}\) are positive integers assigned to the basis vectors such that
\[\epsilon_{ij}\coloneqq(e_{i},e_{j})d_{j}\in\mathbb{Z},\quad\text{unless }(i,j)\in I_{0}\times I_{0}. \tag{11}\]

The matrix \(\epsilon_{ij}\) is denoted the _exchange matrix_ for \(\Sigma\). One can perform a _cluster mutation_ of a seed at an index \(k\) to get a new seed, denoted \(\mu_{k}(\Sigma)\), with exchange matrix \[\mu_{k}(\epsilon)_{ij}=\begin{cases}-\epsilon_{ij}&\text{if }i=k\text{ or }j=k\\ \epsilon_{ij}+\frac{\epsilon_{ik}|\epsilon_{kj}|+|\epsilon_{ik}|\epsilon_{kj}}{2}&\text{otherwise}\end{cases}. \tag{12}\] If two seeds \(\Sigma,\Sigma^{\prime}\) are connected by a sequence of such isomorphisms \(\mu_{k}\), we say that \(\Sigma,\Sigma^{\prime}\) are _mutation equivalent_. The lattice \(\Lambda\) gives rise to a split algebraic torus \(\mathcal{X}_{\Lambda}\coloneqq\text{Hom}(\Lambda,\mathbb{G}_{m})\), denoted the _seed \(\mathcal{X}\)-torus_, with elements \(X_{v}\in\mathcal{X}_{\Lambda}\) for any \(v\in\Lambda\). The form \((*,*)\) induces a Poisson structure on \(\mathcal{X}_{\Lambda}\) given by \[\{X_{v},X_{w}\}=(v,w)X_{v}X_{w}. \tag{13}\] The basis \(\{e_{i}\}\) induces a basis \(\{X_{i}=X_{e_{i}}\}\) in the group of characters of \(\mathcal{X}_{\Lambda}\), denoted the _cluster \(\mathcal{X}\) coordinates_. 
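The mutation rule (12) is purely combinatorial, so it is easy to experiment with numerically. A small sketch follows, using the sign convention of (12); the type \(A_{2}\) block matrix at the bottom is an illustrative choice, and the 0-based index \(k=0\) corresponds to the first vertex in whatever labeling one fixes.

```python
import numpy as np

def mutate(eps, k):
    """Seed mutation of an exchange matrix at index k, following (12)."""
    eps = np.asarray(eps, dtype=int)
    out = eps.copy()
    for i in range(eps.shape[0]):
        for j in range(eps.shape[1]):
            if i == k or j == k:
                out[i, j] = -eps[i, j]
            else:
                out[i, j] = eps[i, j] + (eps[i, k] * abs(eps[k, j])
                                         + abs(eps[i, k]) * eps[k, j]) // 2
    return out

# Block matrix [[0, C], [-C, 0]] built from the type A_2 Cartan matrix.
C = np.array([[2, -1], [-1, 2]])
Z = np.zeros((2, 2), dtype=int)
eps0 = np.block([[Z, C], [-C, Z]])
assert np.array_equal(mutate(mutate(eps0, 0), 0), eps0)   # mutation is an involution
```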
Furthermore, the basis \(\{e_{i}\}\) induces a dual basis \(\{e_{i}^{*}\}\) for the dual lattice \(\Lambda^{*}=\text{Hom}(\Lambda,\mathbb{Z})\). Let \(\Lambda^{0}\) be the sublattice spanned by \(f_{i}=d_{i}^{-1}e_{i}^{*}\). Then we have another split algebraic torus \(\mathcal{A}_{\Lambda}=\text{Hom}(\Lambda^{0},\mathbb{G}_{m})\) with \(\{f_{i}\}\) providing the basis \(\{A_{i}\}\), denoted the _cluster \(\mathcal{A}\) coordinates_. There is a natural regular map \(p_{\Sigma}:\mathcal{A}_{\Lambda}\to\mathcal{X}_{\Lambda}\) called the _cluster ensemble map_ that translates between cluster \(\mathcal{A}\)-variables and cluster \(\mathcal{X}\)-variables. It is given by the formula \[p_{\Sigma}^{*}(X_{i})=\prod_{j\in I}A_{j}^{\epsilon_{ji}}. \tag{14}\] **Lemma 3.2**.: _[_11_]_ _The subtorus \(p(\mathcal{A}_{\Lambda})\) is a symplectic leaf of the Poisson structure on \(\mathcal{X}_{\Lambda}\)._ In accordance with standard notation, we will write \(\mathcal{X}_{\Sigma},\mathcal{A}_{\Sigma}\) even though these tori only depend on the underlying lattice. To any seed mutation \(\mu_{k}:\Sigma\to\Sigma^{\prime}\), we can associate a pair of birational isomorphisms \(\mu_{k}^{\mathcal{A}}:\mathcal{A}_{\Sigma}\to\mathcal{A}_{\Sigma^{\prime}}\) and \(\mu_{k}^{\mathcal{X}}:\mathcal{X}_{\Sigma}\to\mathcal{X}_{\Sigma^{\prime}}\) given by the formulas \[\begin{split}(\mu_{k}^{\mathcal{A}})^{*}(A_{i}^{\prime})&=\begin{cases}A_{i}&\text{if }i\neq k\\ A_{k}^{-1}\left(\prod_{j=1}^{m}A_{j}^{[\epsilon_{jk}]_{+}}+\prod_{j=1}^{m}A_{j}^{[-\epsilon_{jk}]_{+}}\right)&\text{if }i=k\end{cases}\\ (\mu_{k}^{\mathcal{X}})^{*}(X_{i}^{\prime})&=\begin{cases}X_{i}X_{k}^{[\epsilon_{ki}]_{+}}(1+X_{k})^{-\epsilon_{ki}}&\text{if }i\neq k\\ X_{k}^{-1}&\text{if }i=k\end{cases}\end{split} \tag{15}\] where \([a]_{+}=\max(a,0)\). **Lemma 3.3**.: _The cluster ensemble map commutes with cluster variable mutations, i.e.,_ \[\mu_{k}^{\mathcal{X}}\circ p_{\Sigma}=p_{\mu_{k}(\Sigma)}\circ\mu_{k}^{\mathcal{A}}. \tag{16}\] #### 3.2.2. Quivers/Amalgamation For simplicity, we will denote \(\omega_{ij}=(e_{i},e_{j})\). The combinatorial data of a seed and subsequent mutations can be encoded by a _quiver_ \(Q\), a planar graph such that \(V(Q)=I\sqcup I_{0}\), with a vertex \(i\in I\) for each basis vector \(e_{i}\), a vertex \(j\in I_{0}\) for each frozen index, and arrows \(i\to j\) weighted by the matrix entries \(\omega_{ij}\). In this framework, a cluster mutation at vertex \(k\) corresponds to a mutation of \(Q\) characterized by the following steps: 1. Reverse any arrows incident to \(k\), 2. For any pair of arrows \(i\to k\) and \(k\to j\) with weights \(\omega_{ik}\) and \(\omega_{kj}\), respectively, draw an arrow \(i\to j\) with weight \(\omega_{ij}+\frac{\omega_{ik}\omega_{kj}}{d_{k}}\), 3. Delete all arrows with weight \(\omega_{ij}=0\), and if there are two arrows \(i\to k\) with weights \(\omega_{1},\omega_{2}\), then draw one arrow with weight \(\omega_{1}+\omega_{2}\). **Definition 3.4**.: Let \(Q,Q^{\prime}\) be two quivers with vertices \(V(Q)=I\sqcup I_{0}\), \(V(Q^{\prime})=J\sqcup J_{0}\) and exchange matrices \(\epsilon_{ij},\eta_{ij}\) respectively. Let \(L\) be a set embedded into both \(I_{0}\) and \(J_{0}\). 
Then the amalgamation along L is a new quiver \(Q^{\prime\prime}\) with \(V(Q^{\prime\prime})=K\sqcup K_{0}\) such that \(K=I\cup_{L}J,K_{0}=I_{0}\cup_{L}J_{0}\) and exchange matrices \(\zeta_{ij}\) given by \[\zeta_{ij}=\begin{cases}\begin{array}{cl}0&\text{if $i\in I-L$ and $j\in J-L$}\\ 0&\text{if $i\in J-L$ and $j\in I-L$}\\ \epsilon_{ij}&\text{if $i\in I-L$ or $j\in I-L$}\\ \eta_{ij}&\text{if $i\in J-L$ or $j\in J-L$}\\ \epsilon_{ij}+\eta_{ij}&\text{if $i,j\in L$}\end{array}\end{cases} \tag{17}\] **Lemma 3.5**.: _[_10_]_ _Let \(\Sigma,\Sigma^{\prime},\Sigma^{\prime\prime}\) be the cluster seeds associated to the quivers \(Q,Q^{\prime},Q^{\prime\prime}\), respectively. Then amalgamation induces a homomorphism \(\mathcal{X}_{\Sigma}\times\mathcal{X}_{\Sigma^{\prime}}\to\mathcal{X}_{\Sigma ^{\prime\prime}}\) given by the rule_ \[Z_{i}=\begin{cases}\begin{array}{ll}X_{i}&\text{if $i\in I-L$}\\ Y_{i}&\text{if $i\in J-L$}\\ X_{i}Y_{i}&\text{if $i\in L$}\end{array}\end{cases} \tag{18}\] _Moreover, amalgamation is compatible with both the Poisson and cluster structures._ #### 3.2.3. Quantum Cluster Algebras Consider the Heisenberg group \(\mathcal{H}_{\Lambda}\), which is the central extension \[0\to\mathbb{Z}\to\mathcal{H}_{\Lambda}\to\Lambda\to 0. \tag{19}\] **Definition 3.6**.: The _quantum torus algebra_\(\mathcal{X}_{\Sigma}^{q}\) is the group ring of \(\mathcal{H}_{\Lambda}\). It is identified with the algebra of non-commutative polynomials in \(\{X_{i}\}\) over \(\mathbb{Z}[q,q^{-1}]\) with relations \[q^{-\omega_{ij}}X_{i}X_{j}=q^{\omega_{ij}}X_{j}X_{i}. \tag{20}\] We will also denote \[X_{i_{1}^{m_{1}},\dots,i_{n}^{m_{n}}}=q^{C}X_{i_{1}}^{m_{1}}\cdots X_{i_{n}}^{ m_{n}} \tag{21}\] where \(C\) is the unique rational number such that \[q^{C}X_{i_{1}}^{m_{1}}\cdots X_{i_{n}}^{m_{n}}=q^{-C}X_{i_{n}}^{m_{n}}\cdots X_ {i_{1}}^{m_{1}}. \tag{22}\] Given a quantum torus algebra \(\mathcal{X}_{\Sigma}^{q}\) associated to a seed \(\Sigma\), the cluster mutation on index \(k\) induces an isomorphism of the skew field of fractions \(\mu_{k}^{q}:\text{Frac}(\mathcal{X}_{\mu_{k}(\Sigma)}^{q})\to\text{Frac}( \mathcal{X}_{\Sigma}^{q})\) called the quantum cluster mutation_ given by \[\mu_{k}^{q}(X_{i}^{\prime})=\begin{cases}\begin{array}{ll}X_{k}^{-1}&\text{ if }i=k,\\ X_{i}\prod_{r=1}^{|\epsilon_{ki}|}(1+q_{i}^{2r-1}X_{k})&\text{ if }i\neq k\text{ and } \epsilon_{ki}\leq 0,\\ X_{i}\prod_{r=1}^{\epsilon_{ki}}(1+q_{i}^{2r-1}X_{k}^{-1})^{-1}&\text{ if }i\neq k\text{ and } \epsilon_{ki}\geq 0.\end{array}\end{cases} \tag{23}\] The _quantum cluster algebra_\(\mathcal{X}^{q}\) associated to a seed is defined as the subalgebra of \(\mathcal{X}^{q}_{\Sigma}\) of universally Laurent elements (i.e., remain Laurent polynomials under any combination of finite sequences of cluster mutations). ### Cluster Structure on Directed Networks In [6], a cluster structure was attached to the directed networks \(N_{u,v}\) for Type A. This was generalized in [11] to give a cluster structure associated to directed networks of any classical type. In this section, we will recall this construction explicitly. Let \(\mathbf{i}\) be an unmixed double reduced Coxeter word for a pair of elements \((u,v)\) in the Weyl group for \(G\). Denote \(I=\{-n,\ldots,n\}\cup\{1,\ldots,m\}\) the indexing set for the seed \(\Sigma_{\mathbf{i}}\). Furthermore, denote \(D_{\mathbf{i}}\) to be the subnetwork consisting of the bottom \(n+1\) rows. For type A, this is the entire network. 
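Before describing the face labels, it may help to make the amalgamation of Definition 3.4 concrete, since it is exactly the operation used below to glue the left-most and right-most quiver vertices on the cylinder. The sketch is illustrative only (sparse dictionaries stand in for the exchange matrices and torus coordinates; this is not code from the cited references):

```python
def amalgamate(eps, eta):
    """Amalgamated exchange matrix zeta of Eq. (17).

    eps, eta: dicts {(i, j): entry} over index sets I and J, using shared labels
    for the glued indices in L; absent entries are treated as 0.  Pairs with one
    endpoint in I - L and the other in J - L occur in neither dict, so they
    correctly receive 0, and pairs with both endpoints in L receive eps + eta.
    """
    zeta = dict(eps)
    for key, value in eta.items():
        zeta[key] = zeta.get(key, 0) + value
    return zeta

def glue_X(X, Y):
    """Amalgamation on cluster X-coordinates, per Eq. (18): Z_i = X_i * Y_i on L."""
    Z = dict(X)
    for i, y in Y.items():
        Z[i] = Z.get(i, 1) * y
    return Z
```

Because the disjoint parts of the two index sets never share an entry, the simple dictionary sum reproduces all five cases of Eq. (17) at once.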
We can then label the \(n+m\) faces of \(D_{\mathbf{i}}\) in the following way. Fix \(k\in[1,n]\) and let \(\{j:|i_{j}|=k\}=\{j_{1}<\cdots<j_{r}\}\). Then label the \(r\) faces between levels \(k-1\) and \(k+1\) from left to right with \(j_{1},\ldots,j_{r}\). Now, form the quiver \(\Gamma_{\mathbf{i}}\) with the faces of \(D_{\mathbf{i}}\) as the vertices. Finally, draw arrows between two vertices if the corresponding faces share an edge and exactly one of the two endpoints of that edge is either orange or black. Explicitly, draw arrows across edges of the directed network according to **Figure 5**, where solid arrows have weight \(\omega_{ij}=1\) and dashed arrows have weight \(\omega_{ij}=1/2\).

Figure 5. Rules to obtain the Cluster Quiver from a Directed Network

The quotient \(G^{u,v}/H\) corresponds to drawing \(D_{\mathbf{i}}\) on a cylinder to obtain \(\widetilde{D}_{\mathbf{i}}\) [11, Prop. 3.4]. This is due to the following observation: let \(hgh^{-1}\in G^{u,v}\) for \(h\in H\) be written as a directed network \(D\). By drawing \(D\) on a cylinder, we identify the left and right ends, which allows us to slide \(h\) around the cylinder and cancel it: \(hgh^{-1}\) becomes \(h^{-1}hg=g\) for any \(h\in H\). Thus, passing from \(D_{\mathbf{i}}\) to \(\widetilde{D}_{\mathbf{i}}\) corresponds to the projection \(G^{u,v}\to G^{u,v}/H\). In doing this, we obtain a new quiver \(Q_{\mathbf{i}}\) by amalgamating the left-most and right-most vertices lying in the same row. This gives us the exchange matrix \(\epsilon\), with indices in \(\{-n,\ldots,-1\}\cup\{1,\ldots,n\}\), where \(\epsilon_{ij}>0\) if \(i\to j\), \(\epsilon_{ij}<0\) if \(j\to i\), and \(\epsilon_{ij}=0\) if there are no edges connecting \(i\) and \(j\). Moreover,

\[|\epsilon_{jk}|=\begin{cases}2&\text{ if }|j|=|k|,\\ -C_{|i_{j}|,|i_{k}|}\times\#\{\text{arrows connecting }v_{j}\text{ and }v_{k}\}&\text{ if }|j|\neq|k|\end{cases} \tag{24}\]

where \(C\) is the Cartan matrix. Explicitly, for the standard double Coxeter word \(\mathbf{i_{0}}=(-1,\ldots,-n,1,\ldots,n)\), it follows

\[\epsilon_{\mathbf{i_{0}}}=\begin{pmatrix}0&C\\ -C&0\end{pmatrix} \tag{25}\]

with the columns labelled \(-1,\ldots,-n,1,\ldots,n\). Given the exchange matrix \(\epsilon_{ij}\), the edge weights for \(Q_{\mathbf{i}}\) are \(\omega_{ij}=\epsilon_{ij}d_{j}\) (our convention has \(d_{j}\) instead of \(d_{j}^{-1}\) to force all edge weights to have integer values). Explicitly, the quivers \(Q_{\mathbf{i_{0}}}\) for types A and C are given in **Figure 6**.

_Remark 3.7_.: Using the convention of [1], a mutation at a vertex \(-k\) will be paired with a permutation of indices in the following way:

\[\tau_{k}=\mu_{-k}\circ\sigma_{k} \tag{26}\]

where \(\mu_{-k}\) is a cluster mutation at vertex \(-k\) and \(\sigma_{k}\) is the permutation such that

\[\sigma_{k}(j)=\begin{cases}j&\text{ if }|j|\neq k\\ -j&\text{ if }|j|=k\end{cases} \tag{27}\]

Consider the 3 _quiver blocks_ shown in **Figure 7**. For a quiver of type \(A_{n}\) or \(C_{n}\), there will be \(n-1\) quiver blocks \(Q_{1},\ldots,Q_{n-1}\), labelled from bottom to top, glued together to make the full quiver \(Q_{\mathbf{i}}\). The following definition will be useful:

**Definition 3.8**.: Let \(\mathbf{i}\) be a double unmixed Coxeter word for a type \(A_{n}\) Coxeter-Toda system.
We define the _quiver vector associated to \(\mathbf{i}\)_, \(\vec{Q}_{n-1}=(Q_{n-1},\ldots,Q_{1})\in\{-1,0,1\}^{n-1}\), in the following way: if the quiver block \(Q_{i}\) is of type 7(a), 7(b), or 7(c), then the corresponding entry \(Q_{i}=0,1\), or \(-1\), respectively.

**Lemma 3.9**.: _There are exactly \(3^{n-1}\) double reduced Coxeter words \((u,v)\) when \(u,v\) both have length \(n\). Furthermore, each double reduced Coxeter word corresponds to a quiver, giving \(3^{n-1}\) quivers for types \(A_{n}\) and \(C_{n}\)._

Proof.: Consider the reflection \(s_{i}\in W\) given to a letter \(i\) when \(i>0\) and \(\overline{s}_{-i}\) if \(i<0\). The relations between letters are given by

\[\begin{split} s_{i}s_{j}&=s_{j}s_{i}\quad\text{if}\quad|i-j|>1\\ \overline{s}_{i}\overline{s}_{j}&=\overline{s}_{j}\overline{s}_{i}\quad\text{if}\quad|i-j|>1\\ s_{i}\overline{s}_{j}&=\overline{s}_{j}s_{i}\quad\text{if}\quad|i-j|\neq 0.\end{split} \tag{28}\]

We will proceed by induction on \(n\). Let \(W_{n}\) be the set of double Coxeter Weyl words of length \(2n\). Consider the base case \(n=1\). It follows

\[s_{1}\overline{s}_{1} \tag{29}\]

is the unique element of \(W_{1}\). Now, suppose the proposition holds for \(W_{k}\). Let \(w\in W_{k}\). In general, we can write \(w\) in the form

\[w=s_{k}\cdots\overline{s}_{k}\cdots \tag{30}\]

where we can always put \(s_{k}\) as the first entry by the 3rd relation stated above. Then there are 3 ways to distinctly place \(s_{k+1}\) and \(\overline{s}_{k+1}\), given by

\[\begin{split} s_{k}\cdots s_{k+1}\overline{s}_{k+1}\cdots\overline{s}_{k}\cdots,\\ s_{k}\cdots\overline{s}_{k+1}s_{k+1}\cdots\overline{s}_{k}\cdots,\\ s_{k}\cdots\overline{s}_{k}\cdots\overline{s}_{k+1}s_{k+1}\cdots\end{split} \tag{31}\]

Thus, we have 3 ways to distinctly place \(s_{k+1}\) and \(\overline{s}_{k+1}\) in any \(w\in W_{k}\), and there are \(3^{k-1}\) such \(w\)'s by the inductive hypothesis. Hence, it follows that there are \(3\cdot 3^{k-1}=3^{k}\) distinct words in \(W_{k+1}\), proving the inductive step. Furthermore, it is a direct consequence of the quiver construction that the 3 possible double Coxeter words above correspond to \(Q_{k}=0,1,-1\), respectively, proving the second statement in the lemma. Therefore, there are \(3^{n-1}\) possible type \(A_{n}\) and \(C_{n}\) quivers.

Figure 7. Possible Quiver Blocks

Figure 8. Possible \(Q_{n-1}\) Quiver Blocks for \(C_{n}\)

**Theorem 3.10**.: _[_10_]_ _Given any two double reduced Coxeter words \(\mathbf{i}\) and \(\mathbf{j}\), the quivers \(Q_{\mathbf{i}}\) and \(Q_{\mathbf{j}}\) are mutation equivalent._

Combining Lemma 3.9 and Theorem 3.10, all \(3^{n-1}\) possible quivers (i.e., cluster seeds) corresponding to a double reduced Coxeter word are mutation equivalent.

### Poisson Structure & Quantization

In this section we will first recall the Poisson brackets of edge weights on a given directed network in [10]. We then quantize these Poisson brackets. In Section 5, we prove that the quantized Coxeter-Toda Hamiltonians obtained are precisely the ones obtained by the Lax formalism in [11],[12] by counting quantum path weights inductively.

**Lemma 3.11**.: _[_10_]_ _Let \((u,v)\) be a pair of Coxeter elements and \(\mathbf{i}=(i_{1},\ldots,i_{2n})\) be a double reduced word for \((u,v)\).
Then with respect to the factorization_

\[\overline{g}=E_{i_{1}}(1)\cdots E_{i_{n}}(1)D(t_{1},\ldots,t_{n})E_{i_{n+1}}(c_{i_{n+1}})\cdots E_{i_{2n}}(c_{i_{2n}}) \tag{32}\]

_of \(g\in G^{u,v}/H\), the Poisson brackets between the rational functions \(c_{j},t_{k}\) for \(j,k\in[1,n]\) are given by_

\[\begin{split}\{c_{j},c_{k}\}&=2\omega_{jk}c_{j}c_{k}\\ \{c_{j},t_{k}\}&=2d_{j}\delta_{jk}c_{j}t_{k}\\ \{t_{j},t_{k}\}&=0.\end{split} \tag{33}\]

Now, let \(\mathcal{A}_{\mathbf{i}},\mathcal{X}_{\mathbf{i}}\) be the cluster seeds coming from the quiver \(Q_{\mathbf{i}}\). Recall the induced Poisson structure on \(\mathcal{A}_{\mathbf{i}}\) from \(\mathcal{X}_{\mathbf{i}}\):

\[\{A_{i},A_{j}\}=\omega_{ij}A_{i}A_{j}. \tag{34}\]

**Theorem 3.12**.: _[_10_]_ _There is a Poisson map \(a_{\mathbf{i}}:\mathcal{A}_{\mathbf{i}}\to G^{u,v}/H\) given by_

\[\begin{split} a_{\mathbf{i}}^{*}(t_{j})&=A_{-j}A_{j}^{-1}\\ a_{\mathbf{i}}^{*}(c_{j})&=\prod_{k\in I}A_{k}^{-\epsilon_{kj}}.\end{split} \tag{35}\]

**Definition 3.13**.: Using \(a_{\mathbf{i}}\) and quantizing the Poisson brackets above, the quantum torus algebra \(\mathcal{X}_{\mathbf{i}}^{q}\) can be simultaneously defined as the associative algebra over \(\mathbb{C}(q^{d})\), with \(d=\min_{j\in I}(d_{j})\), defined by generators \(\{X_{t_{j}}^{\pm 1},X_{c_{j}}^{\pm 1}\}_{j\in\tilde{I}}\) and relations

\[\begin{split} X_{c_{j}}X_{c_{k}}&=q^{-2\omega_{jk}}X_{c_{k}}X_{c_{j}}\\ X_{c_{j}}X_{t_{k}}&=q^{-2\delta_{jk}d_{j}}X_{t_{k}}X_{c_{j}}\\ X_{t_{j}}X_{t_{k}}&=X_{t_{k}}X_{t_{j}}.\end{split} \tag{36}\]

Furthermore, we will denote

\[X_{a_{1}\cdots a_{m}}=q^{C}X_{a_{1}}\cdots X_{a_{m}},\quad a_{k}\in\{t_{j},c_{j}\}_{j\in I} \tag{37}\]

where \(C\) is the unique rational number such that

\[q^{C}X_{a_{1}}\cdots X_{a_{m}}=q^{-C}X_{a_{m}}\cdots X_{a_{1}}. \tag{38}\]

The convention we will use for quantization of paths on the network \(N_{u,v}(\mathbf{i})\) will be the following: if the weights collected along a single path are given by \(a_{1},\ldots,a_{m}\), then the quantized weight of the whole path will be \(X_{a_{1}\cdots a_{m}}\). Moreover, the quantized weight corresponding to a family of non-intersecting paths with quantized weights \(X_{1},\ldots,X_{m}\) is \(X_{1}\cdots X_{m}\), with the convention that multiplication is done from top to bottom along \(N_{u,v}(\mathbf{i})\).

_Remark 3.14_.: It will be convenient for us to label quantized path weights as \(X_{j,k}\) where \(k\) is the row of the source (and sink) of the path and \(j\) is the lowest row intersecting the path.

### Coxeter-Toda Hamiltonians

Given a directed network for an unmixed double Coxeter word \(\mathbf{i}\), we can reproduce the matrix \(\bar{g}\). The \((i,j)\)-th entry of \(\bar{g}\) is the sum of path weights over all paths from source \(i\) to sink \(j\). We will be using these networks to recover the Hamiltonians for \(\bar{g}\):

**Theorem 3.15**.: _[_11_]_ _The Coxeter-Toda Hamiltonians for a simple complex Lie group \(G\) of rank \(n\) and a choice of Coxeter words \(u,v\in W\) are given by_

\[H_{j}^{u,v}=\sum_{I\subseteq[1,n],|I|=j}\sum_{P\in P_{ni}^{u,v}(I)}wt(P) \tag{39}\]

_where \(P_{ni}^{u,v}(I)\) is the set consisting of families of non-intersecting paths such that the set of sources and the set of sinks of all paths in the family is equal to \(I\)._

## 4. \(2\times 2\) Lax Formulation

### Type A

The following Lax matrix formulation for the type A q-Toda system is given in [11].
Let \(v\) be an indeterminate and consider the associative \(\mathbb{C}(v)\)-algebra \(\mathcal{A}_{n}^{v}\) generated by \(\{w_{i}^{\pm 1},D_{i}^{\pm 1}\}_{i=1}^{n}\) with defining relations \[[w_{i},w_{j}]=[D_{i},D_{j}]=0,\quad w_{i}^{\pm 1}w_{i}^{\mp 1}=D_{i}^{\pm 1}D_{i }^{\mp 1}=1,\quad D_{i}w_{j}=v^{\delta_{ij}}w_{j}D_{i} \tag{40}\] Define 3 (local) trigonometric Lax matrices: \[\begin{split} L_{i}^{v,0}(z)&=\begin{pmatrix}w_{i}^{- 1}z^{1/2}-w_{i}z^{-1/2}&D_{i}^{-1}z^{1/2}\\ -D_{i}z^{-1/2}&0\end{pmatrix}\\ L_{i}^{v,-1}(z)&=\begin{pmatrix}w_{i}^{-1}-w_{i}z^{-1}&w_{i}D_{i}^{-1}\\ -w_{i}D_{i}z^{-1}&w_{i}\end{pmatrix}\\ L_{i}^{v,1}(z)&=\begin{pmatrix}w_{i}^{-1}z-w_{i}&w_{i}^{-1}D_{i}^{-1}z\\ -w_{i}^{-1}D_{i}&-w_{i}^{-1}\end{pmatrix}.\end{split} \tag{41}\] **Lemma 4.1**.: _[_11_]_ _The 3 Lax matrices above satisfy the trigonometric RTT relation with the standard trigonometric \(R\)-matrix_ \[R_{\text{trig}}(z)=\begin{pmatrix}1&0&0&0\\ 0&\frac{z-1}{yz-v^{-1}}&\frac{z(v-v^{-1})}{vz-v^{-1}}&0\\ 0&\frac{v-v^{-1}}{vz-v^{-1}}&\frac{z-1}{vz-v^{-1}}&0\\ 0&0&0&1\end{pmatrix} \tag{42}\] Now, let \(\vec{k}_{n}=(k_{n},\dots,k_{1})\in\{-1,0,1\}^{n}\) be an index vector. Define the mixed complete monodromy matrix \[T^{v}_{\vec{k}_{n}}(z)=L^{v,k_{n}}_{n}(z)\cdots L^{v,k_{1}}_{1}(z). \tag{43}\] _Remark 4.2_.: It follows from the above lemma that the mixed complete monodromy matrix \(T^{v}_{\vec{k}_{n}}(z)\) satisfies the trigonometric RTT relation with R-matrix given by \(R_{\text{trig}}(z)\). Therefore, the coefficients in \(z\) of \(T^{v}_{\vec{k}_{n}}(z)_{11}\) generate a commutative subalgebra of \(\mathcal{A}^{v}_{n}\). Explicitly, we have \[T^{v}_{\vec{k}_{n}}(z)_{11}=H^{\vec{k}_{n}}_{1}z^{\sigma_{n}}+H^{\vec{k}_{n}}_ {2}z^{\sigma_{n}+1}+\cdots+H^{\vec{k}_{n}}_{n+1}z^{\sigma_{n}+n} \tag{44}\] where \[\sigma_{n}=\sum_{i=1}^{n}s_{i},\quad s_{i}=\frac{k_{i}-1}{2}. \tag{45}\] **Proposition 4.3**.: _[_11_]_ _Let \(\vec{k}^{\prime}_{n}=(0,k_{n-1},\dots,k_{2},0)\). Then \(H^{\vec{k}_{n}}_{2}=H^{\vec{k}^{\prime}_{n}}_{2}\)._ The above proposition shows that there are at most \(3^{n-2}\) different q-Toda systems given by the above Lax formalism. Furthermore, these Hamiltonians are identified with the type \(A_{n-1}\) q-Toda Hamiltonians given in [1],[2]. ### Type C The following construction is due to [11]. To start, define 3 more (local) trigonometric Lax matrices: \[\begin{split}\bar{L}^{v,0}_{i}(z)&=\begin{pmatrix} w_{i}z^{1/2}-w_{i}^{-1}z^{-1/2}&D_{i}z^{1/2}\\ -D_{i}^{-1}z^{-1/2}&0\end{pmatrix}\\ \bar{L}^{v,-1}_{i}(z)&=\begin{pmatrix}w_{i}-w_{i}^{-1}z^{-1}&w_{i}^{-1}D_{i} \\ -w_{i}^{-1}D_{i}^{-1}z^{-1}&w_{i}^{-1}\end{pmatrix}\\ \bar{L}^{v,1}_{i}(z)&=\begin{pmatrix}w_{i}z-w_{i}^{-1}&w_{i}D_{i}z \\ -w_{i}D_{i}^{-1}&-w_{i}\end{pmatrix}.\end{split} \tag{46}\] Furthermore, given an index vector \(\vec{k}_{n}=(k_{n},\dots,k_{1})\), define the double complete mixed monodromy matrix: \[\mathbb{T}^{v}_{\vec{k}_{n}}(z)=\bar{L}^{v,-k_{1}}_{1}(z)\cdots\bar{L}^{v,-k_{ n}}_{n}(z)L^{v,k_{n}}_{n}(z)\cdots L^{v,k_{1}}_{1}(z). \tag{47}\] The following theorem is the result of a direct calculation. **Theorem 4.4**.: _[_11_]_ _The double complete mixed monodromy matrix \(\mathbb{T}_{\vec{k}_{n}}^{v}(z)\) satisfies the RTT relation with R-matrix given by \(R_{\text{trig}}(z)\). 
Therefore, the coefficients in \(z\) of \(\mathbb{T}_{\vec{k}_{n}}^{v}(z)_{11}\), given by_

\[\mathbb{T}_{\vec{k}_{n}}^{v}(z)_{11}=\mathbb{H}_{1}^{\vec{k}_{n}}z^{-n}+\mathbb{H}_{2}^{\vec{k}_{n}}z^{-n+1}+\cdots+\mathbb{H}_{2n+1}^{\vec{k}_{n}}z^{n} \tag{48}\]

_form a commutative subalgebra of \(\mathcal{A}_{n}^{v}\). Furthermore, this commutative subalgebra offers a Lax matrix realization of the type \(C_{n}\) modified q-Toda system._

## 5. Calculation of q-Toda Hamiltonians

### Type A

In this section, we provide a recursive formula to obtain q-Toda Hamiltonians based on the Lax formulation in [10]. Then we will use this formula to prove a bijection between the q-Toda Hamiltonians obtained via directed networks and Lax matrices. Thus, we begin by stating the recursive formula:

**Theorem 5.1**.: _Let \(\vec{k}_{n+1}\in\{-1,0,1\}^{n+1}\) be an index vector for the mixed complete monodromy matrix \(T_{\vec{k}_{n+1}}^{v}(z)\). Then the \(i\)-th \(A_{n}\) q-Toda Hamiltonian associated to \(\vec{k}_{n+1}\) can be written as_

\[\begin{split} H_{i}^{\vec{k}_{n+1}}&=-w_{n+1}H_{i}^{\vec{k}_{n}}+w_{n+1}^{-1}H_{i-1}^{\vec{k}_{n}}-\sigma_{n,n+1}D_{n}D_{n+1}^{-1}H_{i-1}^{\vec{k}_{n-1}}\\ &\quad+\sum_{m=0}^{n-2}(-1)^{n-j}k_{m+2,n}\sigma_{m+1,n+1}D_{m+1}D_{n+1}^{-1}H_{i-1-S_{n,m+1}}^{\vec{k}_{m}}\end{split} \tag{49}\]

_where_

\[S_{l,m}=S_{l}-S_{m},\quad S_{j}=\sum_{i=1}^{j}s_{i},\quad s_{i}=\frac{k_{i}-1}{2}. \tag{50}\]

_and_

\[k_{i,j}=k_{i}k_{i+1}\cdots k_{j},\quad\sigma_{i,j}=w_{i}^{-k_{i}}w_{i+1}^{-k_{i+1}}\cdots w_{j}^{-k_{j}}. \tag{51}\]

Proof.: As a preliminary result, we claim that the type \(A_{n-1}\) mixed complete monodromy matrix associated to an index vector \(\vec{k}_{n}\) takes the form

\[T_{\vec{k}_{n}}^{v}(z)=\begin{pmatrix}\sum_{j=1}^{n+1}H_{j}^{\vec{k}_{n}}z^{S_{n}+j-1}&*\\ \sum_{j=1}^{n}\left[-w_{n}^{-k_{n}}D_{n}H_{j}^{\vec{k}_{n-1}}z^{S_{n}+j-1}+\sum_{m=0}^{n-2}(-1)^{n-j}k_{m+2,n}\sigma_{m+1,n+1}D_{m+1}D_{n+1}^{-1}H_{j}^{\vec{k}_{m}}z^{S_{m+1}+j-1}\right]&*\end{pmatrix} \tag{52}\]

To prove this, we will proceed by induction. First, observe that each of the 3 (local) trigonometric Lax matrices can be written in the general form

\[L_{i}^{k_{i}}(z)=\begin{pmatrix}w_{i}^{-1}z^{s_{i}+1}-w_{i}z^{s_{i}}&w_{i}^{-k_{i}}D_{i}^{-1}z^{s_{i}+1}\\ -w_{i}^{-k_{i}}D_{i}z^{s_{i}}&-k_{i}w_{i}^{-k_{i}}\end{pmatrix}. \tag{53}\]

By setting \(i=1\), this proves the base case. Now, assume the inductive hypothesis to be true for an arbitrary index vector \(\vec{k}_{n}\). Then by definition of the complete mixed monodromy matrix, it follows

\[T_{\vec{k}_{n+1}}^{v}(z)=L_{n+1}^{k_{n+1}}(z)\cdot T_{\vec{k}_{n}}^{v}(z). \tag{54}\]

By a direct calculation, we prove the inductive step, hence prove the claim. Furthermore, from this calculation we obtain

\[\begin{split} T_{\vec{k}_{n+1}}^{v}(z)_{11}&=\sum_{j=1}^{n+1}\left[w_{n+1}^{-1}H_{j}^{\vec{k}_{n}}z^{S_{n+1}+j}-w_{n+1}H_{j}^{\vec{k}_{n}}z^{S_{n+1}+j-1}-\sigma_{n,n+1}D_{n}D_{n+1}^{-1}H_{j}^{\vec{k}_{n-1}}z^{S_{n+1}+j}\right.\\ &\quad\left.+\sum_{m=0}^{n-2}(-1)^{n-j}k_{m+2,n}\sigma_{m+1,n+1}D_{m+1}D_{n+1}^{-1}H_{j}^{\vec{k}_{m}}z^{s_{n+1}+S_{m+1}+j}\right].\end{split} \tag{55}\]

By convention, \(H_{i}^{\vec{k}_{n+1}}\) is the coefficient of \(z^{S_{n+1}+i-1}\). Using this and the equation above, we obtain the recursion formula.

**Corollary 5.2**.: _Let \(\vec{k}_{n+1}\) be an index vector.
Then the \(A_{n}\) q-Toda Lax Hamiltonians obey the following symmetry:_

\[H_{i}^{\vec{k}_{n+1}}=\bar{H}_{n+2-i}^{-\vec{k}_{n+1}} \tag{56}\]

_where \(\bar{H}_{i}^{\vec{k}_{n}}\) are the type \(A_{n}\) Hamiltonians with \(w_{i}\mapsto w_{i}^{-1}\)._

Using Remark 3.14, we can now assign explicit quantized path weights. Denote \(\alpha:\mathcal{X}_{\mathbf{i}}^{q}\to\mathcal{A}_{n}^{v}\) the map such that

\[X_{i,j}\mapsto\begin{cases}w_{i}^{-2}&\quad\text{if }i=j\\ \hat{\sigma}_{i,j}D_{i}D_{j}^{-1}&\quad\text{if }i\neq j\end{cases},\quad i,j\in[1,n+1]. \tag{57}\]

where \(\hat{\sigma}_{i,j}=w_{i}^{-Q_{i}-1}w_{i+1}^{-Q_{i+1}-1}\cdots w_{j}^{-Q_{j}-1}\).

Figure 9. Directed Networks associated to Quiver Blocks

**Lemma 5.3**.: _The map \(\alpha\) is a homomorphism of \(\mathbb{C}(v)\)-algebras under the identification \(v=q\)._

Proof.: For the quantized path weights, we can compute the following relations:

\[\begin{split} X_{i,i}X_{j,j}&=X_{j,j}X_{i,i}\\ X_{i,j}X_{l,l}&=\begin{cases}q^{-2}X_{l,l}X_{i,j}&\text{if }i=l\\ q^{2}X_{l,l}X_{i,j}&\text{if }j=l\\ X_{l,l}X_{i,j}&\text{otherwise}\end{cases}\\ X_{i,j}X_{l,m}&=q^{A}X_{l,m}X_{i,j}\end{split} \tag{58}\]

where the exponent of \(q\) in the last equation can take on the following values

\[A=\begin{cases}0&\text{if }(i=l,j=m),\;(i<l,j<m),\;\text{or }(i>l,j>m)\\ 2&\text{if }(i=l,j<m),\;\text{or }(i>l,j=m)\\ -2&\text{if }(i=l,j>m),\;\text{or }(i<l,j=m)\\ 4&\text{if }(i>l,j<m)\\ -4&\text{if }(i<l,j>m)\end{cases} \tag{59}\]

It is a simple computation to check that the terms \(w_{i}^{-2},\hat{\sigma}_{i,j}D_{i}D_{j}^{-1}\) satisfy the same commutation relations under the identification \(v=q\).

Lemma 5.3 allows us to identify the quantized path weights with the elements of \(\mathcal{A}_{n}^{v}\):

\[X_{i,j}=\begin{cases}w_{i}^{-2}&\text{if }i=j\\ \hat{\sigma}_{i,j}D_{i}D_{j}^{-1}&\text{if }i\neq j\end{cases},\quad i,j\in[1,n+1]. \tag{60}\]

Now, we are in a position to prove the main theorem of this section:

**Theorem 5.4**.: _The type \(A_{n}\) q-Toda Hamiltonians from the network formulation are related to the type \(A_{n}\) q-Toda Hamiltonians from the Lax formulation by the formula_

\[H_{i}^{\mathbf{i}_{n}}=(w_{1}^{-1}\cdots w_{n+1}^{-1})H_{i+1}^{(0,\vec{Q}_{n-1},0)},\quad i\in[1,n] \tag{61}\]

Proof.: We will proceed by induction on the quiver blocks. The base case is the type \(A_{2}\) quiver, consisting of the single quiver block \(Q_{1}\).
By counting paths on the associated directed network for the 3 possible quiver blocks in **Figure 7**, we obtain \[\begin{split} H_{1}^{(-1,-2,1,2)}&=X_{1,1}+X_{2,2}+X_{ 3,3}+X_{1,2}+X_{2,3}\\ H_{2}^{(-1,-2,1,2)}&=X_{1,1}X_{2,2}+X_{1,1}X_{3,3}+X_{ 2,2}X_{3,3}+X_{1,1}X_{2,3}+X_{1,2}X_{3,3}\\ H_{1}^{(-2,-1,1,2)}&=X_{1,1}+X_{2,2}+X_{3,3}+X_{ 1,2}+X_{2,3}+X_{1,3}\\ H_{2}^{(-2,-1,1,2)}&=X_{1,1}X_{2,2}+X_{1,1}X_{3,3}+X_{ 2,2}X_{3,3}+X_{1,1}X_{2,3}+X_{1,2}X_{3,3}\\ H_{1}^{(-1,-2,2,1)}&=X_{1,1}+X_{2,2}+X_{3,3}+X_{ 1,2}+X_{2,3}\\ H_{2}^{(-1,-2,2,1)}&=X_{1,1}X_{2,2}+X_{1,1}X_{3,3}+X_{ 2,2}X_{3,3}+X_{1,1}X_{2,3}+X_{1,2}X_{3,3}\\ &\quad+X_{1,2}X_{2,3}\end{split} \tag{62}\] On the other hand, the 3 sets of Hamiltonians obtained via Lax matrices are \[\begin{split} H_{2}^{(0,0,0)}&=w_{1}^{-1}w_{2}w_{3}+w_{ 1}w_{2}^{-1}w_{3}+w_{1}w_{2}w_{3}^{-1}+w_{3}D_{1}D_{2}^{-1}+w_{1}D_{2}D_{3}^{-1 }\\ H_{3}^{(0,0,0)}&=w_{1}w_{2}^{-1}w_{3}^{-1}+w_{1}^{- 1}w_{2}w_{3}^{-1}+w_{1}^{-1}w_{2}^{-1}w_{3}+w_{3}^{-1}D_{1}D_{2}^{-1}+w_{1}^{-1 }D_{2}D_{3}^{-1}\\ & H_{2}^{(0,1,0)}&=w_{1}^{-1}w_{2}w_{3}+w_{1}w_{2}^{- 1}w_{3}+w_{1}w_{2}w_{3}^{-1}+w_{2}^{-1}w_{3}D_{1}D_{2}^{-1}+w_{1}w_{2}^{-1}D_{2 }D_{3}^{-1}\\ &\quad\quad+w_{2}^{-1}D_{1}D_{3}^{-1}\\ H_{3}^{(0,1,0)}&=w_{1}w_{2}^{-1}w_{3}^{-1}+w_{1}^{- 1}w_{2}w_{3}^{-1}+w_{1}^{-1}w_{2}^{-1}w_{3}+w_{2}^{-1}w_{3}^{-1}D_{1}D_{2}^{-1} \\ &\quad\quad+w_{1}^{-1}w_{2}^{-1}D_{2}D_{3}^{-1}\end{split} \tag{65}\] \[\begin{split} H_{2}^{(0,-1,0)}&=w_{1}^{-1}w_{2}w_{3}+w _{1}w_{2}^{-1}w_{3}+w_{1}w_{2}w_{3}^{-1}+w_{2}w_{3}D_{1}D_{2}^{-1}+w_{1}w_{2}D_ {2}D_{3}^{-1}\\ H_{3}^{(0,-1,0)}&=w_{1}w_{2}^{-1}w_{3}^{-1}+w_{1}^{- 1}w_{2}w_{3}^{-1}+w_{1}^{-1}w_{2}^{-1}w_{3}+w_{2}w_{3}^{-1}D_{1}D_{2}^{-1}\\ &\quad\quad+w_{1}^{-1}w_{2}D_{2}D_{3}^{-1}+w_{2}D_{1}D_{3}^{-1} \end{split} \tag{66}\] Thus, we can see that under our homomorphism, \[\begin{split} H_{i}^{(-1,-2,1,2)}&=(w_{1}^{-1}w_{2}^ {-1}w_{3}^{-1})H_{i+1}^{(0,0,0)}\\ H_{i}^{(-2,-1,1,2)}&=(w_{1}^{-1}w_{2}^{-1}w_{3}^{-1}) H_{i+1}^{(0,1,0)}\\ H_{i}^{(-1,-2,2,1)}&=(w_{1}^{-1}w_{2}^{-1}w_{3}^{-1}) H_{i+1}^{(0,-1,0)}.\end{split} \tag{68}\] Now, assume the inductive hypothesis for the first \(n-2\) blocks \(Q_{1},\ldots,Q_{n-2}\) for type \(A_{n-1}\). To prove the inductive step, here are 3 cases to investigate corresponding to \(Q_{n-1}=0,1,-1\). If \(Q_{n-1}=0\), we obtain 2 new paths corresponding to the weights \(X_{n,n+1}\) and \(X_{n+1,n+1}\), which gives us \[(H_{i}^{\mathsf{i_{n}}})_{0}=H_{i}^{\mathsf{i_{n-1}}}+X_{n+1,n+1}H_{i-1}^{ \mathsf{i_{n-1}}}+X_{n,n+1}H_{i-1}^{\mathsf{i_{n-2}}}. \tag{69}\] Using the inductive hypothesis, it follows \[\begin{split}(H_{i}^{\mathsf{i_{n}}})_{0}&=(w_{1}^{- 1}\cdots w_{n}^{-1})H_{i+1}^{(0,\vec{Q}_{n-2,0})}+\omega_{n+1}^{-2}(w_{1}^{-1} \cdots w_{n}^{-1})H_{i}^{(0,\vec{Q}_{n-2,0})}+\hat{\sigma}_{n,n+1}D_{n}D_{n+1}^ {-1}(w_{1}^{-1}\cdots w_{n-1}^{-1})H_{i}^{(0,\vec{Q}_{n-3,0})}\\ &=(w_{1}^{-1}\cdots w_{n+1}^{-1})(w_{n+1}H_{i+1}^{(0,\vec{Q}_{n-2,0 })}+w_{n+1}^{-1}H_{i}^{(0,\vec{Q}_{n-2,0})}+\sigma_{n,n+1}D_{n}D_{n+1}^{-1}H_{ i}^{(0,\vec{Q}_{n-3,0})})\\ &=(w_{1}^{-1}\cdots w_{n+1}^{-1})H_{i+1}^{(0,\vec{Q}_{n-1,0})} \end{split} \tag{70}\] Now, suppose \(Q_{n-1}=1\). 
It will make things easier if we split up the quiver vector in the following way: \[\vec{Q}_{n-1}=(\vec{Q}_{n-1,j_{r}+1},\vec{Q}_{j_{r},j_{r-1}+1},\ldots,\vec{Q}_ {j_{1},j_{0}+1},Q_{j_{0}},\vec{Q}_{j_{0}-1}) \tag{71}\] where \(\vec{Q}_{i,j}=(Q_{i},Q_{i-1},\ldots,Q_{j})\) such that \(\vec{Q}_{n-1,j_{r}+1}=(1,1,\ldots,1),\vec{Q}_{j_{r},j_{r-1}+1}=(-1,-1,\ldots,-1),\ldots\) and \(Q_{j_{0}}=0\) is the leftmost entry equal to 0. In addition to the paths \(X_{n,n+1},X_{n+1,n+1}\), the subvector \(\vec{Q}_{n-1,j_{r}+1}\) offers the paths \(X_{j_{r}+1,n+1},\ldots,X_{n-1,n+1}\). The contribution to the \(i\)-th Hamiltonian corresponding to these paths is \[(H_{i}^{\mathrm{i}_{n}})_{1}=\sum_{m=j_{r}}^{n-2}X_{m+1,n+1}H_{i-1}^{\mathrm{i}_{ m-1}}. \tag{72}\] Moving on to the subvector \(\vec{Q}_{j_{r},j_{r-1}+1}=(-1,-1,\ldots,-1)\), inspection of the directed network tells us that there are no new path contributions. However, the paths \(X_{m,m+1}\) and \(X_{m+1,m+2}\) no longer intersect for \(m\in[j_{r-1}+1,j_{r}]\). Furthermore, \(X_{j_{r},j_{r}+1}\) does not intersect \(X_{j_{r}+1,n+1}\). Therefore, all of these paths can be multiplied together to obtain terms in the Hamiltonian. Hence, for every \(m\in[j_{r-1},j_{r}-1]\), we obtain the term \[X_{m+1,m+2}X_{m+2,m+3}\cdots X_{j_{r},j_{r}+1}X_{j_{r}+1,n+1}H_{i-1-(j_{r}-m)}^ {\mathbf{i}_{m-1}} \tag{73}\] Therefore, the contribution to the \(i\)-th Hamiltonian from the subvector \(\vec{Q}_{j_{r},j_{r-1}+1}\) is \[(H_{i}^{\mathrm{i}_{n}})_{2}=\sum_{m=j_{r-1}}^{j_{r}-1}X_{m+1,m+2}\cdots X_{j _{r},j_{r}+1}X_{j_{r}+1,n+1}H_{i-1-(j_{r}-m)}^{\mathbf{i}_{m-1}} \tag{74}\] The next subvector \(\vec{Q}_{j_{r-1},j_{r-2}+1}=(1,1,\ldots 1)\) gives us the paths \(X_{j_{r-2}+1,j_{r-1}+2},\ldots X_{j_{r-1},j_{r-1}+2}\), which do not intersect the paths from \(\vec{Q}_{j_{r},j_{r-1}+1}\). Thus, we can multiply all of these paths together to obtain the contribution \[(H_{i}^{\mathbf{i}_{n}})_{3}=\sum_{m=j_{r-2}}^{j_{r-1}-1}X_{m+1,j_{r-1}+2}(X_{ j_{r-1}+2,j_{r-1}+3}\cdots X_{j_{r},j_{r}+1}X_{j_{r}+1,n+1})H_{i-1-(j_{r}-m)}^{ \mathbf{i}_{m-1}} \tag{75}\] It now becomes clear that next contribution from \(\vec{Q}_{j_{r-2},j_{r-3}+1}=(-1,1,\ldots,-1)\) is \[(H_{i}^{\mathbf{i}_{n}})_{4}= \sum_{m=j_{r-3}}^{j_{r-2}-1}X_{m+1,m+2}\cdots X_{j_{r-2},j_{r-2}+ 1}(X_{j_{r-2}+1,j_{r-1}+2})\] \[\times(X_{j_{r-1}+2,j_{r-1}+3}\cdots X_{j_{r},j_{r}+1}X_{j_{r}+1, n+1})H_{i-1-(j_{r}-j_{r-1})-(j_{r-2}-m)}^{\mathbf{i}_{m-1}} \tag{76}\] and so forth. Once we get to \(Q_{j_{0}}=0\), then there are no more extra paths to consider that are not already terms in a lower Hamiltonian. Now, recall that we assigned the quantized path weights \(X_{i,j}=\hat{\sigma}_{i,j}D_{i}D_{j}^{-1}\) for \(i\neq j\). Using this fact, it follows \[\begin{split}\hat{\sigma}_{m+1,n+1}D_{m+1}D_{n+1}^{-1}& =X_{m+1,n+1}\\ &=X_{m+1,m+2}\cdots X_{j_{r},j_{r}+1}X_{j_{r}+1,n+1}\\ &=X_{m+1,j_{r-1}+2}(X_{j_{r-1}+2,j_{r-1}+3}\cdots X_{j_{r},j_{r}+ 1}X_{j_{r}+1,n+1})\\ &=X_{m+1,m+2}\cdots X_{j_{r-2},j_{r-2}+1}(X_{j_{r-2}+1,j_{r-1}+2}) \\ &\quad\times(X_{j_{r-1}+2,j_{r-1}+3}\cdots X_{j_{r},j_{r}+1}X_{j_{r }+1,n+1})\end{split} \tag{77}\] So, we see that all of the coefficients in front of the Hamiltonians are equal to the weight \(\hat{\sigma}_{m+1,n+1}D_{m+1}D_{n+1}^{-1}\). 
Thus, we have \[H_{i}^{\mathrm{I_{n}}}=(H_{i}^{\mathrm{I_{n}}})_{0}+\sum_{m=j_{0}}^{n-2}\hat{ \sigma}_{m+1,n+1}D_{m+1}D_{n+1}^{-1}H_{i-1-A_{r,m}}^{\mathrm{I_{m-1}}} \tag{78}\] where \(A_{r,m}=(j_{r}-j_{r-1})+(j_{r-2}-j_{r-3})+\cdots+(j_{r-N}-m)\) for some \(1\leq N\leq r\). By construction, \(A_{r,m}\) is equal to the number of entries in the subvector \(\vec{Q}_{n-1,m+1}\) that are equal to \(-1\). By identifying \(\vec{k}_{n}=(0,\vec{Q}_{n-1},0)\), we can see that \(S_{n,m+1}=A_{r,m}\). Lastly, notice that the leftmost entry in \(\vec{Q}\) equal to 0 is the starting point for the sum. Hence, we can write this as a sum from \(m=0\) and insert \(k_{m+2,n}\) into the summand. Therefore, it follows \[\begin{split} H_{i}^{\mathrm{I_{n}}}&=(H_{i}^{ \mathrm{I_{n}}})_{0}+\sum_{m=0}^{n-2}k_{m+2,n}\hat{\sigma}_{m+1,n+1}D_{m+1}D_{ n+1}^{-1}H_{i-1-S_{n,m+1}}^{\mathrm{I_{m-1}}}\\ &=(w_{1}^{-1}\cdots w_{n+1}^{-1})\left((H_{i+1}^{(0,\vec{Q}_{n-1 },0)})_{0}+\sum_{m=0}^{n-2}k_{m+2,n}\sigma_{m+1,n+1}D_{m+1}D_{n+1}^{-1}H_{i-S_{ n,m+1}}^{(0,\vec{Q}_{m-2},0)}\right)\\ &=(w_{1}^{-1}\cdots w_{n+1}^{-1})H_{i+1}^{(0,\vec{Q}_{n-1},0)}. \end{split} \tag{79}\] The case where \(Q_{n-1}=-1\) is treated analogously. ### Type C Using the Lax formalism for \(C_{n}\) in [10], we provide a recursive formula for the type \(C_{n}\) q-Toda Hamiltonians in terms of type \(A\) q-Toda Hamiltonians. Similarly to the previous section, we will use this formula to prove a bijection between the q-Toda Hamiltonians obtained via Lax matrices vs. quantized weights on the type C directed network. **Theorem 5.5**.: _Let \(\vec{k}_{n}\in\{-1,0,1\}^{n}\) be an index vector for the double complete mixed monodromy matrix \(\mathbb{T}_{\vec{k}_{n}}^{v}(z)\). Then the \(i\)-th type \(C_{n}\) q-Toda Hamiltonian associated to \(\vec{k}_{n}\) can be written as_ \[\begin{split}\mathbb{H}_{i}^{\vec{k}_{n}}&=\sum_{j=1 }^{n+1}\left[H_{n+1+j-i}^{\vec{k}_{n}}H_{j}^{\vec{k}_{n}}+\omega_{n}^{-2k_{n}} D_{n}^{2}H_{n+1+j-i}^{\vec{k}_{n-1}}H_{j}^{\vec{k}_{n-1}}\right]+\sum_{j=1}^{n+1} \sum_{m=0}^{n-2}k_{m+2,n}\sigma_{m+1,n}\\ &\quad\times\left[H_{n+1+j-i}^{\vec{k}_{n-1}}(\omega_{n}^{-k_{n} }D_{m+1}D_{n})H_{j-S_{n,m+1}}^{\vec{k}_{m}}+H_{n+1+j-i+S_{n,m+1}}^{\vec{k}_{m}} (\omega^{-k_{n}}D_{m+1}D_{n})H_{j}^{\vec{k}_{n-1}}\right]\\ &\quad+\sum_{j=1}^{n+1}\sum_{m,m^{\prime}=0}^{n-2}k_{m+2,n}k_{m^{ \prime}+2,n}\sigma_{m+1,n}\sigma_{m^{\prime}+1,n}H_{j+i-n-1+S_{m^{\prime}+1,m+ 1}}^{\vec{k}_{m^{\prime}}}(\omega_{n}^{-k_{n}}D_{m^{\prime}+1}D_{m+1}D_{n})H_{ j}^{\vec{k}_{m}}\end{split} \tag{80}\] Proof.: Recall the 3 (local) Lax matrices defined in [10], \(\bar{L}_{i}^{-k_{i}}(z)\). Observe that these matrices are related to the original matrices \(L_{i}^{k_{i}}(z)\) in the following way: \[\bar{L}_{i}^{-k_{i}}(z)=-[L_{i}^{k_{i}}(z^{-1})]^{T}. \tag{81}\] Hence, the _double complete mixed monodromy matrix_ can be expressed as \[\begin{split}\mathbb{T}^{v}_{\vec{k}_{n}}(z)&=\bar{L}^{ v,-k_{1}}_{1}(z)\cdots\bar{L}^{v,-k_{n}}_{n}(z)L^{v,k_{n}}_{n}(z)\cdots L^{v,k_{1}}_{1}(z) \\ &=(-1)^{n}[L^{k_{1}}_{1}(z^{-1})]^{T}\cdots[L^{k_{n}}_{n}(z^{-1}) ]^{T}\cdot T^{v}_{\vec{k}_{n}}(z)\\ &=(-1)^{n}[L^{k_{n}}_{n}(z^{-1})\cdots L^{k_{1}}_{1}(z^{-1})]^{T} \cdot T^{v}_{\vec{k}_{n}}(z)\\ &=(-1)^{n}[T^{v}_{\vec{k}_{n}}(z^{-1})]^{T}\cdot T^{v}_{\vec{k}_ {n}}(z)\end{split} \tag{82}\] where \(T^{v}_{\vec{k}_{n}}(z)\) is the type A _complete mixed monodromy matrix_. 
By direct multiplication and using the convention that \(\mathbb{H}^{\vec{k}_{n}}_{i}\) is the coefficient of \(z^{-n+i-1}\), the formula follows. **Corollary 5.6**.: _Let \(\vec{k}_{n}=(k_{n},\ldots,k_{1})\) be an index vector. Suppose \(\vec{k}^{\prime}_{n}=(k_{n},\ldots,k_{2},0)\) is another index vector. Then it follows_ \[\mathbb{H}^{\vec{k}_{n}}_{i}=\mathbb{H}^{\vec{k}^{\prime}_{n}}_{i}. \tag{83}\] Now, recall the 3 possible quiver blocks \(Q_{i}=0,1,-1\) from the previous section. These possible blocks are the same for type \(C_{n}\) with the exception the top block \(Q_{n-1}\) has double the amount of arrows, as illustrated in **Figure 9**. The directed subnetwork corresponding to the different quiver blocks is given in **Figure 10**. Furthermore, in **Figure 10** we give show the directed networks corresponding to the top block \(Q_{n-1}\). **Definition 5.7**.: The quantized weights for the possible types of paths in a type \(C_{n}\) directed network are \[X_{i,j}=\begin{cases}\omega_{i}^{-2}&\text{if $i=j$ and $i\leq n$}\\ \omega_{2n+1-i}^{2}&\text{if $i=j$ and $i\geq n+1$}\\ \hat{\sigma}_{i,j}D_{i}D_{j}^{-1}&\text{if $i\neq j$ and $j\leq n$}\\ v^{-1}\hat{\sigma}_{i,n}D_{i}D_{n}&\text{if $i\neq j$ and $j=n+1$}\\ \tilde{\sigma}_{2n+1-j,2n+1-i}D_{2n+1-j}D_{2n+1-i}^{-1}&\text{if $i\neq j$ and $i\geq n+1$}\\ v\tilde{\sigma}_{2n+1-j,n}D_{2n+1-j}D_{n}&\text{if $i\neq j$ and $i=n$}\\ v^{-Q_{n-1}}w_{n}^{-2Q_{n-1}}D_{n}^{2}&\text{if $i=n$ and $j=n+1$}\end{cases} \tag{84}\] where \(\tilde{\sigma}_{i,j}=w_{i}^{-Q_{i}+1}w_{i+1}^{-Q_{i+1}+1}\cdots w_{j}^{-Q_{j}+1}\). **Lemma 5.8**.: _Suppose \(N_{u,v}(\mathbf{i}_{n})\) is a type \(C_{n}\) directed network for an unmixed double Coxeter word \(\mathbf{i}_{n}\) associated to \((u,v)\in W\times W\). Let \(N_{u,v}(\mathbf{i})_{i,j}\) be the subnetwork consisting of rows \(i\) to \(j\) where \(i\leq j\), and let \((\mathbb{H}^{\mathbf{i}_{n}}_{i})_{i,j}\) be the corresponding Hamiltonians. Then_ \[\begin{split}(\mathbb{H}^{\mathbf{i}_{n}}_{i})_{1,m}& =(w_{1}^{-1}\cdots w_{m}^{-1})H_{i+1}^{(0,\tilde{Q}_{m-2},0)}, \quad m\in[2,n]\\ (\mathbb{H}^{\mathbf{i}_{n}}_{i})_{m^{\prime},2n}& =(w_{1}\cdots w_{2n+1-m^{\prime}})H_{n+1-i}^{(0,\tilde{Q}_{2n-1-m^{\prime}},0) },\quad m^{\prime}\in[n+1,2n-1]\end{split} \tag{85}\] _where \(H_{i}^{\vec{k}_{m}}\) are the type \(A_{m}\) Lax Hamiltonians associated to \(\vec{k}_{m}\) for \(m\leq n\), and \(\bar{H}_{i}^{\vec{k}_{m}}\) are the same Hamiltonians with \(w_{i}\mapsto w_{i}^{-1}\)._ **Theorem 5.9**.: _The type \(C_{n}\) q-Toda Hamiltonians from the network formulation with path weights given by **Definition 5.7** are related to the type \(C_{n}\) q-Toda Hamiltonians from the Lax formulation Figure 10. Directed Networks associated to Quiver Blocks in [1] by the formula_ \[\mathbb{H}_{i}^{\mathbf{i}_{n}}=\mathbb{H}_{i+1}^{(Q_{n-1},0)},\quad i\in[1,n] \tag{86}\] Proof.: First, consider the case \(Q_{n-1}=0\). Then we can obtain terms by multiplying Hamiltonians from the top \(n\) rows with Hamiltonians from the bottom \(n\) rows, i.e., \[(\mathbb{H}_{i-j}^{\mathbf{i}_{n}})_{n+1,2n}(\mathbb{H}_{j}^{\mathbf{i}_{n}})_ {1,n},\quad j\in[0,i] \tag{87}\] We can also obtain terms by multiplying the Hamiltonians from the top \(n-1\) rows with \(X_{n,n+1}\) and the bottom \(n-1\) rows, i.e., \[(\mathbb{H}_{i-j}^{\mathbf{i}_{n}})_{n+2,2n}X_{n,n+1}(\mathbb{H}_{j}^{\mathbf{ i}_{n}})_{1,n-1},\quad j\in[0,i] \tag{88}\] Figure 11. 
Directed Networks associated to \(Q_{n-1}\) Therefore, \[\begin{split}(\mathbb{H}_{i}^{\mathbb{H}_{n}})_{0}&= \sum_{j=0}^{i}\left[(\mathbb{H}_{i-j}^{\mathbb{H}_{n}})_{n+1,2n}(\mathbb{H}_{j}^ {\mathbb{H}_{n}})_{1,n}+(\mathbb{H}_{i-j}^{\mathbb{H}_{n}})_{n+2,2n}X_{n,n+1}( \mathbb{H}_{j}^{\mathbb{H}_{n}})_{1,n-1}\right]\\ &=\sum_{j=0}^{i}\left[H_{n+1+j-i}^{(0,\bar{Q}_{n-2},0)}H_{j+1}^{( 0,\bar{Q}_{n-2},0)}+H_{n+j-i}^{(0,\bar{Q}_{n-3},0)}(w_{n}^{-2Q_{n-1}}D_{n}^{2}) H_{j+1}^{(0,\bar{Q}_{n-3},0)}\right]\\ &=(\mathbb{H}_{i+1}^{(\bar{Q}_{n-1},0)})_{0}\end{split} \tag{89}\] where in the second equality we used **Lemma 5.8**. Again, using **Lemma 5.8** and **Theorem 5.1**, it follows the bottom \(n\) rows give \[\begin{split}(\mathbb{H}_{j}^{\mathbb{H}_{n}})_{1,n}& =(w_{1}^{-1}\cdots w_{n}^{-1})H_{j+1}^{(0,\bar{Q}_{n-2},0)}\\ &=(w_{1}^{-1}\cdots w_{n}^{-1})\left(w_{n}H_{j+1}^{(0,\bar{Q}_{n- 3},0)}+w_{n+1}^{-1}H_{j}^{(0,\bar{Q}_{n-3},0)}+\sigma_{n-1,n}D_{n-1}D_{n}^{-1} H_{j}^{(0,\bar{Q}_{n-4},0)}\right.\\ &\quad+\sum_{m=0}^{n-3}k_{m+2,n}\sigma_{m+1,n}D_{m+1}D_{n}^{-1}H_ {j-S_{n-1,m+1}}^{(0,\bar{Q}_{m-2},0)}\Bigg{)}\end{split} \tag{90}\] and the top \(n\) rows give us \[\begin{split}(\mathbb{H}_{j}^{\mathbb{H}_{n}})_{n+1,2n}& =(w_{1}\cdots w_{n})H_{n+1-j}^{(0,\bar{Q}_{n-2},0)}\\ &=(w_{1}\cdots w_{n})\left(w_{n}H_{n+2-j}^{(0,\bar{Q}_{n-3},0)}+w_ {n+1}^{-1}H_{n+1-j}^{(0,\bar{Q}_{n-3},0)}+\sigma_{n-1,n}D_{n-1}D_{n}^{-1}H_{n+ 1-j}^{(0,\bar{Q}_{n-4},0)}\right.\\ &\quad+\left.\sum_{m=0}^{n-3}k_{m+2,n}\sigma_{m+1,n}D_{m+1}D_{n}^ {-1}H_{n+1-j-S_{n-1,m+1}}^{(0,\bar{Q}_{m-2},0)}\right).\end{split} \tag{91}\] Now, suppose \(Q_{n-1}=1\). In addition to the contribution \((\mathbb{H}_{i}^{\mathbb{H}_{n}})_{0}\), there are additional contributions from new paths \(X_{i,n+1}\) (as discussed in the type A proof) and from the newly non-intersecting paths \(X_{n,n+1},X_{n+1,n+2}\). In particular, the last term in \((\mathbb{H}_{j}^{\mathbb{H}_{n}})_{1,n}\), \((\mathbb{H}_{j}^{\mathbb{H}_{n}})_{1,n}\) can be multiplied by the weight \(X_{n,n+1}\) to account for the new path contributions in the first case and the non-intersecting paths in the second case. Furthermore, the corresponding paths for the 2 cases do not intersect with the Hamiltonians \((\mathbb{H}_{j}^{\mathbb{H}_{n}})_{n+2,2n}\) and \((\mathbb{H}_{j}^{\mathbb{H}_{n}})_{1,n-1}\), respectively. Explicitly, we obtain the contributions \[\begin{split}(\mathbb{H}_{i-j+S_{n-1,m+1}}^{\mathbb{H}_{n}})_{n+2,2n}X_{n,n+1}\left(\sum_{m=0}^{n-3}k_{m+2,n}\sigma_{m+1,n}D_{m+1}D_{n}^{-1}H_{ j-S_{n-1,m+1}}^{(0,\bar{Q}_{m-2},0)}\right)\\ &\quad=\sum_{m=0}^{n-2}k_{m+2,n}\sigma_{m+1,n}H_{n+1+j-i-S_{n,m+1} }^{\bar{Q}_{n-3}}(\omega_{n}^{-Q_{n-1}}D_{m+1}D_{n})H_{j-S_{n,m+1}}^{\bar{Q}_{ m-2}}\end{split} \tag{92}\] and \[\begin{split}&\left(\sum_{m=0}^{n-3}k_{m+2,n}\sigma_{m+1,n}D_{m+1}D_{ n}^{-1}H_{n+1+j-i-S_{n-1,m+1}}^{(0,\bar{\mathcal{Q}}_{m-2},0)}\right)X_{n,n+1}(\mathbb{H }_{j}^{\mathbb{i}n})_{1,n-1}\\ &\quad=\sum_{m=0}^{n-2}k_{m+2,n}\sigma_{m+1,n}H_{n+1+j-i+S_{n,m+1} }^{\bar{\mathcal{Q}}_{n-3}}(\omega_{n}^{-Q_{n-1}}D_{m+1}D_{n})H_{j}^{\bar{ \mathcal{Q}}_{m-2}}.\end{split} \tag{93}\] Finally, we obtain one last contribution coming from the observation that the paths corresponding to the last terms in \((\mathbb{H}_{j}^{\mathbb{i}n})_{1,n}\), \((\mathbb{H}_{j}^{\mathbb{i}n})_{1,n}\) can be multiplied together along with \(X_{n,n+1}\). 
This gives us the last contribution: \[\sum_{m,m^{\prime}=0}^{n-2}k_{m+2,n}k_{m^{\prime}+2,n}\sigma_{m+1,n}\sigma_{m^ {\prime}+1,n}H_{j+i-n-1+S_{m^{\prime}+1,m+1}}^{\bar{\mathcal{Q}}_{m^{\prime}- 2}}(\omega_{n}^{-Q_{n-1}}D_{m^{\prime}+1}D_{m+1}D_{n})H_{j}^{\bar{\mathcal{Q}} _{m-2}} \tag{94}\] By adding up all these contributions, we obtain the equation in **Theorem 5.5** under the identification \(\vec{k}_{n}=(\vec{Q}_{n-1},0)\), proving the theorem. The case of \(Q_{n-1}=-1\) is treated analogously to \(Q_{n-1}=1\).
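To close this section, here is a small sanity check, written for this note, of the transpose relation (81) between the Lax matrices of Eqs. (41) and (46) (using the closed form (53) for the unbarred matrices). Corresponding matrix entries agree as written, with no reordering of \(w\) and \(D\), so commuting stand-ins suffice for an entry-by-entry comparison:

```python
import sympy as sp

# Commuting stand-ins for w_i, D_i: legitimate here because Eq. (81) matches
# corresponding entries as written, without reordering any products.
w, D, z = sp.symbols('w D z', positive=True)

def L(k):
    # Local Lax matrices of Eq. (41), via the closed form (53) with s = (k - 1)/2.
    s = sp.Rational(k - 1, 2)
    return sp.Matrix([[z**(s + 1)/w - w*z**s, z**(s + 1)/(w**k*D)],
                      [-D*z**s/w**k, -k/w**k]])

def Lbar(k):
    # Barred local Lax matrices of Eq. (46).
    if k == 0:
        return sp.Matrix([[w*sp.sqrt(z) - 1/(w*sp.sqrt(z)), D*sp.sqrt(z)],
                          [-1/(D*sp.sqrt(z)), 0]])
    if k == -1:
        return sp.Matrix([[w - 1/(w*z), D/w], [-1/(w*D*z), 1/w]])
    return sp.Matrix([[w*z - 1/w, w*D*z], [-w/D, -w]])  # k == 1

for k in (-1, 0, 1):
    # Eq. (81): Lbar^{-k}(z) = -[L^{k}(1/z)]^T, i.e. the sum below vanishes.
    assert sp.simplify(Lbar(-k) + L(k).subs(z, 1/z).T) == sp.zeros(2, 2)
print("Relation (81) holds entry-by-entry for k = -1, 0, 1")
```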
2310.05709
Joint Optimization of seismometer arrays for the cancellation of Newtonian noise from seismic body waves in the Einstein Telescope
Seismic Newtonian noise is predicted to limit the sensitivity of the Einstein Telescope. It can be reduced with coherent noise cancellation techniques using data from seismometers. To achieve the best results, it is important to place the seismic sensors in optimal positions. A preliminary study on this topic was conducted for the Einstein Telescope (ET): it focused on the optimization of the seismic array for the cancellation of Newtonian noise at an isolated test mass. In this paper, we expand the study to include the nested shape of ET, i.e., four test masses of the low-frequency interferometers at each vertex of the detector. Results are investigated as a function of the polarization content of the seismic field composed of body waves. The study also examines how performance can be affected by displacing the sensor array from its optimal position or by operating at frequencies other than those used for optimization.
Francesca Badaracco, Jan Harms, Luca Rei
2023-10-09T13:31:21Z
http://arxiv.org/abs/2310.05709v1
Joint Optimization of seismometer arrays for the cancellation of Newtonian noise from seismic body waves in the Einstein Telescope ###### Abstract Seismic Newtonian noise is predicted to limit the sensitivity of the Einstein Telescope. It can be reduced with coherent noise cancellation techniques using data from seismometers. To achieve the best results, it is important to place the seismic sensors in optimal positions. A preliminary study on this topic was conducted for the Einstein Telescope (ET): it focused on the optimization of the seismic array for the cancellation of Newtonian noise at an isolated test mass. In this paper, we expand the study to include the nested shape of ET, i.e., four test masses of the low-frequency interferometers at each vertex of the detector. Results are investigated as a function of the polarization content of the seismic field composed of body waves. The study also examines how performance can be affected by displacing the sensor array from its optimal position or by operating at frequencies other than those used for optimization.

## 1 Introduction

Third-generation gravitational wave (GW) detectors will have enhanced sensitivity compared to current GW detectors. The Einstein Telescope (ET), in particular, is designed to push the observation band down to about \(3\,\mathrm{Hz}\) [7]. At these frequencies, two major hindrances are seismic noise and Newtonian noise (NN). Seismic noise can be suppressed using suspension systems and/or active seismic isolation. Newtonian noise is closely linked to seismic noise, as it is produced by the gravity fluctuations induced by seismic waves, which cause density variations in the elastic medium where they propagate [8]. ET will be built underground to reduce the impact of seismic noise (and therefore NN), but a NN cancellation system will still be necessary to achieve the ET design sensitivity. This system will consist of a Wiener Filter (WF) that estimates the NN affecting ET using data collected by a seismic array [4, 5]. The array configuration must be optimized to maximize the cancellation capabilities of the system [6, 5, 2, 3]. In total, ET has 24 test masses (TMs), half of which form the high-frequency interferometers, where NN plays a minor role. So, when it comes to NN in ET, we can focus on the 12 TMs of the ET low-frequency interferometers [7]. The work presented in Ref. [2] is a first attempt to understand the geometry and robustness of a seismic array for NN cancellation in underground environments. Since the seismic field is composed of both P-waves (compressional) and S-waves (shear), the effectiveness of an array is reduced because correlations are affected by the presence of two types of seismic waves, each with a different propagation velocity. Ref. [2] optimized for a single TM and assumed a homogeneous and isotropic seismic field. This type of optimization would work well only for an isolated end TM of an interferometer. However, the two input TMs of the interferometer arms are close to each other, and due to ET's triangular configuration, the input TMs of one interferometer are only a few hundred meters away from the end test masses of another interferometer (see Fig. 1). To perform the optimization of the array configuration around the four (relevant) test masses of the ET detector at each vertex, correlations of NN between TMs must be taken into account, and the cost function is defined by the residuals after noise cancellation.
In the following sections, we will introduce the cost function and the equations necessary to evaluate it, then present and discuss the optimization results.

## 2 Cost function choice

An array optimization for ET should include all four TMs at a vertex to account for the possibility that a deployed seismometer can be used to subtract NN in all three ET Low-Frequency (ET-LF) interferometers (see black rectangle in Fig. 1). Optimizing arrays for each TM separately is less efficient (in terms of number of seismometers) than simultaneously considering all the TMs of one corner. Joint optimization can be performed using a cost function that takes into account NN from the four TMs and its correlations between TMs during each step of the optimization. The NN residuals left in all three ET-LF interferometers after subtracting out the estimated NN contributions from a vertex serve to define the cost function. The WF is built to be the optimal filter for linear problems such as NN cancellation [13]. The frequency-domain residual can be expressed as in Ref. [2]:

\[R(\omega)=1-\frac{\vec{C}_{\rm SN}^{\dagger}(\omega)\mathbf{C}_{\rm SS}^{-1}(\omega)\vec{C}_{\rm SN}(\omega)}{C_{\rm NN}(\omega)}, \tag{1}\]

where \(\vec{C}_{\rm SN}\) is the vector containing the cross-power spectral densities between the witness sensors (the seismic sensors in the array) and the target signal (ET-LF NN), \(\mathbf{C}_{\rm SS}\) is the matrix of the cross-power spectral densities between the witness sensors, while \(C_{\rm NN}\) is the power spectral density of NN contributed by the TMs of a vertex. To optimize the array simultaneously considering the two input TMs of one interferometer and the end TMs of the other two interferometers, it is necessary to use both the residual for the two input TMs (\(R_{\rm input}\)) and two other residuals: one for each end TM of the other two interferometers (\(R_{\rm end1}\) and \(R_{\rm end2}\)). The global cost function will then be defined as:

\[\mathcal{L}=\max_{k\in\{{\rm input},\,{\rm end1},\,{\rm end2}\}}R_{k}(\omega) \tag{2}\]

This type of joint optimization was also employed in Ref. [2] to perform a frequency broadband optimization. The meaning of Eq. 2 is that at each optimization step, the algorithm attempts to improve the worst residual among the three, thereby striving to obtain the best array for all the TMs. This kind of cost function is convenient since we know that the residual relative to each interferometer will be equal to or less than the value indicated in the plots of the results (Section 4). Moreover, we also tested an alternative cost function: the sum of the three residuals. Equation 2 gives better single residuals (one for each interferometer) compared to the sum. The global cost function contains only three residuals instead of four because the two input TMs are accounted for in the same residual. Indeed, the correlation between a sensor close to the input TMs and the GW signal depends on both input TM displacements. In the following section it will be shown how to retrieve the residual for the input TMs.

Figure 1: ET layout. At each vertex there are 4 test masses close to each other.

## 3 Residual for the input test masses

Consider one vertex of ET. There will be two input TMs belonging to one interferometer and two end TMs belonging to the other two interferometers (see Fig. 1).
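Before deriving the correlation functions that enter Eq. 1 for this configuration, note that, numerically, evaluating the residual is straightforward once \(\vec{C}_{\rm SN}\), \(\mathbf{C}_{\rm SS}\) and \(C_{\rm NN}\) are assembled. The following sketch is purely illustrative (it is not code from this study, and the input arrays would come from the seismic-field models derived below):

```python
import numpy as np

def wiener_residual(C_SN, C_SS, C_NN):
    """Noise residual of Eq. (1) at a single frequency.

    C_SN : length-N complex vector of sensor/target cross-power spectral densities
    C_SS : N x N Hermitian matrix of sensor/sensor cross-power spectral densities
    C_NN : power spectral density of the target Newtonian-noise signal
    """
    x = np.linalg.solve(C_SS, C_SN)                # C_SS^{-1} C_SN
    return 1.0 - np.real(np.vdot(C_SN, x)) / C_NN  # 1 - C_SN^dag C_SS^{-1} C_SN / C_NN
```

The global cost function of Eq. 2 is then simply the maximum of three such residuals.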
In the following, we will focus solely on the two input TMs of one interferometer and calculate the cross-correlation between the target signal (represented by the input TMs) and one sensor. Assuming that the origin of the reference system is at one vertex of ET, then the two input TMs will be located at: \[\mathbf{i}_{1}= r_{in}(1,0,0) \tag{3}\] \[\mathbf{i}_{2}= r_{in}(1/2,\sqrt{3}/2,0) \tag{4}\] With \(r_{in}\) the distance of the input TM from the origin. The position variations of the end TM and input TM in one interferometer arm are denoted by \(\delta x_{\mathrm{e}1}\) and \(\delta x_{\mathrm{i}1}\), respectively. The total length of the interferometer arm is described by: \(L_{1}=L_{0}+\delta x_{\mathrm{e}1}-\delta x_{\mathrm{i}1}\) for one arm and \(L_{2}=L_{0}+\delta x_{\mathrm{e}2}-\delta x_{\mathrm{i}2}\) for the other arm, with \(L_{0}=10\,\mathrm{km}\). As a result, the signal detected by the interferometer will be: \[\delta L=L_{2}-L_{1}=(\delta x_{\mathrm{e}2}-\delta x_{\mathrm{i}2})-(\delta x _{\mathrm{e}1}-\delta x_{\mathrm{i}1}) \tag{5}\] However, the generic displacement \(\delta x\) can also be written as a function of the NN acceleration: \(\delta x=-\delta a_{\mathrm{NN}}/\omega^{2}\). We use an analytical NN model valid for spherical caverns sufficiently far underground (see [9] for a more detailed discussion of the underlying assumptions): \[\delta\mathbf{a}_{\mathrm{NN}}(\mathbf{r}_{0},\omega)=4\pi G\rho_{0}\left(2 \boldsymbol{\xi}^{\mathrm{P}}(\mathbf{r}_{0},\omega)\frac{j_{1}(k^{\mathrm{P} }r_{c})}{k^{\mathrm{P}}r_{c}}-\boldsymbol{\xi}^{\mathrm{S}}(\mathbf{r}_{0}, \omega)\frac{j_{1}(k^{\mathrm{S}}r_{c})}{k^{\mathrm{S}}r_{c}}\right) \tag{6}\] where \(r_{c}\) is the cavern radius and \(\boldsymbol{\xi}^{\mathrm{P,S}}\) the seismic displacement of P and S seismic waves with wave vector \(k^{\mathrm{P,S}}\), respectively. For \(k^{\mathrm{P,S}}r_{c}\ll 1\): \[\delta\mathbf{a}_{\mathrm{NN}}(\mathbf{r}_{0},\omega)=\frac{4}{3}\pi G\rho_{0 }\left(2\boldsymbol{\xi}^{\mathrm{P}}(\mathbf{r}_{0},\omega)-\boldsymbol{\xi }^{\mathrm{S}}(\mathbf{r}_{0},\omega)\right) \tag{7}\] To calculate the residual (see Eq. 1), one needs the correlation between NN and seismic displacement measured at \(\mathbf{r}_{s}\). If the measurement axis of the sensor is along the unit vector \(\hat{\mathbf{e}}_{s}\), we find: \[C_{\mathrm{SN}}=\langle\hat{\mathbf{e}}_{s}\cdot\boldsymbol{\xi}(\mathbf{r}_{ s},\omega),\delta L(\omega)/L_{0}\rangle\,. \tag{8}\] Seismic and NN correlations across the 10 km arms are assumed to be zero in the NN band, therefore: \[C_{\rm SN}(\omega)=\frac{1}{L_{0}\omega^{2}}\left\langle\hat{\mathbf{e}}_{s}\cdot \boldsymbol{\xi}(\mathbf{r}_{s},\omega),(\delta a_{\rm i2}(\mathbf{i}_{2}, \omega)-\delta a_{\rm i1}(\mathbf{i}_{1},\omega))\right\rangle \tag{9}\] Where, in the WF framework, \((\delta a_{\rm i2}(\mathbf{i}_{2},\omega)-\delta a_{\rm i1}(\mathbf{i}_{1}, \omega))/(L_{0}\omega^{2})\) will be considered the _target signal_. Moreover, considering Eq. 
7 and 9 we can write: \[C_{\rm SN}(\omega)=\frac{1}{L_{0}\omega^{2}}\left\langle\hat{\mathbf{e}}_{s} \cdot\boldsymbol{\xi}(\mathbf{r}_{s},\omega),(\hat{\mathbf{e}}_{2}\cdot\delta \mathbf{a}_{\rm NN}(\mathbf{i}_{2},\omega)-\hat{\mathbf{e}}_{1}\cdot\delta \mathbf{a}_{\rm NN}(\mathbf{i}_{1},\omega))\right\rangle \tag{10}\] and then: \[C_{\rm SN}(\omega) =\frac{1}{L_{0}\omega^{2}}\frac{4}{3}\pi G\rho_{0} \tag{11}\] \[\quad\cdot\left\langle\hat{\mathbf{e}}_{s}\cdot\boldsymbol{\xi} (\mathbf{r}_{s},\omega),\hat{\mathbf{e}}_{2}\cdot\left(2\boldsymbol{\xi}^{ \rm P}(\mathbf{i}_{2},\omega)-\boldsymbol{\xi}^{\rm S}(\mathbf{i}_{2},\omega) \right)-\hat{\mathbf{e}}_{1}\cdot\left(2\boldsymbol{\xi}^{\rm P}(\mathbf{i}_{ 1},\omega)-\boldsymbol{\xi}^{\rm S}(\mathbf{i}_{1},\omega)\right)\right\rangle\] \(C_{\rm SN}\) can be rewritten by defining \(\boldsymbol{\xi}(\mathbf{r}_{s},\omega)=\boldsymbol{\xi}^{\rm P}(\mathbf{r}_{ s},\omega)+\boldsymbol{\xi}^{\rm S}(\mathbf{r}_{s},\omega)\), and: \[\left\langle\hat{\mathbf{e}}_{s}\cdot\boldsymbol{\xi}^{\rm P}( \mathbf{r}_{s},\omega),\hat{\mathbf{e}}_{k}\cdot\boldsymbol{\xi}^{\rm P}( \mathbf{i}_{k},\omega)\right\rangle =\mathcal{C}(\xi^{\rm P},\omega)f^{\rm P}(\Phi^{\rm P}_{sk}) \tag{12}\] \[\left\langle\hat{\mathbf{e}}_{s}\cdot\boldsymbol{\xi}^{\rm S}( \mathbf{r}_{s},\omega),\hat{\mathbf{e}}_{k}\cdot\boldsymbol{\xi}^{\rm S}( \mathbf{i}_{k},\omega)\right\rangle =\mathcal{C}(\xi^{\rm S},\omega)f^{\rm S}(\Phi^{\rm S}_{sk}) \tag{13}\] where \(\mathcal{C}(\xi^{\rm S,P},\omega)\) represents the power spectral density of the seismic displacement \(\xi^{\rm S,P}\) and \(f^{\rm S,P}(\Phi^{\rm S,P}_{sk})\) is a function depending on the assumptions used for the seismic field model. In this case, a homogeneous and isotropic seismic field was assumed and \(f^{\rm S}(\Phi^{\rm S}_{sk})\) and \(f^{\rm P}(\Phi^{\rm P}_{sk})\) can be expressed as (see Ref. [8], Sec. 7.1.3): \[f^{\rm P}(\Phi^{P}_{\rm sk}) =(j_{0}(\Phi^{P}_{\rm sk})+j_{2}(\Phi^{P}_{\rm sk}))(\hat{ \boldsymbol{e}}_{s}\cdot\hat{\boldsymbol{e}}_{k})-3j_{2}(\Phi^{P}_{\rm sk})( \hat{\boldsymbol{e}}_{s}\cdot\hat{\boldsymbol{e}}_{\rm sk})(\boldsymbol{e}_{k} \cdot\hat{\boldsymbol{e}}_{\rm sk}) \tag{15}\] \[f^{\rm S}(\Phi^{S}_{\rm sk}) =(j_{0}(\Phi^{S}_{\rm sk})-\frac{1}{2}j_{2}(\Phi^{S}_{\rm sk}))( \hat{\boldsymbol{e}}_{s}\cdot\boldsymbol{e}_{k})+\frac{3}{2}j_{2}(\Phi^{S}_{\rm sk })(\hat{\boldsymbol{e}}_{s}\cdot\hat{\boldsymbol{e}}_{\rm sk})(\hat{ \boldsymbol{e}}_{k}\cdot\hat{\boldsymbol{e}}_{\rm sk}) \tag{16}\] with: \(\Phi^{\rm P,S}_{sk}=k^{\rm P,S}|\mathbf{r}_{s}-\mathbf{r}_{k}|\) and \(\mathbf{e}_{sk}=\left(\mathbf{r}_{s}-\mathbf{r}_{k}\right)/|\mathbf{r}_{s}- \mathbf{r}_{k}|\). 
Finally we get to the expression for \(C_{\rm SN}(\omega)\): \[C_{\rm SN}(\omega)=\frac{1}{L_{0}\omega^{2}}\frac{4}{3}\pi G\rho_{0}\, \mathcal{C}(\xi^{\rm tot},\omega)\left[2p\left(f^{\rm P}(\Phi^{\rm P}_{s2})-f^ {\rm P}(\Phi^{\rm P}_{s1})\right)-(1-p)\left(f^{\rm S}(\Phi^{\rm S}_{s2})-f^{ \rm S}(\Phi^{\rm S}_{s1})\right)\right] \tag{17}\] where the indices \(s\) and 1-2 are to be interpreted as the indexes relative to the sensor and the two input TMs, while \(p\) is defined as follows: \[p =\frac{\mathcal{C}(\xi^{\rm P},\omega)}{\mathcal{C}(\xi^{\rm tot}, \omega)} \tag{18}\] \[1-p =\frac{\mathcal{C}(\xi^{\rm S},\omega)}{\mathcal{C}(\xi^{\rm tot}, \omega)} \tag{19}\] Being \((\delta a_{\rm i2}({\bf i}_{2},\omega)-\delta a_{\rm i1}({\bf i}_{1},\omega))/(L_{0 }\omega^{2})\) the target signal, \(C_{\rm NN}\) will read as: \[C_{\rm NN}=\left\langle\left((\delta a_{\rm i2}({\bf i}_{2},\omega)-\delta a_{ \rm i1}({\bf i}_{1},\omega))/(L_{0}\omega^{2})\right)^{2}\right\rangle \tag{20}\] and following the same simple calculations as before, \(C_{\rm NN}\) can be expressed as: \[C_{\rm NN}=\left(\frac{1}{L_{0}\omega^{2}}\frac{4}{3}\pi G\rho_{0}\right)^{2} \mathcal{C}(\xi^{\rm tot},\omega)\left(2(3p+1)-2\left(4pf^{\rm P}(\Phi_{12}^{ \rm P})+(1-p)f^{\rm S}(\Phi_{12}^{\rm S})\right)\right) \tag{21}\] with \(\Phi_{12}^{\rm P,S}=k^{\rm P,S}|{\bf i}_{1}-{\bf i}_{2}|\). Finally, the correlation between P and S-waves have always been neglected. Now, the residual (Eq. 1) for the input TMs of one interferometer can be evaluated using Eqs. 17, and 21 along with the value for \(C_{\rm SS}\), which is identical to that in Ref. [2]: \[C_{\rm SS}(\omega)=\mathcal{C}(\xi^{\rm tot},\omega)\left[pf^{\rm P}(\Phi_{sk }^{\rm P})+(1-p)f^{\rm S}(\Phi_{sk}^{\rm S})\right], \tag{22}\] where the indexes \(s\) and \(k\) represent two seismic sensors. Note that the \(C_{\rm SS}(\omega)\) diagonal elements contain the seismometer's SNR (as explained in Reference [8], Equation (202)). The residual for the end TMs of the other two interferometers are calculated as described in Ref. [2]. ## 4 Results The Differential Evolution Optimizer, a stochastic algorithm, was chosen to find the minimum of the cost function. It was run 100 times for each array composed of \(N\) sensors to select the best configuration among those selected by each single optimization. The results of these optimizations are discussed below. In the optimization the following distances from the origin were used for the TMs positions: \(r_{in}=64.12\,\)m for the input TMs and \(r_{end}=536.35\,\)m for the end TMs. The signal-to-noise ratio assumed for the seismometers was SNR = 15, the seismic velocities for S-waves and P-waves were respectively assumed to be: \(v_{S}=4\) and \(v_{P}=6\,\) km/s and the frequency at which the array was optimized was \(10\,\)Hz. ## 5 Optimal arrays In Fig. 2, the optimal arrays obtained from 100 optimizations are shown: the sensors belonging to the array producing the lower (i.e. the best) residual are indicated with yellow stars (the other sensors are represented as blue dots). Only the sensor arrays that generate a residual no greater than twice the variance calculated over all the 100 optimizations are displayed. It can be noticed that there is a certain degree of degeneracy: in some cases the sensors tends to be located around circles and/or arches. This is in agreement with the results of the previous work (Ref [2]) and it is quite intuitive since we assumed an isotropic field. 
### Cancellation performances at different field compositions

The value of \(p\) in Eq. 18 represents the fraction of compressional waves in the seismic field. Values close to 1 indicate that the field is composed solely of compressional waves, while values close to 0 indicate that the field contains only shear waves. It is known that if both polarizations are present, the NN cancellation capabilities of an array are reduced [2]. In Fig. 3, we quantify how the performance (measured by the residual value) changes with respect to the field composition (value of \(p\)). We examined sensor arrays composed of \(N\) sensors, with \(N\) ranging from 1 to 15 and then 20. Fig. 3 shows that NN cancellation performance strongly depends on the value of \(p\), with the worst cases occurring for values between 0.1 and 0.4. The curve \(R(p)\) is asymmetric due to the fact that compressional waves produce stronger NN (see Eq. 7). This is the reason why at values close to 0 the residual worsens quickly: the array's ability to cancel shear-wave NN is compromised by the presence of compressional waves. Furthermore, for \(N=1\) and 2 with \(p\) closer to 1 or 0, cancellation performance is visibly worse than for other values of \(N\). This is expected since the array is attempting to cancel the NN in three interferometers (three residual functions to be jointly minimized) using fewer than three seismometers. For 3 to 12 seismometers, the shape of the curve depends weakly on the number of sensors and, as one would expect, the residual decreases with an increasing number of sensors. Looking at Fig. 3, \(p\) has an important impact on NN cancellation performance only when a few sensors (per vertex) are deployed. Already with 20 seismometers, the value of \(p\) does not have a strong influence unless it is close to its boundaries. However, it should be kept in mind that the value of \(p\) has an impact on the optimal array configuration. In the NN literature, the value \(p=1/3\) was typically used [8, 2, 1, 10]. This assumption was made because the composition of the seismic field was not known for the sites of interest, and so the ad hoc assumption was made that the energy is equipartitioned among the three polarizations (one compressional mode and two shear modes). The actual value of \(p\) will be different for each site and depend on the types of seismic sources and the local source distribution. An analytical model exists for a diffusive field produced by strong continuous or repeated scattering of waves from distant sources, which establishes a characteristic polarization mix \(p\): \[p=\frac{1}{1+2(\alpha/\beta)^{3}} \tag{23}\] where \(\alpha\) and \(\beta\) represent the velocities of compressional and shear waves, respectively. This relationship was first derived by Weaver in 1982 [14] and holds true in the diffuse regime independently of the details of the scattering processes. Equation 23 was derived for an infinite medium.

Figure 2: Projection of the optimal sensors (see text for more details). _First row_: N=3 and \(p\)=0.2; _Second row_: N=3 and \(p\)=0.1; _Third row_: N=6 and \(p\)=0.2; _Fourth row_: N=12 and \(p\)=0.0; _Fifth row_: N=12 and \(p\)=0.6.
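As a quick numerical illustration, Eq. 23 can be evaluated with the body-wave speeds assumed in Section 4 (these are assumed optimization inputs, not a site measurement):

```python
# Eq. 23 with the body-wave speeds assumed in Section 4 (illustrative values).
alpha, beta = 6.0, 4.0   # P- and S-wave speeds [km/s]
p = 1.0 / (1.0 + 2.0 * (alpha / beta) ** 3)
print(f"diffuse-field compressional fraction p = {p:.3f}")  # p ~ 0.129
```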
Later, the model was extended to the case of a half-space taking into account the contribution of surface Rayleigh waves [11]. In that case the compressional-wave content of the diffusive field is small. However, the model does not apply to our analysis, since seismic waves in the NN band will not have passed through a lasting period of strong scattering that would establish this polarization mix. The polarization mix of underground seismic fields in the NN band will strongly depend on the source properties and distribution. Since the sources of seismic waves at the ET candidate sites are mostly unknown and varying with time (some of them potentially located many kilometers away from the site), the only way to provide an informed estimate of \(p\) is by analyzing data from seismic arrays.

Figure 3: The plot shows how the residual changes varying the value of \(p\) at a fixed number \(N\) of sensors in the array. The instrument signal-to-noise ratio of the seismometers is assumed to be SNR = 15.

### Robustness under small deviations from the optimal configuration

An NN cancellation system requires the deployment of a seismic array in optimal locations and the use of an optimal filter, such as the Wiener Filter (WF), to process the data and cancel the noise. This necessitates two types of optimizations: one related to the spatial characteristics of the seismic field, providing the optimal positions of the array; and another related to the temporal characteristics of the seismic field, providing the estimates of the WF coefficients. If the seismic field is non-stationary, it is possible to update the WF coefficients to ensure that the filter remains optimal over time, provided that the field varies slowly enough for the filter to adapt to the changes [12]. However, the spatial correlations of a non-stationary field can change as well, affecting the optimal array positions. In an underground environment, it is not possible to update the optimal positions of the array. Spatial optimization must be a compromise between possible configurations of the seismic field. In this regard, it is preferable to have a site where seismic field correlations have minimal temporal variations. Another important consideration is that, in practice, the residual produced by an optimal seismic array will be higher than its nominal value, leading to reduced cancellation performance, because it will not be possible to deploy sensors at their exact optimal positions. In Fig. 4, we tested the robustness of an optimal array for a homogeneous and isotropic seismic field to changes in sensor positions by randomly displacing sensors from their optimal positions. Fig. 4 shows how the residual changes: the coloured points represent the residual of the optimal array for a given \(N\) (3, 6 and 13) and for each value of \(p\). However, in a real-world scenario with seismic and geological inhomogeneities, robustness may differ, and whether seismometers can be deployed with a typical offset from optimal locations of 30 m is unclear. Constraints on borehole positions due to surface conditions and infrastructure might well enforce larger offsets. Furthermore, since we run the optimization with the assumption of isotropy, we do not yet know by how much optimal configurations change under realistic time variations of seismic correlations.
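The displacement test behind Fig. 4 is straightforward to reproduce on the toy model sketched earlier. The following is a minimal sketch (not the authors' code) that reuses `residual`, `best` and `N` from the optimization snippet above:

```python
import numpy as np

# Displace each sensor of the (toy) optimal array by a Gaussian offset with
# sigma = 30 m, recompute the residual, and summarize the distribution.
rng = np.random.default_rng(0)
optimal = best.x.reshape(N, 3)

samples = [residual((optimal + rng.normal(0.0, 30.0, optimal.shape)).ravel())
           for _ in range(500)]
print(np.median(samples), np.percentile(samples, [5, 95]))
```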
### Robustness to a change in frequency with respect to the frequency used for the optimization

The optimization is typically performed at a single frequency: the upper plot of Fig. 5 shows the residual calculated at other frequencies, as a histogram indicating how the residual varies when the array is displaced from the optimal position as in the previous section. The optimization was done at 10 Hz with \(p\) = 0.3 for 3, 6 and 13 sensors. At frequencies higher than the optimal one, cancellation performance worsens, and above 15 Hz cancellation becomes almost entirely ineffective. This is expected since the optimization was performed at another frequency. However, the behaviour at lower frequencies is the opposite: for 13 sensors the residual decreases even further at 3 Hz. This can be attributed to the fact that correlations increase at lower frequencies, meaning that the yellow area of Fig. 6 becomes larger. Plotting the residual versus the frequency for a seismic array randomly positioned in the space surrounding an ET vertex reveals that the residual is close to \(R=1\) at higher frequencies and reaches lower values below 15 Hz. This effect is more pronounced when \(p=0\) or 1, because correlations are larger even at higher frequencies. The same behaviour can be observed in the lower plot of Fig. 5, where the residuals of the optimal arrays at different values of \(p\) are shown. As a final note on broadband optimization, this type of optimization is particularly useful for expanding the frequency band where cancellation is most effective. However, it has the side effect of reducing overall performance compared to single-frequency optimization (see Ref. [2]). One way to minimize this effect could be to check how the residual of an array optimized at a frequency \(f_{0}\) changes at other frequencies. Examining the upper plot of Fig. 5, it can be seen that there are some frequencies (other than the optimal one) where local minima occur. Therefore, broadband optimization could be performed at these specific frequencies (which should be larger than \(f_{0}\), since the low-frequency residual is always good due to large correlations). For example, in the case \(p=0.8\) of Fig. 5, optimization could be done at 10, 16.1 and 21.2 Hz, where local minima are clearly visible.

Figure 4: This figure illustrates the robustness of the array. For each value of \(p\), the array was displaced from its optimal configuration and the residual was recalculated. The sensors were moved according to a normal distribution with a mean of 0 m and \(\sigma=30\) m. The coloured dots plotted over the violins indicate their respective best residual found with optimization.

Figure 5: The _upper_ plot shows how the residual changes as the frequency varies at a fixed value of \(p\) = 0.3 with \(N=3\), 6, and 13 (from top to bottom). The colors represent the frequency of occurrence of the residual when the sensors are displaced from their optimal positions by a quantity drawn from a normal distribution with mean 0 m and \(\sigma=30\) m. The _lower_ plot shows how the residual changes as the frequency varies at different values of \(p\) (with 10 sensors, optimized at 10 Hz).

Figure 6: Seismic correlations between the origin point and all other points in the plane. The x-y, y-z and x-z planes are displayed. The ET layout is superimposed to provide a sense of distance. The full red circles represent the TMs lying on the plane, while the black dashed lines represent the ET baseline. The empty red dots represent the projection of the TMs not lying on the plane. Correlations are shown for a frequency of 10 Hz and for \(p\) = 0 (_top_ row), \(p\) = 0.3 (_middle_ row) and \(p\) = 1.0 (_bottom_ row).
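The frequency-sweep step used above to pick candidate frequencies for broadband optimization is easy to sketch on the same toy model (a sketch under the earlier assumptions, not the authors' pipeline), reusing `residual` and `optimal` from the previous snippets:

```python
import numpy as np

# Sweep the toy residual of a fixed array over 1-30 Hz and report the strict
# interior local minima as candidate frequencies for broadband optimization.
freqs = np.linspace(1.0, 30.0, 300)
R = np.array([residual(optimal.ravel(), freq=f) for f in freqs])
is_min = (R[1:-1] < R[:-2]) & (R[1:-1] < R[2:])
print(freqs[1:-1][is_min])
```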
## 6 Conclusions

This paper builds upon previous work reported in Ref. [2] by taking into account the triangular shape of ET to optimize the seismometer array for effective Newtonian-noise cancellation at an entire vertex of the detector. We studied the impact of the seismic field composition on noise cancellation in terms of body-wave polarizations, as well as the robustness of the performance with respect to changes in optimal frequency and positions. The triangular shape of ET entails that the input test masses of an interferometer will be close to the end test masses of two other interferometers. Correlations of Newtonian noise between test masses and seismic correlations were taken into account in our study (see also Ref. [10] for a study on the impact of correlated noise in ET). The optimal arrays calculated in this study minimized the residuals simultaneously in three interferometers. We find that, under the assumption of an isotropic, stationary field, a factor-5 reduction of Newtonian noise in amplitude can be achieved with 20 seismometers, provided that they are located precisely at their ideal positions. A deviation from the optimal positions degrades the performance depending on the field composition and number of sensors. For example, when the sensors of an array with 12 seismometers are randomly displaced from their ideal positions by (typically) 30 m, the residuals are larger by (on average) a factor of 1.45 in amplitude, and performance degrades more strongly when either compressional or shear waves dominate instead of having a more uniform polarization mix. While this study is based on idealized conditions (isotropy, stationarity), which will not be found at the ET candidate sites, it provides important constraints on what can realistically be achieved with Newtonian-noise cancellation. The optimal placement of seismometers in boreholes will be a formidable challenge, since our understanding of seismic correlations underground is very limited, and correlations can vary with time. The work presented here now needs to be connected to more realistic models of the seismic field to estimate more accurately how good and robust the performance of the ET Newtonian-noise cancellation system will be.

## 7 Acknowledgements

This article is based upon work from COST Action CA17137, supported by COST (European Cooperation in Science and Technology). FB and LR thank the experiment next_AIM, which provided computational resources for the analysis. F.B. would like to express her deepest appreciation to Francisco Sanchez-Sesma for the time he dedicated to providing invaluable assistance in finding and understanding the relevant bibliography on the theory of equipartitioning of diffuse seismic fields. F.B. would also like to extend her thanks to Matteo Di Giovanni, who helped in establishing the correct values for the seismic velocities at Sos Enattos, and to Luca Naticchioni for providing valuable insights on the positioning errors that can be made while excavating boreholes. Finally, thanks go also to Giovanni Luca Cardello, who provided clarifications about the seismology and geology of Sos Enattos.
2307.06817
On the distribution of a random variable involved in an independent ratio
In this paper, using inverse integral transforms, we derive the exact distribution of the random variable $X$ that is involved in the ratio $Z \stackrel{d}{=} X/(X+Y)$ where $X$ and $Y$ are independent random variables having the same support, and $Z$ and $Y$ have known distributions. We introduce new distributions this way. As applications of the obtained results, several examples are presented.
Roberto Vila, Narayanaswamy Balakrishnan, Marcelo Bourguignon
2023-07-13T15:30:43Z
http://arxiv.org/abs/2307.06817v1
# On the distribution of a random variable involved in an independent ratio

###### Abstract

In this paper, using inverse integral transforms, we derive the exact distribution of the random variable \(X\) that is involved in the ratio \(Z\stackrel{{ d}}{{=}}X/(X+Y)\) where \(X\) and \(Y\) are independent random variables having the same support, and \(Z\) and \(Y\) have known distributions. We introduce new distributions this way. As applications of the obtained results, several examples are presented.

**Keywords.** Laplace transform \(\cdot\) Inverse Laplace transform \(\cdot\) Generalized Stieltjes transform \(\cdot\) Inverse generalized Stieltjes transform \(\cdot\) Type I ratio.

**Mathematics Subject Classification (2010).** MSC 60E05 \(\cdot\) MSC 62Exx \(\cdot\) MSC 62Fxx.

## 1 Introduction

Let \(X,Y\) and \(Z\) be (absolutely) continuous and positive random variables such that \[Z\stackrel{{ d}}{{=}}\frac{X}{X+Y}, \tag{1.1}\] where \(\stackrel{{ d}}{{=}}\) denotes equality in distribution, and the ratio in (1.1) has support in the unit interval \((0,1)\). The random variable \(Z=X/(X+Y)\) is known in the literature as the type I ratio (Johnson et al., 1995; Bekker et al., 2009). The distributions of ratios of random variables are of interest in many fields (Nadarajah and Kotz, 2006). An important recent example of ratios of random variables is the case fatality rate of Covid-19 (Bourguignon et al., 2022), where \(X\in\mathbb{R}^{+}\) and \(Y\in\mathbb{R}^{+}\) are two random variables representing the number of confirmed Covid-19-related deaths and Covid-19 cases with no death result, respectively. The sum \(X+Y\) represents the number of confirmed Covid-19 cases. It should be noted that stochastic representations are important since they may justify some models arising naturally in real situations, as described above. The distribution of \(Z\) has been studied by several authors, especially when \(X\) and \(Y\) are independent random variables and come from the same family of distributions. To the best of our knowledge, there are no previous works when \(X\) and \(Y\) belong to different families. Malik (1967) and Ahuja (1969) both discussed the case when \(X\) and \(Y\) are independent random variables following gamma distributions with shape parameters \(\alpha>0\) and \(\beta>0\), and the same scale parameter \(\theta\). In a similar way, if \(X\sim\mathrm{Gamma}(\alpha,\theta_{1})\) and \(Y\sim\mathrm{Gamma}(\beta,\theta_{2})\) are independent gamma variables, then \(Z\) is distributed according to a Libby-Novick distribution (Libby and Novick, 1982). For a recent discussion of some extensions of this idea, one may refer to Jones and Balakrishnan (2021). Lijoi et al. (2005) considered the ratio using inverse Gaussian random variables instead of gamma random variables, and termed it the normalized inverse Gaussian distribution. Specifically, the normalized inverse Gaussian distribution is obtained by the stochastic representation (1.1) with \(X\) and \(Y\) being independent inverse Gaussian random variables with scale parameter \(1\) and shape parameters \(\alpha>0\) and \(\beta>0\). Recently, new families of distributions have been introduced for modeling bounded quantities. Some of the bounded distributions in the literature are derived from standard distributions by mathematical transformations like \(Z=\exp(-W)\), \(Z=W/(1+W)\) or \(Z=1/(1+W)\), where \(W\in\mathbb{R}^{+}\).
For example, the transformation \(Z=\mathrm{e}^{-W}\) gives rise to distributions on the unit interval: if \(W\sim\mathrm{Exponentiated-Exponential}(\alpha,\beta)\), \(W\sim\mathrm{Gamma}(\alpha,\beta)\) or \(W\sim\mathrm{Lindley}(\alpha,\beta)\), then \(Z\sim\mathrm{Kumaraswamy}(\alpha,\beta)\) (Kumaraswamy, 1980), \(Z\sim\mathrm{Unit-Gamma}(\alpha,\beta)\) (Grassia, 1977) or \(Z\sim\mathrm{Log-Lindley}(\alpha,\beta)\) (Gomez-Deniz, 2014), respectively, where \(\alpha,\beta>0\). As far as we know, the Kumaraswamy, Unit-Gamma and Log-Lindley distributions, among others, do not have a stochastic representation as in (1.1). The aim of this paper is to propose an easy way of deriving the exact probability density function (PDF) of \(X\) that is involved in the ratio \(Z=X/(X+Y)\) when \(X\) and \(Y\) are independent random variables. The random variables \(X\) and \(Y\) do not need to belong to the same family of distributions. We emphasize here that the techniques used in this work can be slightly modified to determine the distribution of \(X\) in the independent ratio \(Z=X/Y\); for the sake of conciseness, these details are omitted here. Since the stochastic representation in (1.1) may justify some models arising naturally in certain real situations, we can use the proposed approach to find the stochastic representation in (1.1) for several models known in the literature. For example, it allows us to develop an EM-algorithm for estimating the parameters of the distribution of \(Z\). We further propose three new models for bounded data and study them in detail. The rest of this paper proceeds as follows. In Section 2, we develop the main results and study some special cases in detail. Then, some brief closing remarks are made in Section 3.

## 2 Main results and examples

Suppose \(X\) and \(Y\) are independent random variables, and that \(Y\) and \(Z\) have known distributions such that the PDF of \(Y\) admits the following decomposition: \[f_{Y}(sx)=\mathbb{A}(s)\mathbb{B}(x)\mathbb{C}(sx),\quad x,s>0, \tag{2.1}\] where \(\mathbb{A},\mathbb{B}\) and \(\mathbb{C}\) are some positive real functions. Then, the main problem addressed here is in developing mathematical tools for finding the distribution of \(X\) for a wide class of distributions.

**Theorem 1**.: Under the conditions (1.1) and (2.1), if \[\mathbb{C}(x)=\exp(-\lambda x^{\theta}),\quad x>0,\ \lambda,\theta>0, \tag{2.2}\] the density of \(X\) is given by \[f_{X}(x)=\frac{\lambda\theta}{x^{2-\theta}\mathbb{B}(x)}\,\mathcal{L}^{-1} \left\{\frac{1}{\mathbb{A}(s^{1/\theta})(s^{1/\theta}+1)^{2}}\,f_{Z}\left( \frac{1}{s^{1/\theta}+1}\right)\right\}(\lambda x^{\theta}),\] where \(\mathcal{L}^{-1}\) is the inverse Laplace transform.

From here on, in all the examples to follow, we suppose that \(X,Y\) and \(Z\) are related through (1.1), and that \(X\) and \(Y\) are independent.

**Example 1**.: Suppose \(Y\sim\exp(\lambda)\), \(\lambda>0\), and \(Z\sim\text{Kumaraswamy}(a,b)\), \(a>0\), \(b=1,2,\ldots\); that is, \(Z\) has PDF given by \[f_{Z}(z)=abz^{a-1}(1-z^{a})^{b-1},\quad 0<z<1. \tag{2.3}\] Since \(f_{Y}(sx)=\mathbb{A}(s)\mathbb{B}(x)\mathbb{C}(sx)\), with \[\mathbb{A}(s)=1,\quad\mathbb{B}(x)=\lambda\quad\text{and}\quad \mathbb{C}(sx)=\exp(-\lambda sx),\] from Theorem 1 (with \(\theta=1\)), we readily have \[f_{X}(x)=\frac{1}{x}\,\mathcal{L}^{-1}\left\{\frac{1}{(s+1)^{2}}\,f_{Z}\left( \frac{1}{s+1}\right)\right\}(\lambda x). \tag{2.4}\]
On the other hand, a binomial expansion provides \(f_{Z}(z)=ab\sum_{k=0}^{b-1}\binom{b-1}{k}(-1)^{k}z^{a(k+1)-1}\). Using this expansion and the linearity of \(\mathcal{L}^{-1}\), we can write (2.4) as \[f_{X}(x)=ab\,\frac{1}{x}\sum_{k=0}^{b-1}\binom{b-1}{k}(-1)^{k} \mathcal{L}^{-1}\left\{\left(\frac{1}{s+1}\right)^{a(k+1)+1}\right\}(\lambda x). \tag{2.5}\] Now, by employing the well-known formula (see Erdelyi and Bateman, 1954): \[\mathcal{L}^{-1}\{(\alpha s+\beta)^{-p}\}(t)=\frac{1}{\alpha\Gamma(p)}\,\left( \frac{t}{\alpha}\right)^{p-1}\exp\left(-\frac{\beta t}{\alpha}\right),\quad p >0, \tag{2.6}\] the right-hand side of (2.5) is \[f_{X}(x)=ab\,\frac{1}{x}\sum_{k=0}^{b-1}\binom{b-1}{k}(-1)^{k}\,\frac{(\lambda x)^ {a(k+1)}}{\Gamma(1+a(k+1))}\,\exp(-\lambda x).\] Hence, the density of \(X\) in this case is given by \[f_{X}(x)=ab\sum_{k=0}^{b-1}\binom{b-1}{k}(-1)^{k}\,\frac{\lambda^{a(k+1)}x^{a(k+1)-1}}{\Gamma(1+a(k+1))}\,\exp(-\lambda x)=b\sum_{k=0}^{b-1}\binom{b-1}{k}(-1)^{k}\,\frac{1}{k+1}\,f_{T_{k}}(x),\quad x>0,\] where \(T_{k}\sim\mathrm{Gamma}(a(k+1),\lambda)\). Thus, the PDF of \(X\) is a finite sum of weighted gamma distributions.

**Example 2**.: Suppose \(Y\sim\mathrm{Gamma}(\beta,\lambda)\), \(\lambda>0\), and that \(Z\sim\mathrm{Bbeta}(\alpha,\beta,\rho,\delta)\), \(\alpha,\beta>0\), \(\rho\geqslant 0\), \(\delta\in\mathbb{R}\), follows the bimodal beta (Bbeta) distribution (see Vila et al., 2022) with density \[f_{Z}(z)=\frac{\rho+(1-\delta z)^{2}}{KB(\alpha,\beta)}\,z^{\alpha-1}\,(1-z)^ {\beta-1},\quad 0<z<1,\] where \(K=1+\rho-2\delta\alpha/(\alpha+\beta)+\delta^{2}\alpha(\alpha+1)/[(\alpha+ \beta)(\alpha+\beta+1)]\). Since \(f_{Y}(sx)=\mathbb{A}(s)\mathbb{B}(x)\mathbb{C}(sx)\), with \[\mathbb{A}(s)=s^{\beta-1},\quad\mathbb{B}(x)=\frac{\lambda^{\beta}}{\Gamma( \beta)}\,x^{\beta-1}\quad\text{and}\quad\mathbb{C}(sx)=\exp(-\lambda sx),\] from Theorem 1 (with \(\theta=1\)), we readily have \[f_{X}(x)=\frac{\Gamma(\beta)}{\lambda^{\beta-1}x^{\beta}}\,\mathcal{L}^{-1} \left\{\frac{1}{s^{\beta-1}(s+1)^{2}}\,f_{Z}\left(\frac{1}{s+1}\right)\right\} (\lambda x), \tag{2.7}\] where \[\frac{1}{s^{\beta-1}(s+1)^{2}}\,f_{Z}\left(\frac{1}{s+1}\right)=\sum_{k=0}^{2 }\frac{\pi_{k}}{B(\alpha+k,\beta)}\,(s+1)^{-(\alpha+\beta+k)}, \tag{2.8}\] with \(\pi_{0}=(1+\rho)/K\), \(\pi_{1}=-2\alpha\delta/[(\alpha+\beta)K]\) and \(\pi_{2}=\alpha(\alpha+1)\delta^{2}/[(\alpha+\beta)(\alpha+\beta+1)K]\). Note that \(\pi_{0}+\pi_{1}+\pi_{2}=1\). Using (2.8) in (2.7), we obtain \[f_{X}(x)=\frac{\Gamma(\beta)}{\lambda^{\beta-1}x^{\beta}}\,\sum_{k=0}^{2 }\frac{\pi_{k}}{B(\alpha+k,\beta)}\,\mathcal{L}^{-1}\left\{(s+1)^{-(\alpha+ \beta+k)}\right\}(\lambda x)=\sum_{k=0}^{2}\pi_{k}\,\frac{\lambda^{\alpha+k}}{\Gamma(\alpha+k )}\,x^{\alpha+k-1}\exp(-\lambda x),\] where, in the last equality, we have used (2.6). Thus, the PDF of \(X\) in this case can be written as a finite (generalized) mixture of three gamma distributions with different shape parameters.

**Remark 1**.: Notice that the PDF of \(Z\) in Example 2 can be written as \[f_{Z}(z)=\pi_{0}f_{Z_{0}}(z)+\pi_{1}f_{Z_{1}}(z)+\pi_{2}f_{Z_{2}}(z),\] where \(Z_{k}\sim\operatorname{Beta}(\alpha+k,\beta)\), \(k=0,1,2\), and \(\pi_{0},\pi_{1},\pi_{2}\) are as defined in Example 2.
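Before moving on, the derivation in Example 2 can be checked numerically by inverting the transform in Eq. (2.7). The following is a hedged sketch with illustrative parameter values; mpmath's Gaver-Stehfest inversion is used because it evaluates the transform only at real \(s>0\), avoiding branch-cut issues with non-integer powers:

```python
import mpmath as mp

mp.mp.dps = 30
alpha, beta, rho, delta, lam = 1.5, 2.0, 0.5, 1.0, 2.0

K = (1 + rho - 2*delta*alpha/(alpha + beta)
     + delta**2 * alpha*(alpha + 1)/((alpha + beta)*(alpha + beta + 1)))

def f_Z(z):                       # Bbeta density of Example 2
    return ((rho + (1 - delta*z)**2) / (K * mp.beta(alpha, beta))
            * z**(alpha - 1) * (1 - z)**(beta - 1))

def F(s):                         # argument of L^{-1} in Eq. (2.7)
    return f_Z(1/(s + 1)) / (s**(beta - 1) * (s + 1)**2)

pi = [(1 + rho)/K,
      -2*alpha*delta/((alpha + beta)*K),
      alpha*(alpha + 1)*delta**2/((alpha + beta)*(alpha + beta + 1)*K)]

def f_X_closed(x):                # the gamma mixture derived above
    return sum(pi[k] * lam**(alpha + k) * x**(alpha + k - 1)
               * mp.exp(-lam*x) / mp.gamma(alpha + k) for k in range(3))

for x in (0.5, 1.0, 2.0):
    num = (mp.gamma(beta) / (lam**(beta - 1) * x**beta)
           * mp.invertlaplace(F, lam*x, method='stehfest'))
    print(x, num, f_X_closed(x))  # the two columns should agree closely
```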
Upon setting \(\delta=0\) in Example 2, the following well-known result is deduced.

**Example 3** (Malik (1967); Ahuja (1969); Jones and Balakrishnan (2021)).: If \(Y\sim\operatorname{Gamma}(\beta,\lambda)\), \(\lambda>0\), and \(Z\sim\operatorname{Beta}(\alpha,\beta)\), \(\alpha,\beta>0\), then \(X\sim\operatorname{Gamma}(\alpha,\lambda)\).

**Example 4**.: Suppose \(Y\sim\operatorname{Gamma}(\beta,\lambda)\), \(\beta,\lambda>0\), and \(Z\) is a random variable following the Topp-Leone distribution with density \[f_{Z}(z)=2vz^{v-1}(1-z)(2-z)^{v-1},\quad 0<z<1,\ v=1,2,\ldots.\] By taking \(\mathbb{A}(s)\), \(\mathbb{B}(x)\) and \(\mathbb{C}(sx)\) as in Example 2, from Theorem 1 (with \(\theta=1\)), we have the validity of the identity in (2.7), with \(Z\) following the Topp-Leone distribution. Note that this identity can be expressed equivalently as \[f_{X}(x)=\frac{2v\Gamma(\beta)}{\lambda^{\beta-1}x^{\beta}}\,\mathcal{L}^{-1} \left\{\frac{(2s+1)^{v-1}}{s^{\beta-2}(s+1)^{2v+1}}\right\}(\lambda x).\] Using a binomial expansion, the expression on the right-hand side becomes \[\frac{2v\Gamma(\beta)}{\lambda^{\beta-1}x^{\beta}}\sum_{k=0}^{v-1 }\binom{v-1}{k}2^{k}\mathcal{L}^{-1}\left\{\frac{1}{s^{\beta-k-2}(s+1)^{2v+1} }\right\}(\lambda x)\\ =2v\Gamma(\beta)\sum_{k=0}^{v-1}\binom{v-1}{k}2^{k}\lambda^{2v-k- 1}x^{2v-k-2}\,\frac{{}_{1}F_{1}(2v+1;\beta+2v-k-1;-\lambda x)}{\Gamma(\beta+2v-k-1 )},\] where, in the last line, Proposition A.1 has been used. Thus, in this case, the PDF of \(X\) is a finite sum of weighted terms involving confluent hypergeometric functions.

**Example 5**.: Suppose \(Y\sim\exp(\lambda)\), \(\lambda>0\), and \(Z\) is a random variable having the density \[f_{Z}(z)=\frac{\lambda\beta^{c+1}}{(\beta+c)\Gamma(c)}\,\frac{1}{z^{2}}\left\{ \Gamma(c+1)\left[\lambda\left(\frac{1}{z}-1\right)+\beta\right]^{-(c+1)}+ \Gamma(c+2)\left[\lambda\left(\frac{1}{z}-1\right)+\beta\right]^{-(c+2)} \right\},\] where \(0<z<1\) and \(c,\beta>0\). Since \(f_{Y}(sx)=\mathbb{A}(s)\mathbb{B}(x)\mathbb{C}(sx)\), with \[\mathbb{A}(s)=1,\quad\mathbb{B}(x)=\lambda\quad\text{and}\quad \mathbb{C}(sx)=\exp(-\lambda sx),\] from Theorem 1 (with \(\theta=1\)), we readily have \[f_{X}(x) =\frac{1}{x}\,\mathcal{L}^{-1}\left\{\frac{1}{(s+1)^{2}}\,f_{Z} \left(\frac{1}{s+1}\right)\right\}(\lambda x)=\frac{\lambda\beta^{c+1}}{(\beta+c)\Gamma(c)}\,\frac{1}{x}\, \mathcal{L}^{-1}\left\{\Gamma(c+1)(\lambda s+\beta)^{-(c+1)}+\Gamma(c+2)( \lambda s+\beta)^{-(c+2)}\right\}(\lambda x).\] Upon using (2.6), the last equation becomes \[f_{X}(x)=\frac{\beta^{c+1}}{(\beta+c)\Gamma(c)}\,x^{c-1}(1+x)\exp(-\beta x), \quad x>0;\] that is, \(X\) is a random variable having the weighted Lindley distribution (Ghitany et al., 2011).

**Remark 2**.: Note that the density of \(Z\) in Example 5 can be expressed as \[f_{Z}(z)=pf_{T_{0}}(z)+(1-p)f_{T_{1}}(z),\] where \(p=\beta/(\beta+c)\) and \(T_{j}=1/(L_{j}+1)\), \(L_{j}\sim\mathrm{Lomax}(c+j,\beta/\lambda)\), \(j=0,1\). Further, \[\mathbb{E}(T_{j}^{r})=\frac{c+j}{c+j-r+4}\,\left(\frac{\beta}{\lambda}\right)^ {c+j}\,{}_{2}F_{1}\left(c+j+1,c+j-r+4;c+j-r+5;1-\frac{\beta}{\lambda}\right),\] provided \(c+j-r+4>0\).
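Example 3 above is also easy to verify by simulation; a minimal sketch with illustrative parameter values:

```python
import numpy as np
from scipy import stats

# If X ~ Gamma(alpha, lam) and Y ~ Gamma(beta, lam) are independent, then
# Z = X/(X+Y) ~ Beta(alpha, beta) (Example 3).
rng = np.random.default_rng(42)
alpha, beta, lam = 2.5, 4.0, 3.0

x = rng.gamma(shape=alpha, scale=1/lam, size=200_000)
y = rng.gamma(shape=beta, scale=1/lam, size=200_000)
z = x / (x + y)

# Kolmogorov-Smirnov test against the Beta(alpha, beta) CDF: a large p-value
# is consistent with the stated distribution.
print(stats.kstest(z, stats.beta(alpha, beta).cdf))
```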
**Example 6**.: Suppose \(Y\sim\mathrm{GG}(a_{2},d_{2},\theta)\), \(a_{2},d_{2},\theta>0\), is a random variable following the generalized gamma (GG) distribution with density \[f_{Y}(y)=\frac{\theta}{a_{2}^{d_{2}}\Gamma\left(\frac{d_{2}}{\theta}\right)} \,y^{d_{2}-1}\exp\left[-\left(\frac{y}{a_{2}}\right)^{\theta}\right],\quad y >0,\] and \(Z\) is a random variable with density \[f_{Z}(z)=\frac{\theta a_{1}^{d_{2}}a_{2}^{d_{1}}}{B\left(\frac{d_{1}}{\theta},\frac{d_{2}}{\theta}\right)}\,\frac{z^{d_{1}-1}(1-z)^{d_{2}-1}}{\left\{(a_{2 }z)^{\theta}+[a_{1}(1-z)]^{\theta}\right\}^{(d_{1}+d_{2})/\theta}},\ \ \mathrm{where}\ \ 0<z<1,\ a_{1},d_{1}>0. \tag{2.9}\] Since \(f_{Y}(sx)=\mathbb{A}(s)\mathbb{B}(x)\mathbb{C}(sx)\), with \[\mathbb{A}(s)=s^{d_{2}-1},\quad\mathbb{B}(x)=\frac{\theta}{a_{2}^{d_{2}} \Gamma\left(\frac{d_{2}}{\theta}\right)}\,x^{d_{2}-1}\quad\mathrm{and}\quad \mathbb{C}(sx)=\exp\left[-\left(\frac{sx}{a_{2}}\right)^{\theta}\right],\] from Theorem 1 (with \(\lambda=a_{2}^{-\theta}\)), we readily have \[f_{X}(x)=\frac{\lambda\theta}{x^{2-\theta}\mathbb{B}(x)}\,\mathcal{L}^{-1} \left\{\frac{1}{\mathbb{A}(s^{1/\theta})(s^{1/\theta}+1)^{2}}\,f_{Z}\left( \frac{1}{s^{1/\theta}+1}\right)\right\}(\lambda x^{\theta}).\] But \(x^{2-\theta}\mathbb{B}(x)=\theta x^{d_{2}-\theta+1}/[a_{2}^{d_{2}}\Gamma(d_{2} /\theta)]\) and \[\frac{1}{\mathbb{A}(s^{1/\theta})(s^{1/\theta}+1)^{2}}\,f_{Z}\left(\frac{1}{s ^{1/\theta}+1}\right)=\frac{\theta a_{2}^{d_{1}}}{a_{1}^{d_{1}}B\left(\frac{d _{1}}{\theta},\frac{d_{2}}{\theta}\right)}\,\left[\left(\frac{a_{2}}{a_{1}} \right)^{\theta}+s\right]^{-(d_{1}+d_{2})/\theta}.\] Then, by using (2.6), we find \[f_{X}(x)=\frac{\theta}{a_{1}^{d_{1}}\Gamma\left(\frac{d_{1}}{\theta}\right)}\,x^{d_{1}-1}\exp\left[-\left(\frac{x}{a_{1}}\right)^{\theta}\right],\quad x>0;\] that is, \(X\sim\mathrm{GG}(a_{1},d_{1},\theta)\). Note that, by taking \(\theta=1\), the PDF of \(Z\) in (2.9) becomes the Libby-Novick distribution (see Ahmed, 2021; Libby and Novick, 1982). Also, by taking \(d_{1}=d_{2}=\theta\), \(a_{1}=1\) and \(a_{2}=\beta\) in Example 6, we get the following example.

**Example 7**.: When \(Y\sim\mathrm{Weibull}(\theta,\beta)\), \(\theta,\beta>0\), and \(Z\sim\mathrm{UW}2(\theta,\beta)\) is a random variable following a unitary Weibull distribution Type 2 (UW2) (see Reyes et al., 2023) with density \[f_{Z}(z)=\frac{\theta\beta^{\theta}z^{\theta-1}(1-z)^{\theta-1}}{[(\beta z)^{ \theta}+(1-z)^{\theta}]^{2}},\quad 0<z<1,\] then \(X\sim\mathrm{Weibull}(\theta,1)\).

**Theorem 2**.: Under the conditions in (1.1) and (2.1), if \[\mathbb{C}(x)=(p+qx)\exp(-\lambda x),\quad x>0,\ q,\lambda>0,p\geqslant 0, \tag{2.10}\] the density of \(X\) is given by \[f_{X}(x)=\frac{\lambda^{3}}{qx^{(\lambda p/q)+2}\mathbb{B}(x)}\,\int_{0}^{x} \xi^{\lambda p/q}\mathcal{L}^{-1}\left\{\frac{1}{\mathbb{A}\,\left(\frac{s}{ \lambda}\right)(s+\lambda)^{2}}\,f_{Z}\left(\frac{\lambda}{s+\lambda}\right) \right\}(\xi)\mathrm{d}\xi,\] whenever \([x^{(\lambda p/q)+2}\mathbb{B}(x)f_{X}(x)]|_{x=0^{+}}=0\).

**Example 8**.: Suppose \(Y\) is a random variable following the weighted Lindley distribution (see Ghitany et al., 2011) with parameter vector \((b,1)\) and density \[f_{Y}(y)=\frac{1}{(1+b)\Gamma(b)}\,y^{b-1}(1+y)\exp(-y),\quad y>0,\ b>0,\] and \(Z\) is a random variable with density \[f_{Z}(z)=\frac{(a+b+1)}{(1+a)(1+b)B(a,b)}\,z^{a-1}\left(1-z\right)^{b-1}\left[ (a+b)\,z(1-z)+1\right],\quad 0<z<1,\ a>0. \tag{2.11}\]
Since \(f_{Y}(sx)=\mathbb{A}(s)\mathbb{B}(x)\mathbb{C}(sx)\), with \[\mathbb{A}(s)=s^{b-1},\quad\mathbb{B}(x)=\frac{1}{(1+b)\Gamma(b)}\,x^{b-1} \quad\text{and}\quad\mathbb{C}(sx)=(1+sx)\exp(-sx),\] from Theorem 2 (with \(p=q=\lambda=1\)), we get \[f_{X}(x)=\frac{1}{x^{3}\mathbb{B}(x)}\int_{0}^{x}\xi\mathcal{L}^{-1}\left\{\frac {1}{\mathbb{A}(s)(s+1)^{2}}\,f_{Z}\left(\frac{1}{s+1}\right)\right\}(\xi) \mathrm{d}\xi, \tag{2.12}\] provided \([x^{3}\mathbb{B}(x)f_{X}(x)]|_{x=0^{+}}=0\). But \(x^{3}\,\mathbb{B}(x)=x^{b+2}/[(1+b)\Gamma(b)]\) and \[\frac{1}{\mathbb{A}(s)(s+1)^{2}}\,f_{Z}\left(\frac{1}{s+1}\right) =\frac{(a+b+1)}{(1+a)(1+b)B(a,b)}\left[\frac{1}{(s+1)^{a+b}}+(a+b )\,\frac{s}{(s+1)^{a+b+2}}\right]=\frac{(a+b+1)}{(1+a)(1+b)B(a,b)}\left[\frac{1}{(s+1)^{a+b}}+(a+b )s\mathcal{L}\left\{\frac{\xi^{a+b+1}\exp(-\xi)}{\Gamma(a+b+2)}\right\}(s) \right],\] where, in the last line, we have used (2.6). Now, by applying the inverse Laplace transform on both sides of the above equality and using (2.6) together with property (B.2), we obtain \[\mathcal{L}^{-1}\left\{\frac{1}{\mathbb{A}(s)(s+1)^{2}}\,f _{Z}\left(\frac{1}{s+1}\right)\right\}(\xi)=\frac{(a+b+1)}{(1+a)(1+b)B(a,b)}\left[\mathcal{L}^{-1}\left\{(s+ 1)^{-(a+b)}\right\}(\xi)+(a+b)\mathcal{L}^{-1}\left\{s\mathcal{L}\left\{\frac {\xi^{a+b+1}\exp(-\xi)}{\Gamma(a+b+2)}\right\}(s)\right\}(\xi)\right]=\frac{1}{(1+a)(1+b)\Gamma(a)\Gamma(b)}\,\xi^{a+b-1}[(a+b+1)+(a+b +1)\xi-\xi^{2}]\exp(-\xi).\] Using the above identities in (2.12), we get \[f_{X}(x) =\frac{1}{(1+a)\Gamma(a)}\,\frac{1}{x^{b+2}}\,\int_{0}^{x}\xi^{a+ b}[(a+b+1)+(a+b+1)\xi-\xi^{2}]\exp(-\xi)\mathrm{d}\xi=\frac{1}{(1+a)\Gamma(a)}\,x^{a-1}(1+x)\exp(-x),\] whenever \([x^{3}\mathbb{B}(x)f_{X}(x)]|_{x=0^{+}}=0\). Note that this condition is equivalent to \([x^{b+2}f_{X}(x)]|_{x=0^{+}}=0\), which is satisfied if we consider \(f_{X}\) as in the above equation. Then, we conclude that \(X\) has the weighted Lindley distribution with parameter vector \((a,1)\).

**Remark 3**.: A simple observation shows that the density of \(Z\) in (2.11) can be written as a mixture of beta distributions as \[f_{Z}(z)=pf_{Z_{0}}(z)+(1-p)f_{Z_{1}}(z),\] where \(p=(a+b+1)/[(a+1)(b+1)]\) and \(Z_{j}\sim\mathrm{Beta}(a+j,b+j)\), \(j=0,1\).

**Theorem 3**.: Under the conditions in (1.1) and (2.1), if \[\mathbb{C}(x)=(1+\theta x)^{-p},\quad x>0,\ \theta,p>0, \tag{2.13}\] the density of \(X\) is given by \[f_{X}(x)=\frac{1}{x\mathbb{B}(x)}\,\mathcal{G}_{p}^{-1}\left\{\frac{1}{s^{p} \mathbb{A}\,\left(\frac{1}{\theta s}\right)}\left(\frac{\theta s}{1+\theta s} \right)^{2}f_{Z}\left(\frac{\theta s}{1+\theta s}\right)\right\}(x),\] where \(\mathcal{G}_{p}^{-1}\) is the inverse of the generalized Stieltjes transform \(\mathcal{G}_{p}\) (see Schwarz, 2005).
**Example 9**.: Suppose \(Y\) is a random variable having the generalized beta-prime distribution with shape parameters \(\alpha_{2},\beta_{2}>0\), scale parameter \(\lambda_{2}>0\) and density \[f_{Y}(y)=\frac{\lambda_{2}^{\alpha_{2}}y^{\alpha_{2}-1}(1+\lambda_{2}y)^{-( \alpha_{2}+\beta_{2})}}{B(\alpha_{2},\beta_{2})},\quad y>0,\] and \(Z\) has density \[f_{Z}(z)=K\,\frac{z^{\alpha_{1}-1}}{(1-z)^{\alpha_{1}+1}}\,{}_{2}F_{1}\left( \alpha_{1}+\beta_{1},\alpha_{1}+\alpha_{2};\alpha_{1}+\alpha_{2}+\beta_{1}+ \beta_{2};1-\frac{\lambda_{1}}{\lambda_{2}}\left(\frac{z}{1-z}\right)\right), \quad 0<z<1, \tag{2.14}\] where \(K=B(\alpha_{1}+\alpha_{2},\beta_{1}+\beta_{2})\lambda_{1}^{\alpha_{1}}/[B( \alpha_{1},\beta_{1})B(\alpha_{2},\beta_{2})\lambda_{2}^{\alpha_{1}}]\) and \(\alpha_{1},\beta_{1},\lambda_{1}>0\). Observe that \(f_{Y}(sx)=\mathbb{A}(s)\mathbb{B}(x)\mathbb{C}(sx)\), with \[\mathbb{A}(s)=s^{\alpha_{2}-1},\quad\mathbb{B}(x)=\frac{\lambda_{2}^{\alpha _{2}}}{B(\alpha_{2},\beta_{2})}\,x^{\alpha_{2}-1}\quad\text{and}\quad\mathbb{ C}(sx)=(1+\lambda_{2}sx)^{-(\alpha_{2}+\beta_{2})}.\] As \(x\mathbb{B}(x)=\lambda_{2}^{\alpha_{2}}x^{\alpha_{2}}/B(\alpha_{2},\beta_{2})\) and (for \(p=\alpha_{2}+\beta_{2}\) and \(\theta=\lambda_{2}\)) \[\frac{1}{s^{p}\mathbb{A}\left(\frac{1}{\theta s}\right)}\left( \frac{\theta s}{1+\theta s}\right)^{2}f_{Z}\left(\frac{\theta s}{1+\theta s}\right)=\frac{B(\alpha_{1}+\alpha_{2},\beta_{1}+\beta_{2}) \lambda_{1}^{\alpha_{1}}\lambda_{2}^{\alpha_{2}}}{B(\alpha_{1},\beta_{1})B( \alpha_{2},\beta_{2})}\,s^{\alpha_{1}-\beta_{2}}\,{}_{2}F_{1}\left(\alpha_{1}+ \beta_{1},\alpha_{1}+\alpha_{2};\alpha_{1}+\alpha_{2}+\beta_{1}+\beta_{2};1- \lambda_{1}s\right),\] from Theorem 3 (with \(p=\alpha_{2}+\beta_{2}\) and \(\theta=\lambda_{2}\)), we readily obtain \[f_{X}(x)=\frac{B(\alpha_{1}+\alpha_{2},\beta_{1}+\beta_{2})\lambda_{1}^{\alpha _{1}}}{B(\alpha_{1},\beta_{1})x^{\alpha_{2}}}\,\mathcal{G}_{p}^{-1}\left\{s^{ \alpha_{1}-\beta_{2}}\,{}_{2}F_{1}\left(\alpha_{1}+\beta_{1},\alpha_{1}+\alpha _{2};\alpha_{1}+\alpha_{2}+\beta_{1}+\beta_{2};1-\lambda_{1}s\right)\right\}(x). \tag{2.15}\] By using the known formula (see Erdelyi and Bateman, 1954, p. 233) \[\mathcal{G}_{\rho}\left\{x^{\nu-1}(a+x)^{-\mu}\right\}(y)=\frac{\Gamma(\nu) \Gamma(\mu-\nu+\rho)}{\Gamma(\mu+\rho)a^{\mu}}\,y^{\nu-\rho}\,{}_{2}F_{1} \left(\mu,\nu;\mu+\rho;1-\frac{y}{a}\right),\quad\rho>\nu-\mu,\] with \(\mu=\alpha_{1}+\beta_{1}\), \(\nu=\alpha_{1}+\alpha_{2}\), \(a=1/\lambda_{1}\), \(\rho=p=\alpha_{2}+\beta_{2}\) and \(y=s\), we write the argument of \(\mathcal{G}_{p}^{-1}\) in (2.15) as \[s^{\alpha_{1}-\beta_{2}}\,{}_{2}F_{1}\left(\alpha_{1}+\beta_{1}, \alpha_{1}+\alpha_{2};\alpha_{1}+\alpha_{2}+\beta_{1}+\beta_{2};1-\lambda_{1}s\right) =\frac{\Gamma(\alpha_{1}+\beta_{1}+\alpha_{2}+\beta_{2})}{\Gamma( \alpha_{1}+\alpha_{2})\Gamma(\beta_{1}+\beta_{2})\lambda_{1}^{\alpha_{1}+ \beta_{1}}}\,\mathcal{G}_{p}\left\{x^{\alpha_{1}+\alpha_{2}-1}\left(\frac{1}{ \lambda_{1}}+x\right)^{-(\alpha_{1}+\beta_{1})}\right\}(s). \tag{2.16}\] Then, by combining (2.15) and (2.16), we obtain \[f_{X}(x)=\frac{\lambda_{1}^{\alpha_{1}}x^{\alpha_{1}-1}\left(1+\lambda_{1}x \right)^{-(\alpha_{1}+\beta_{1})}}{B(\alpha_{1},\beta_{1})};\] that is, \(X\) follows the generalized beta-prime distribution with shape parameters \(\alpha_{1},\beta_{1}\) and scale parameter \(\lambda_{1}\).

**Remark 4**.: For two independent variables \(X\) and \(Y\) having generalized beta-prime distributions, the density \(f_{Z}\) equivalent to (2.14) was determined earlier by Bekker et al. (2009).
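The classical Stieltjes-transform identity invoked in Example 9 can itself be checked numerically by direct quadrature; a minimal sketch with illustrative parameter values chosen to satisfy \(\rho>\nu-\mu\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

# Check G_rho{x^{nu-1}(a+x)^{-mu}}(y) against its closed form (Erdelyi and
# Bateman, 1954, p. 233), where G_rho{f}(y) = int_0^inf f(x)(x+y)^{-rho} dx.
mu, nu, a, rho, y = 3.0, 1.5, 2.0, 2.5, 0.7

lhs, _ = quad(lambda x: x**(nu - 1) * (a + x)**(-mu) * (x + y)**(-rho),
              0, np.inf)
rhs = (gamma(nu) * gamma(mu - nu + rho) / (gamma(mu + rho) * a**mu)
       * y**(nu - rho) * hyp2f1(mu, nu, mu + rho, 1 - y / a))
print(lhs, rhs)   # the two values should agree to quadrature accuracy
```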
By combining the formula (23) of Schwarz (2005) with Theorem 3, the following result is obtained.

**Corollary 2.1**.: Under the conditions in (1.1) and (2.1), if \(\mathbb{C}(x)\) is as in (2.13), then the density of \(X\) is given by \[f_{X}(x)=\frac{\Gamma(p)}{x\mathbb{B}(x)}\,\mathcal{L}^{-1}\left\{t^{1-p} \mathcal{L}^{-1}\left\{\frac{1}{s^{p}\mathbb{A}\left(\frac{1}{\theta s} \right)}\left(\frac{\theta s}{1+\theta s}\right)^{2}f_{Z}\left(\frac{ \theta s}{1+\theta s}\right)\right\}(t)\right\}(x).\] Corollary 2.1 provides an alternative formula in case the inverse generalized Stieltjes transform of Theorem 3 is difficult to find.

## 3 Concluding Remarks

In the recent literature concerning bounded models, many papers have assumed a known distribution for \(Z\), but usually it is difficult to understand how \(X\) and \(Y\) were obtained. In this paper, a new technique based on inverse integral transforms is suggested for finding the exact distribution of the random variable \(X\) that is involved in the independent ratio \(Z=X/(X+Y)\). This procedure has been discussed in detail for some cases and illustrated with many known as well as some new (see Examples 5, 6 and 8) bounded models.

**Acknowledgements** Roberto Vila gratefully acknowledges financial support from CNPq, CAPES and FAP-DF, Brazil. Marcelo Bourguignon gratefully acknowledges partial financial support of the Brazilian agency Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq: grant 304140/2021-0).

**Disclosure statement** There are no conflicts of interest to disclose.
2303.05631
A $k$-medoids Approach to Exploring Districting Plans
Researchers and legislators alike continue the search for methods of drawing fair districting plans. A districting plan is a partition of a state's subdivisions (e.g. counties, voting precincts, etc.). By modeling these districting plans as graphs, they are easier to create, store, and operate on. Since graph partitioning with balancing populations is a computationally intractable (NP-hard) problem most attempts to solve them use heuristic methods. In this paper, we present a variant on the $k$-medoids algorithm where, given a set of initial medoids, we find a partition of the graph's vertices to admit a districting plan. We first use the $k$-medoids process to find an initial districting plan, then use local search strategies to fine-tune the results, such as reducing population imbalances between districts. We also experiment with coarsening the graph to work with fewer vertices. The algorithm is tested on Iowa and Florida using 2010 census data to evaluate the effectiveness.
Jared Grove, Suely Oliveira, Anthony Pizzimenti, David E. Stewart
2023-03-10T00:17:59Z
http://arxiv.org/abs/2303.05631v1
# A \(k\)-medoids Approach to Exploring Districting Plans

###### Abstract

Researchers and legislators alike continue the search for methods of drawing fair districting plans. A districting plan is a partition of a state's subdivisions (e.g. counties, voting precincts, etc.). By modeling these districting plans as graphs, they are easier to create, store, and operate on. Since graph partitioning with balanced populations is a computationally intractable (NP-hard) problem, most attempts to solve it use heuristic methods. In this paper, we present a variant on the \(k\)-medoids algorithm where, given a set of initial medoids, we find a partition of the graph's vertices to admit a districting plan. We first use the \(k\)-medoids process to find an initial districting plan, then use local search strategies to fine-tune the results, such as reducing population imbalances between districts. We also experiment with coarsening the graph to work with fewer vertices. The algorithm is tested on Iowa and Florida using 2010 census data to evaluate its effectiveness.

Redistricting, Gerrymandering, Algorithms, Computer Districting

**Statements and Declarations** We wish to acknowledge the support of the National Science Foundation for Anthony Pizzimenti through their grant NSF-1407216.

## 1 Introduction

### Districting Plans

Every 10 years the United States goes through a reapportionment process, where states may gain or lose seats in the House of Representatives based on population changes. States with more than one representative may need to redistrict, or redraw the lines for their congressional districts, based on how the populations have shifted within the state or because the number of representatives for that state changed. When a state redistricts, lines must be drawn to divide the state into \(k\) districts, one for each Congressional seat. This can be represented mathematically by using a graph or network structure, where the state is represented by a graph \(G=(V,E)\), where \(V\) is the set of vertices and \(E\) is the set of edges. Each state is composed of many voter tabulation districts (VTDs) that need to be assigned to districts, and some states require that larger existing political structures remain intact. For example, Iowa requires that counties are not split apart. We will refer to the basic building blocks of a districting plan as tabulation blocks (TBs), which can represent counties, VTDs, precincts, or any combination of them. If there are \(N\) TBs, they will be represented in \(G\) as vertices \(\{\,v_{i}\mid i=1,\ldots,N\,\}=V\). The edges in \(G\), \(\{v_{i},\,v_{j}\}\in E\), will represent a shared geographical border between vertices \(v_{i}\) and \(v_{j}\). As an example of county-level TBs, the state map and graphical representation of Iowa can be seen in Figure 1. To build a districting plan, each vertex \(v_{j}\) needs to be assigned to a district \(v_{j}\in D_{i}\) in the districting plan \(\{\,D_{i}\mid i=1,\ldots,k\,\}\), which is a partition of the TBs. The TBs must be assigned to districts such that the districts fit defined criteria for contiguity, compactness, and population equality. The contiguity criterion requires a district to be a single connected region without any holes. The compactness criterion is less strictly defined, in that a district is considered compact when it covers the smallest possible geographic region.
This is often interpreted to mean that a district should be as close to square or circular as possible. A variety of compactness measures have been used previously, including Polsby-Popper [1], Reock [2], Schwartzberg [3], Convex Hull [4], X-Symmetry [4], and Length-Width Ratio [5]. All listed measures return a compactness value between 0 and 1, with 1 being considered compact and 0 being not compact. Non-compact districts are often long and winding with a bizarre shape [6], and are often rejected because non-compactness is a hallmark of gerrymandered districts. In this work we will be using the Convex Hull as our compactness measure, as it has been used in many court cases [7] and, as shown in [8], is less affected by the choice of map projection. The Convex Hull measure is defined by the ratio of district area to the area of the minimum convex polygon bounding the district. We will report the compactness value as the mean of the compactness values of each district in the districting plan.

Figure 1: County map of Iowa given in (1a) and graphical representation of Iowa given in (1b).

Lastly, population equality requires that each district has approximately equal populations. Thus an ideal district would have a population equal to the total population of the state divided by the number of districts \(k\): \[\mathit{Pop}^{*}=\frac{1}{k}\sum_{i=1}^{N}\mathit{Pop}(v_{i}), \tag{1}\] where \(\mathit{Pop}(v_{i})\) is the population of the TB represented by vertex \(v_{i}\). In order to evaluate a districting plan we will compare the population of each district, \(\mathit{Pop}(D_{i})=\sum_{j:v_{j}\in D_{i}}\mathit{Pop}(v_{j})\), to \(\mathit{Pop}^{*}\). The deviation of a districting plan will be the maximum absolute percent deviation of the districts: \[\mathit{Dev}=\max_{i\in\{1,\ldots,k\}}\left(\frac{|\mathit{Pop}(D_{i})- \mathit{Pop}^{*}|}{\mathit{Pop}^{*}}\cdot 100\%\right) \tag{2}\] We aim to create an algorithm to generate districting plans with \(\mathit{Dev}<1\), which is equivalent to a districting plan that has a maximum population deviation of less than 1%. We will do this using only the geographic and population data.

### Gerrymandering

While redistricting is a necessary process that can allow the people to be properly represented in Congress, it can also be manipulated by the agents that draw the lines. This is known as gerrymandering and can be done to retain political power, take power away from opponents, or to further various other political goals. In the worst cases, political officials are essentially able to select their voters, rather than voters selecting their officials. The most obvious gerrymanders are often identified by the bizarreness of their shape [6], for example long narrow districts that wind throughout the state. These types of districts are often easily identifiable using compactness measures. However, some gerrymanders are much less obvious to the eye and can be much harder to identify. To avoid this problem various algorithms have been proposed to create districting plans, effectively removing politicians and their agents from the redistricting process. However, algorithm-based districting does not necessarily create politically neutral districts, as the geographic and population data can correlate with political affiliation. Generating districting plans using methods that reduce political and other bias can also help identify gerrymandering [9].
A gerrymandered districting plan would have political outcomes (that is, election outcomes) well outside the range of election outcomes for districting plans created without political bias. Generating large numbers of districting plans without political bias, and then determining the expected election outcomes for these generated plans, shows the range of outcomes to be expected where there is no gerrymander. Expected election outcomes for a given districting plan well outside this range then become strong evidence of a gerrymander.

### Other Approaches

The majority of computer-based districting approaches are either sampling- or optimization-based methods. This is because many of the formulations for this type of problem have been shown to be NP-Complete (Subset Sum [10]). Sampling-based methods are generally concerned with generating many plans quickly. Often these are flood-fill or multi-kernel methods that start by randomly selecting several initial centers to iteratively build districts from. Typically these will completely build one district, then the second, and so on until the proper number of districts has been built. This process has been discussed and implemented many times [11; 12; 13; 14]. A potential issue with this method is that it can create enclaves, groups of TBs that cannot be added to any district, and implementations have to monitor this and will restart if enclaves appear. A further variation of the basic sampling method is mentioned in [15], growing the districts simultaneously instead of sequentially. This should keep the shape of the districts more consistent compared to the sequential version. A similar method [7] considers each TB as its own district and combines them until the correct number of districts is reached. The second family of approaches is based on optimization, and seeks to find the best plan under some objective function. These can be roughly subdivided into two more categories: clustering and heuristics. Clustering algorithms typically follow a \(k\)-means or warehouse location approach to assign TBs to cluster centers while minimizing the sum of squared distances from the TBs to the assigned cluster center. Several approaches using these techniques can be found in [16; 17; 18; 19]. When following a clustering approach, special care needs to be taken to ensure that the dividing lines do not break up the TBs that make up the state; some of these legal technicalities are ignored by certain implementations of these methods (see, for example, [17; 18]). Heuristic algorithms often involve using a mixed integer linear program to model districting and find an optimal solution according to some criteria, often minimizing population deviation. They can be solved with various local searches [20; 21; 22], tabu search [22; 23], genetic algorithms [24], simulated annealing [22; 25], and many more heuristic approaches. While these can do a great job of finding plans according to their objective function, there is always the issue of agreeing on what the plans should optimize for: should they be focused on minimizing cut edges in the graph, getting the populations exactly equal, or some other criterion? A third type of algorithm that seems to have less representation in the literature uses random walks over the set of districting plans [25; 26]. Random walks move from one districting plan to another in an attempt to understand what the solution space of districting plans looks like.
These types of methods need many good-quality initial seeds in order to effectively sample the solution space. Combining them with either the sampling or optimization methods can provide these random walk methods with the starting points they need to explore the space [26].

### Our Approach

We use a \(k\)-medoids algorithm to approach this problem. It is a sampling method that borrows ideas from some of the optimization methods. We begin from the graph representation of the state and follow the multi-kernel procedure mentioned in Section 1.3. However, instead of following the sequential building pattern used by most implementations, we loosely follow an alternating building pattern mentioned in [15]. The basic idea of the \(k\)-medoids approach is to start with \(k\) TBs as initial district centers, then alternate between assigning TBs to the nearest centers and computing the centers of districts. It is similar to the \(k\)-means method for clustering [16; 17; 18; 19]. However, instead of using the physical distances between units, we use graph-based distances. Furthermore, in \(k\)-medoids, the centers, or medoids, are required to be vertices on the graph as opposed to any point in space. Following the \(k\)-medoids process we move into a local search phase. We use the traditional local search strategies, flip and swap neighborhoods, along with a tabu criterion to fine-tune solutions. In states with many TBs we also implement a coarsening strategy to work with fewer TBs, then follow an uncoarsening schedule to allow for more fine-tuning as better solutions are found. To the authors' knowledge, the first publication of this coarsening/uncoarsening strategy for districting plans is Magkeby and Mosessoon [27]. We explore the implemented algorithms in more detail in Section 2.

## 2 Algorithms

The base of the algorithm described here is a \(k\)-medoids algorithm, which alternates between assigning points to clusters based on proximity to a medoid and computing the medoids of the clusters. However, in the most basic form, \(k\)-medoids algorithms fail to create districts with equal populations. Several additional steps are added to the basic \(k\)-medoids framework to account for this issue and improve overall performance. Before describing the algorithm as a whole, it is first useful to walk through these additional steps. These include various forms of local search as well as graph coarsening and uncoarsening.

### Local Search

With the \(k\)-medoids process used here, deviation does not strictly decrease, so finding a districting plan with a low deviation is not guaranteed. To increase the likelihood of finding a good solution we look to fine-tune the output of the \(k\)-medoids algorithm by making small changes to the resultant districting plans. These small changes can be thought of as finding neighboring districting plans. Methods of identifying neighboring districting plans are known as neighborhood functions. We consider three types of neighborhood functions: Flip, Swap, and a Combination Search (CMB). The Flip neighborhood function generates all districting plans that have exactly one TB assigned to a different district. The Swap neighborhood function generates all districting plans that have exactly two TBs that have mutually changed districts. The CMB neighborhood function generates all districting plans that are in either the Flip neighborhood or the Swap neighborhood. In general, we will abbreviate an arbitrary neighborhood function as \(\mathit{NF}\).
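Throughout the local search, candidate plans are compared via the population quantities in Eqs. (1) and (2). The following is a minimal sketch of those two quantities with assumed data structures (`pop` maps vertices to populations; a plan is a list of vertex sets), not the authors' implementation:

```python
def deviation(districts, pop):
    """Maximum absolute percent deviation over all districts (Eq. 2)."""
    target = sum(pop.values()) / len(districts)   # Pop* from Eq. (1)
    return max(abs(sum(pop[v] for v in D) - target) / target * 100
               for D in districts)

# Example: two districts over four TBs
pop = {1: 100, 2: 120, 3: 95, 4: 105}
print(deviation([{1, 2}, {3, 4}], pop))   # -> 4.76...
```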
An issue that arises with these neighborhood functions is that the number of neighboring districting plans generated can be very large, making it infeasible to enumerate them and find the neighboring districting plan that would have the lowest deviation. To get around this, we only generate neighboring districting plans according to specific criteria. From the initial districting plan we consider all pairs of neighboring districts \(\{D_{m},D_{\ell}\}\) with \(\mathit{Pop}(D_{m})>\mathit{Pop}(D_{\ell})\). From these we find the pair of districts with the largest population disparity. We use these districts differently for each method:

* Find the set of all vertices \(v_{i}\in D_{m}\) that share a geographical border with any vertex in \(D_{\ell}\) and that are not articulation points of the graph of \(D_{m}\); articulation points are vertices that, when removed from a graph, increase the number of connected components. This will be stored as \(\mathit{FLIP}=\{\,(v_{i},v_{-1})\,\}\), where \(v_{-1}\) is a dummy vertex with \(\mathit{Pop}(v_{-1})=0\).
* Find the set of all pairs of vertices \(\mathit{SWAP}=\{\,(v_{i},v_{j})\mid v_{i}\in D_{m},\,v_{j}\in D_{\ell}\,\}\) such that neither \(v_{i}\) nor \(v_{j}\) are articulation points of their respective districts, \(v_{i}\) shares a geographical border with a vertex in \(D_{\ell}\backslash\{v_{j}\}\), and \(v_{j}\) shares a geographical border with a vertex in \(D_{m}\backslash\{v_{i}\}\).
* \(\mathit{CMB}=\mathit{FLIP}\,\cup\mathit{SWAP}\). The set of pairs of vertices that satisfy either Flip or Swap.

The sets of pairs of vertices derived from the above rules represent the districting plans generated according to the corresponding neighborhood function, which we will denote \(\mathit{NH}\). If no districting plans were generated, we select the pair of neighboring districts with the next highest population disparity until there is at least one districting plan generated. Once a districting plan is generated, the local search considers all generated districting plans and selects the one with the minimum population disparity. This is done by finding \[(v_{i}^{*},v_{j}^{*})=\operatorname*{arg\,min}_{(v_{i},v_{j})\in\mathit{NH}}\max\left(\left|\frac{\left[\mathit{Pop}(D_{m})-\mathit{Pop}(v_{i})+\mathit{Pop}(v_{j})\right]-\mathit{Pop}^{*}}{\mathit{Pop}^{*}}\right|,\ \left|\frac{\left[\mathit{Pop}(D_{\ell})+\mathit{Pop}(v_{i})-\mathit{Pop}(v_{j})\right]-\mathit{Pop}^{*}}{\mathit{Pop}^{*}}\right|\right) \tag{3}\] This pair of vertices represents the districting plan with the minimum population disparity. To construct the districting plan we move \(v_{i}^{*}\) to \(D_{\ell}\) and \(v_{j}^{*}\) to \(D_{m}\), if \(j\) is not the dummy vertex \(-1\). This can be seen in Figure 2 for Flip and Figure 3 for Swap. Starting from this new districting plan, we repeat the process of finding the pair of neighboring districts with the largest disparity, generating neighboring districting plans, finding (\(v_{i}^{*}\), \(v_{j}^{*}\)) according to Equation (3), and changing districting plans until a districting plan with \(\mathit{Dev}<1\) is found or until \(\mathit{LI}\) iterations of local search are reached, where \(\mathit{LI}\) is a user-defined maximum number of allowed local search iterations. We noticed that these neighborhood functions could cycle one TB between a few districts, getting stuck in a loop. To get around this we also included a tabu list in the search process, with tabu tenure set as \(0.1\times\mathit{LI}\): after each successful neighborhood move, the move is stored in the tabu list and cannot be repeated for \(0.1\times\mathit{LI}\) iterations.
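A minimal sketch of one Flip evaluation under Eq. (3), using networkx for the articulation-point test (assumed data structures, not the authors' code):

```python
import networkx as nx

# Evaluate Eq. (3) over the Flip candidates for a pair (D_m, D_l) of
# neighboring districts. `G` is the TB adjacency graph, `pop` maps vertices
# to populations, and `target` is Pop*.
def best_flip(G, D_m, D_l, pop, target):
    cut_points = set(nx.articulation_points(G.subgraph(D_m)))
    candidates = [v for v in D_m
                  if v not in cut_points and any(u in D_l for u in G[v])]
    pop_m = sum(pop[v] for v in D_m)
    pop_l = sum(pop[v] for v in D_l)

    def score(v):   # Eq. (3) with v_j the dummy vertex of population 0
        return max(abs(pop_m - pop[v] - target),
                   abs(pop_l + pop[v] - target)) / target

    return min(candidates, key=score, default=None)
```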
We noticed that these neighborhood functions could cycle one TB between a few districts, getting stuck in a loop. To get around this we also included a tabu list in the search process, with the tabu tenure set to \(0.1\times\mathit{LI}\). This means that after each successful neighborhood move is made, we store the move in the tabu list and cannot repeat it for \(0.1\times\mathit{LI}\) iterations. The entire local search process is presented in Algorithm 1.

### Coarsening and Uncoarsening

In large states the number of TBs can be substantial; with Florida there are 67 counties and 9435 VTDs. The original graph has \(N\) vertices to represent the TBs. The user can provide a parameter \(uc_{0}\in(0,1]\) in order to coarsen \(G\) to a graph \(G_{0}\) which has \(\lfloor\,uc_{0}N\,\rfloor\) vertices. To reduce the number of vertices, the coarsening process randomly selects two neighboring vertices. If the sum of the populations of these two vertices is less than \(\mathit{Pop}^{*}\), the two neighboring (parent) vertices are combined, resulting in a new (child) vertex. The child vertex inherits the union of the parent vertices' edges, so that the child vertex has the same neighbors as the parent vertices, and its population is the sum of the parents' populations. As we coarsen the graph, we store the parent vertices, their populations, their edges (or neighbors), and the order in which they are combined, to ensure we can reconstruct the original graph \(G\). This coarsening process is repeated until only \(\lfloor\,uc_{0}N\,\rfloor\) vertices remain in the graph. The process can be seen in Figure 4.

In order to rebuild the original graph \(G\) from a coarsened graph \(G_{0}\) we use the following uncoarsening process. First we find the most recently created child vertex, remove any edges in \(G_{0}\) that connect to the child vertex, and then remove the child vertex from \(G_{0}\). We split the child vertex into its parent vertices and add both parent vertices, along with any edges that the parent vertices were part of, back into \(G_{0}\). We ensure that the parent vertices retain their original populations and that both inherit the district assignment of the child vertex. We repeat this until the graph has \(N\) vertices again; this is the original graph \(G\).

While the coarsening process can help speed up the \(k\)-medoids process by working with fewer vertices, it has the trade-off of working with larger blocks and does not allow very precise fine-tuning, especially in the local search process. To allow for more fine-tuning, instead of a single value \(uc_{0}\), the user can provide an uncoarsening schedule with \(q\) uncoarsening steps

\[UC:0<uc_{0}<uc_{1}<\ldots<uc_{q-1}<uc_{q}=1.\]

This will first coarsen \(G\) to \(G_{0}\) with \(\lfloor\,uc_{0}N\,\rfloor\) vertices. Then, instead of uncoarsening directly to \(G\), we uncoarsen to \(G_{1}\), a graph with \(\lfloor\,uc_{1}N\,\rfloor\) vertices. At these partially coarsened steps the local search procedures can be applied to allow for more fine-tuning of the solutions. We continue this process of uncoarsening graph \(G_{i}\) to \(G_{i+1}\) for \(i\in\{0,\ldots,q-1\}\) and performing a local search until \(G_{q}\) is reached. Since \(G_{q}\) is the same as \(G\), the original graph will have been reconstructed. One final local search is run and the uncoarsening process is complete. This process can be seen in Figure 5. Since coarsening may not be needed in every situation, the trivial uncoarsening schedule

\[UC0:1\]

can be used. This has \(uc_{0}=uc_{q}=1\), so the graph will be "coarsened" to \(\lfloor 1\times N\rfloor=N\) vertices, which is equivalent to not coarsening.

Figure 4: Coarsening Example with \(uc_{0}=0.6\). Beginning with graph \(G\), which has \(N=10\) vertices (4a), and \(uc_{0}=0.6\), coarsen to \(G_{0}\) with \(\lfloor\,uc_{0}N\,\rfloor=6\) vertices. Randomly select pairs of neighboring vertices and combine them until there are only \(6\) vertices remaining. First vertices \(v_{4}\) and \(v_{5}\) are combined to form child vertex \(c_{1}\) (4b), then a second pair of vertices is combined to form child vertex \(c_{2}\) (4c), then vertices \(c_{1}\) and \(v_{9}\) are combined to form child vertex \(c_{3}\) (4d), and lastly \(v_{7}\) and \(v_{10}\) are combined to form child vertex \(c_{4}\), so that \(G_{0}\) is reached in (4e). Colors and hatching are used to show coarsened vertices, with darker colors representing more parent vertices, northeast lines representing vertices with two parent vertices, and northwest lines representing vertices with three parent vertices.
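The coarsening bookkeeping can be sketched as follows (an assumption-laden illustration rather than the authors' implementation; `nx.contracted_nodes` stands in for the described merging of parent vertices, and the `history` list records what the uncoarsening step needs to split children back into parents):

```python
import random
import networkx as nx

def coarsen(G, pops, uc0, pop_star, seed=None):
    """Sketch of the coarsening step: repeatedly contract random pairs of
    neighboring vertices whose combined population stays below Pop*, until
    only floor(uc0 * N) vertices remain."""
    rng = random.Random(seed)
    H, p = G.copy(), dict(pops)
    target, history, attempts = int(uc0 * len(G)), [], 0
    while len(H) > target and attempts < 100 * len(G):
        attempts += 1                             # guard against stalling
        u = rng.choice(list(H.nodes))
        feasible = [v for v in H[u] if p[u] + p[v] < pop_star]
        if not feasible:
            continue                              # try another vertex
        v = rng.choice(feasible)
        # store parents, populations, and neighbors for uncoarsening
        history.append((u, v, p[u], p[v], list(H[u]), list(H[v])))
        nx.contracted_nodes(H, u, v, self_loops=False, copy=False)
        p[u] += p.pop(v)                          # child keeps u's id here
    return H, p, history
```

Uncoarsening then replays `history` in reverse, re-inserting the parent vertices with their stored populations and edges and giving both parents the child's district assignment.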
### k-medoids

The final updated \(k\)-medoids algorithm begins with the selection of an uncoarsening schedule (\(UC\)), a maximum number of \(k\)-medoids iterations (\(\mathit{MI}\)), a neighborhood function (\(\mathit{NF}\)), and a maximum number of local search iterations (\(\mathit{LI}\)). \(G\) is then coarsened to \(G_{0}\), and \(k\) vertices are randomly selected from \(G_{0}\) to be the initial medoids. Each of these new medoids is considered a district, while the remaining vertices are considered unassigned. To assign vertices to districts, the algorithm first determines the district with the smallest population. It then performs one iteration of a Breadth First Search [28] from each of the vertices on the border of this minimum-population district to find adjacent vertices. These adjacent vertices are assigned to the district as they are found, provided they are not already assigned to a district and their addition will not cause the district population to exceed \(\mathit{Pop}^{*}\). If there are no vertices that can be added to the district, the algorithm moves on to the next smallest district. If no vertices can be added to any district without exceeding \(\mathit{Pop}^{*}\), then each unassigned vertex is added to the adjacent district with the smallest population.

Figure 5: Uncoarsening Example with \(UC:0.6,0.8,1\) on \(G_{0}\) from Figure 4 (5a), with \(q=2\) uncoarsening steps. From \(G_{0}\), assign each vertex to a district (5b). The first uncoarsening step removes the most recently created child vertex, \(c_{4}\), from the graph and adds the parent vertices \(v_{7}\) and \(v_{10}\) along with their respective edges (5c). Repeat until there are \(\lfloor\,uc_{1}N\,\rfloor=\lfloor\,0.8\times 10\,\rfloor=8\) vertices and \(G_{1}\) has been found (5d), then reassign vertices to districts through local search (5e). Repeat until \(G_{2}\) has been found and the graph is fully uncoarsened.

When all vertices are assigned to districts, the algorithm determines the medoid, \(m\), of each district. The new medoid is selected by first using Broder's Algorithm to randomly cut the edges of a given district until it forms a tree \(T\). We let \(P_{T,m}\) be the set of paths in \(T\) that originate at \(m\). For each path \(p\in P_{T,m}\) we compute its length, \(C_{p}\), by counting the number of edges along the path. Then we find the path \(p^{*}\) such that \(C_{p^{*}}>\sum_{p\in P_{T,m}\setminus p^{*}}C_{p}\), if it exists. Next, we find the vertex neighboring \(m\) along \(p^{*}\) and make this vertex the new medoid by reassigning \(m\). We repeat until there is no such path \(p^{*}\); at that point the current medoid is the new medoid.
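One way to read this path-length rule is as a tree-centroid computation: step from the current medoid into a branch whenever that branch contains more edges than all other branches combined. A self-contained sketch under this interpretation (our reading, not the paper's code):

```python
def branch_edges(T, m, n):
    """Number of tree edges in the branch of T entered from m via its
    neighbor n (the edge m-n itself plus all edges beyond n)."""
    seen, stack, count = {m, n}, [n], 1
    while stack:
        u = stack.pop()
        for w in T[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
                count += 1
    return count

def tree_medoid(T, m):
    """Step from the current medoid m into a branch whenever that branch
    contains more edges than all other branches combined; stop when no
    such branch exists. T can be a networkx graph or a dict of adjacency
    sets, since only iteration over T[u] is used."""
    while True:
        sizes = {n: branch_edges(T, m, n) for n in T[m]}
        total = sum(sizes.values())      # equals the number of edges of T
        heavy = [n for n, s in sizes.items() if s > total - s]
        if not heavy:
            return m                     # no dominating branch: m is final
        m = heavy[0]                     # move one vertex along that branch
```

The loop always terminates, since stepping into the heavy branch strictly decreases the number of edges on the far side of the current vertex.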
This new medoid is sensitive to which edges are cut in Broder's Algorithm, and since the algorithm determines the new medoid on \(T\), the medoid may not appear to be visually centered within the district. An example of the selection process for a new medoid is diagrammed in Figure 6.

Once a new medoid has been found for each district, we compute \(\mathit{Dev}\) for the districting plan. If \(\mathit{Dev}\) is below 1 or the algorithm has completed \(\mathit{MI}\) iterations, the \(k\)-medoids process is terminated. Otherwise, all vertices are unassigned from their districts, the iteration variable is incremented, and the district assignment process begins again. Since this process can cause the deviation to increase between iterations, the districting plan with the lowest deviation across all iterations is stored. Once the algorithm has terminated, \(\mathit{LI}\) iterations of local search are performed on the districting plan with the lowest deviation. The user-defined uncoarsening schedule is then followed to alternate between uncoarsening and performing a local search, until a local search has been performed on \(G_{q}\). The complete algorithm is presented in Algorithm 2.

In addition to the main goal of minimizing the population deviation, this method ensures the generated districting plans are contiguous. In the \(k\)-medoids process the Breadth First Search construction forces the initial plan to be contiguous, as TBs can only be added to adjacent districts. As each new set of medoids is found the Breadth First Search is repeated, so the districting plans remain contiguous. In the local search process both Flip and Swap moves maintain contiguity. A Flip will not move a TB if it is an articulation point, thus keeping the districts contiguous. A Swap move will not move a pair \((v_{i},v_{j})\) if either TB is an articulation point of its respective district, or if it does not share a border with any other TB in the other district, thus keeping the districts contiguous. Since the CMB method simply selects the better of the Flip and Swap moves, it also maintains contiguity. Lastly, the uncoarsening process also maintains contiguity, as when a child node is split, the parent nodes are assigned to the same district as the child node.

The final districting plan criterion is compactness, but this is not something we directly optimize for in our method. Growing the districts from medoids in the \(k\)-medoids process should keep the districts relatively compact, but we only measure the compactness at the very end of the algorithm, after the plan with the lowest deviation has been found. This lesser focus on compactness comes from the Indiana Citizens Redistricting Commission's 2021 public mapping competition, where compactness was considered less important than keeping cities and counties intact [29].

## 3 Tests and Results

### Tests

For testing we have chosen to use both Iowa and Florida to evaluate our method. We chose Iowa because it is one of the simplest nontrivial cases, having only 99 TBs and \(k=4\) districts. We chose Florida as a more challenging case, with 9435 VTDs and \(k=27\) districts. Furthermore, Florida has been used previously in the literature [7], which allows us to compare more directly to the method of Chen and Rodden. To maintain existing state boundaries we added a preprocessing step that combines all VTDs in a county, when the total county population is less than \(\mathit{Pop}^{*}\), into a single TB. With this preprocessing step we reduced Florida to 4700 TBs while keeping 60 of 67 counties intact.
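This preprocessing step might look as follows (a sketch under our assumptions; `county` maps each VTD to its county id, and merging is again expressed via edge contraction):

```python
import networkx as nx

def merge_small_counties(G, pops, county, pop_star):
    """Sketch of the preprocessing step: collapse all VTDs of a county
    into one TB whenever the county's total population is below Pop*."""
    H, p = G.copy(), dict(pops)
    by_county = {}
    for v in G:
        by_county.setdefault(county[v], []).append(v)
    for c, vtds in by_county.items():
        if len(vtds) > 1 and sum(pops[v] for v in vtds) < pop_star:
            keep, rest = vtds[0], vtds[1:]
            for v in rest:                        # merge into one TB
                nx.contracted_nodes(H, keep, v, self_loops=False, copy=False)
                p[keep] += p.pop(v)
    return H, p
```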
The population data we used comes from the 2010 census by the United States Census Bureau [30; 31], and the shape files we used to build the graphs also come from the United States Census Bureau [32]. As a further note, we only consider a pair of TBs neighbors if they share a border with nonzero length; TBs that only share a corner point are not considered neighbors. The tests are all run on a single CPU (AMD 2950X at 3.5 GHz) with the algorithms implemented in Python 3.8.5. First, we test parameters to see which allow our model to perform best. Next, we look at the results with the best parameters. Lastly, we compare to the method used by Chen and Rodden [7].

#### 3.1.1 Parameter Selection

In building the algorithm we evaluated 90 possible parameter sets, where each set consisted of a neighborhood function, a maximum number of local search iterations, and an uncoarsening schedule. We tested three neighborhood functions (\(\mathit{NF}\in\{\text{Flip},\text{Swap},\text{CMB}\}\)), five maximum local search iteration values (\(\mathit{LI}\in\{100,250,500,750,1000\}\)), and six uncoarsening schedules (\(UC\in\{UC1,UC2,UC3,UC4,UC5,UC6\}\)).

```
1:Choose \(UC\), \(\mathit{MI}\), \(\mathit{LI}\), and \(\mathit{NF}\)
2:Coarsen the graph to \(G_{0}\)
3:Randomly select \(k\) vertices to be the initial medoids
4:while \(\mathit{Dev}>1\) and Iterations \(<\mathit{MI}\) do
5: while there are vertices not assigned to a district do
6:  WorkingDistrict \(\leftarrow\) district with minimum population
7:  U \(\leftarrow\) set of all unassigned vertices adjacent to WorkingDistrict
8:  Add vertices of U to WorkingDistrict provided the district population does not exceed \(\mathit{Pop}^{*}\)
9:  if no more vertices can be added to any district then
10:   Add each unassigned vertex to the adjacent district with minimum population
11:  end if
12: end while
13: Compute the medoid of each district; these will be the medoids in the next iteration
14:end while
15:Perform \(\mathit{LI}\) iterations of local search with \(\mathit{NF}\) on the districting plan with smallest deviation
16:for i = 1, ..., q do
17: Uncoarsen the local search output to \(G_{i}\)
18: Perform \(\mathit{LI}\) iterations of local search with \(\mathit{NF}\) on \(G_{i}\)
19:end for
```
**Algorithm 2** \(k\)-Medoids

Using each of these parameter sets we ran the resulting algorithm 18 times on Florida, each time with a fixed number of \(k\)-medoids iterations (\(\mathit{MI}=100\)). The summaries of results for these tests can be seen in Tables 1, 2, and 3. The best-performing parameters are used in the full tests; if multiple parameter values performed well, we used both.

Examining the results for the uncoarsening schedules in Table 1, we can see that \(UC6\) outperforms the other uncoarsening schedules with a minimum deviation of \(4.28\%\), whereas no other uncoarsening schedule produces districting plans with under \(10\%\) deviation. While UC6 is slower than the other uncoarsening schedules, we feel that the lower average deviation and minimum deviation values more than make up for the extra 60-120 seconds of runtime. Even though UC3 is modestly faster than UC5, we consider UC5 the second best uncoarsening schedule because it has the next lowest mean deviation after UC6 and approximately matches the minimum deviation of UC3. With this in mind our full tests have been run with uncoarsening schedules UC5 and UC6.

Examining the neighborhood function results in Table 2, we can see that there is very little difference in the runtimes between the three neighborhood functions.
However, there is a clear difference in terms of deviation, with CMB attaining lower deviation in both the mean and minimum cases. While we expected CMB to perform better, as it picks the better of Flip and Swap, the difference between Flip and Swap was surprising. With this in mind our full tests have been run with CMB as the neighborhood function.

Examining the results for the maximum number of local search iterations in Table 3, we see that the \(750\) and \(1000\) iterations cases outperform the other options. They attain the lowest deviation values of those tested and are only about \(30-40\) seconds slower than the fastest cases. While the \(500\) iterations case achieves similar deviation values to the \(750\) and \(1000\) iteration cases, its runtimes are much slower. With this in mind our full tests have been run with 750 and 1000 maximum local search iterations.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Uncoarsening & Mean & Min & Mean & Min \\ Schedule & _Dev_ & _Dev_ & Runtime & Runtime \\ (UC) & (\%) & (\%) & (s) & (s) \\ \hline UC1 & 97.206 & 13.686 & 173.903 & 97.630 \\ UC2 & 92.494 & 14.207 & 192.596 & 107.234 \\ UC3 & 91.716 & 10.318 & 200.839 & 101.113 \\ UC4 & 93.272 & 10.153 & 258.924 & 144.469 \\ UC5 & 86.719 & 10.897 & 225.245 & 107.370 \\ UC6 & 85.920 & 4.285 & 298.919 & 153.151 \\ \hline \hline \end{tabular} \end{table} Table 1: Uncoarsening Schedule Tests. Uncoarsening Schedule: uncoarsening schedule being tested; Mean _Dev_: mean deviation across all tests; Min _Dev_: minimum deviation across all tests; Mean Runtime: mean time to coarsen to \(G_{0}\) and run the \(k\)-medoids algorithm, including all uncoarsening and local searches; Min Runtime: minimum time to coarsen to \(G_{0}\) and run the \(k\)-medoids algorithm, including all uncoarsening and local searches

#### 3.1.2 Results

We tested the algorithm on Iowa and Florida with slightly different parameter sets. For both states we used 100 iterations of \(k\)-medoids (\(\mathit{MI}=100\)), CMB as the neighborhood function (\(\mathit{NF}=\text{CMB}\)), and two different maximum local search iteration values (\(\mathit{LI}\in\{750,1000\}\)). Since Iowa has so few TBs, the algorithm performs well without any uncoarsening schedule, so we chose not to coarsen Iowa at all (\(UC=UC0\)).
Since Florida has many more TBs than Iowa, it produces a much more complicated graph. To handle the complexity of the resulting graph, we used two different uncoarsening schedules (\(UC\in\{UC5,UC6\}\)). For each parameter set and each state we ran the resulting algorithm 100 times; the summary of results for these tests is available in Table 4. We split the results into two parts: the first being the results from performing the \(k\)-medoids part of the algorithm on the fully coarsened graph with no local search, the second being the results obtained after performing all of the uncoarsening and local searches.

\begin{table} \begin{tabular}{c c c c c} \hline Neighborhood & Mean & Min & Mean & Min \\ Function & \(\mathit{Dev}\) & \(\mathit{Dev}\) & Runtime & Runtime \\ (\(\mathit{NF}\)) & (\%) & (\%) & (s) & (s) \\ \hline Flip & 89.375 & 34.173 & 213.650 & 97.630 \\ Swap & 135.545 & 49.255 & 226.173 & 98.584 \\ CMB & 48.744 & 4.285 & 235.389 & 98.182 \\ \hline \end{tabular} \end{table} Table 2: Neighborhood Function Tests. Neighborhood Function: neighborhood function being tested; Mean _Dev_: mean deviation across all tests; Min _Dev_: minimum deviation across all tests; Mean Runtime: mean time to coarsen to \(G_{0}\) and run the \(k\)-medoids algorithm, including all uncoarsening and local searches; Min Runtime: minimum time to coarsen to \(G_{0}\) and run the \(k\)-medoids algorithm, including all uncoarsening and local searches

\begin{table} \begin{tabular}{c c c c c} \hline Local Search & Mean & Min & Mean & Min \\ Iterations & \(\mathit{Dev}\) & \(\mathit{Dev}\) & Runtime & Runtime \\ (\(\mathit{LI}\)) & (\%) & (\%) & (s) & (s) \\ \hline 100 & 101.981 & 19.446 & 173.893 & 97.630 \\ 250 & 91.352 & 15.368 & 229.778 & 116.340 \\ 500 & 89.099 & 9.995 & 308.498 & 163.629 \\ 750 & 86.525 & 4.285 & 189.575 & 125.957 \\ 1000 & 85.396 & 7.446 & 216.219 & 138.463 \\ \hline \end{tabular} \end{table} Table 3: Local Search Iterations Tests. Local Search Iterations: maximal number of local search iterations being tested; Mean _Dev_: mean deviation across all tests; Min _Dev_: minimum deviation across all tests; Mean Runtime: mean time to coarsen to \(G_{0}\) and run the \(k\)-medoids algorithm, including all uncoarsening and local searches; Min Runtime: minimum time to coarsen to \(G_{0}\) and run the \(k\)-medoids algorithm, including all uncoarsening and local searches

From Table 4 we can see that our method works very well for Iowa. Without local search the \(k\)-medoids algorithm is able to attain a mean deviation of about 1.1%, just over the legally allowed 1% deviation. Including local search decreases the mean deviation to about 0.69%, just below the legal limit. The local search also slightly decreases the mean compactness value (only a difference of \(\sim 0.01\)). There is hardly any difference between the two parameter settings for Iowa, with the 750 iterations case finding slightly lower deviation values, slightly higher compactness scores, and running a bit slower than the 1000 iterations case.

Our method is less consistent for the more complicated case of Florida. While including local search decreases the mean deviation value by more than 100 percentage points, the minimum deviation values found were still higher than 1%. As with the Iowa case, we can see that adding local search decreases the mean compactness score. Comparing the parameter settings we can see that the final districting plans produced using UC6 tend to have lower deviations than final plans produced using UC5.
We can also see that districting plans produced with 1000 iterations of local search tend to have lower deviations than plans produced with 750 iterations of local search. Here we also note the random nature of this algorithm, as the minimum deviation in Table 4 is 10.876% for UC6 with 1000 iterations of local search, while the minimum deviation found in the Section 3.1.1 parameter setting tests was 4.285% for UC6 with 750 iterations of local search.

With our method consistently finding districting plans with deviation below 1% on Iowa, we added an extra step to find more good districting plans over the course of a single run. Anytime the algorithm finds a districting plan that has a deviation less than 5% during the \(k\)-medoids phase, we save it. Then we perform the local search step on all of these additional districting plans as well. The results of these tests are in Table 5. With this additional step we are able to find another nine districting plans that have a mean deviation of about 0.6% each time we run our algorithm. Moreover, these extra districting plans come at a cost of about 2.77 seconds each, slightly lower than the approximately 3.46 seconds for the first districting plan. Overall this leads to finding about ten districting plans below 1% deviation in just under thirty seconds.

\begin{table} \begin{tabular}{l r r r r r r r r r} \hline \hline & & & \multicolumn{2}{c}{\(k\)-medoids} & \multicolumn{3}{c}{Local Search} & Algorithm & Total \\ State & \(LI\) & UC & Mean & Mean & Mean & Min & Mean & Runtime & Runtime \\ & & & _Dev_(\%) & Comp & _Dev_(\%) & _Dev_(\%) & Comp & (s) & (s) \\ \hline IA & 750 & UC0 & 1.126 & 0.773 & 0.693 & 0.181 & 0.763 & 0.718 & 3.478 \\ IA & 1000 & UC0 & 1.171 & 0.769 & 0.699 & 0.271 & 0.760 & 0.690 & 3.450 \\ FL & 750 & UC5 & 152.221 & 0.707 & 44.059 & 13.915 & 0.626 & 276.853 & 469.880 \\ FL & 750 & UC6 & 163.727 & 0.676 & 40.769 & 12.511 & 0.612 & 328.559 & 512.829 \\ FL & 1000 & UC5 & 153.110 & 0.707 & 42.043 & 12.061 & 0.616 & 321.880 & 523.083 \\ FL & 1000 & UC6 & 169.677 & 0.675 & 36.707 & 10.876 & 0.607 & 391.102 & 579.940 \\ \hline \end{tabular} \end{table} Table 4: Results for \(k\)-medoids algorithm on Iowa and Florida

### Comparison to Alternative Method

Next we wanted a direct comparison to a different, but similar, method. We implemented our own version of the algorithm described by Chen and Rodden in [7]. We ran tests on this method to directly compare it to the best versions of \(k\)-medoids. Since coarsening allows the local search to occur many times, directly comparing local search iterations between the two methods would result in an uneven number of searches. To make up for this difference, we allowed Chen and Rodden's method to take \(\mathit{LI}\times|UC|\) iterations. This way each method uses the same number of total local search iterations. We compare the methods using the same tests from Section 3.1.2. From Table 6 we can see that our \(k\)-medoids method produces districts with lower deviation than the other method in the same number of iterations.
Since our method runs faster than the Chen and Rodden method (by \(\sim 20-30\) seconds for Iowa and \(\sim 300-500\) seconds for Florida), our method is able to find more districting plans faster. With our method also tending to have lower deviation values, it is more likely to find a good districting plan in less time. Furthermore, if we account for the extra districting plans found by local searching on any plan found with \(\mathit{Dev}<5\%\), our method is able to find about ten districting plans with \(\mathit{Dev}<1\%\) in approximately the same time it takes the other method to finish. However, the Chen and Rodden method does produce districts that are slightly more compact than ours, with the mean compactness values being \(0.02\) to \(0.09\) higher. Overall, this indicates to us that our method outperforms the Chen and Rodden method.

### Current Districting Plans

For a baseline comparison we look at the official districting plans for Iowa and Florida based on the 2010 Census. The official plan Iowa used in 2021 can be seen in Figure 7a. It has a maximum population deviation of \(0.005\%\) and compactness of \(0.78\).

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & & & Mean & Mean & Mean & Mean & Total \\ State & _LI_ & UC & Additional & _Dev_ & Comp & Runtime & Runtime \\ & & & Districts & Per (\%) & Per & Per (s) & (s) \\ \hline IA & 750 & UC0 & 9.290 & 0.667 & 0.755 & 2.775 & 28.577 \\ IA & 1000 & UC0 & 8.830 & 0.671 & 0.775 & 2.775 & 28.642 \\ \hline \hline \end{tabular} \end{table} Table 5: Additional District Results. State: state tests were done on; \(LI\): number of local search iterations; UC: uncoarsening schedule followed; Mean Additional Districts: mean number of districting plans with deviation below \(5\%\) found during the \(k\)-medoids process; Mean _Dev_ Per: mean deviation of each additional districting plan after local search is performed; Mean Comp Per: mean compactness value of each additional districting plan after local search is performed; Mean Runtime Per: mean time to perform a local search on each additional districting plan and summarize results; Total Runtime: mean time to read in data, run the \(k\)-medoids algorithm, perform local searches on additional districting plans and summarize all results

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{State} & \multirow{2}{*}{_LI_} & \multirow{2}{*}{UC} & \multicolumn{3}{c}{Local Search} & Algorithm & Total \\ & & & & Mean & Min & Mean & Runtime & Runtime \\ & & & & \(Dev(\%)\) & \(Dev(\%)\) & Comp & (s) & (s) \\ \hline KMED & IA & 750 & UC0 & 0.693 & 0.181 & 0.763 & 0.718 & 3.478 \\ CR & IA & 750 & UC0 & 7.162 & 0.426 & 0.782 & 28.155 & 29.663 \\ \hline KMED & IA & 1000 & UC0 & 0.699 & 0.271 & 0.760 & 0.690 & 3.450 \\ CR & IA & 1000 & UC0 & 6.967 & 0.104 & 0.784 & 34.914 & 36.388 \\ \hline KMED & FL & 750 & UC5 & 44.059 & 13.915 & 0.626 & 276.853 & 469.880 \\ CR & FL & 4500 & UC5 & 143.884 & 65.195 & 0.692 & 706.139 & 728.052 \\ \hline KMED & FL & 750 & UC6 & 40.769 & 12.511 & 0.612 & 328.559 & 512.829 \\ CR & FL & 7500 & UC6 & 147.769 & 41.759 & 0.697 & 1080.773 & 1102.027 \\ \hline KMED & FL & 1000 & UC5 & 42.043 & 12.061 & 0.616 & 321.880 & 523.083 \\ CR & FL & 6000 & UC5 & 147.096 & 50.079 & 0.694 & 774.962 & 797.021 \\ \hline KMED & FL & 1000 & UC6 & 36.707 & 10.876 & 0.607 & 391.102 & 579.940 \\ CR & FL & 10000 & UC6 & 143.654 & 51.167 & 0.699 & 1201.172 & 1223.036 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of the \(k\)-medoids algorithm with the Chen and Rodden method
Iowa law requires all districting plans to have deviation less than 1%, and ours achieve the legal requirement, although with deviation slightly higher than the official Iowa districting plan. Additionally, our districting plans have very comparable compactness values, averaging about 0.76.

The official plan Florida used in 2021 can be seen in Figure 7b. It has a maximum population deviation of 0.0007% and compactness of 0.767; the deviation equates to districts with populations that differ by at most 1 person. However, this was achieved using a finer-grained TB than we used, breaking the state into Census Blocks. For Florida there are 488553 Census Blocks, allowing for much more fine-tuning at the cost of less intuitive borders. This plan has a deviation at least 4.2847 percentage points lower than the best plan our method has created, and a compactness value about 0.1 higher than the plans produced by our method.

## 4 Conclusions and Future Work

Our method works very well on the simple case of Iowa, and performs reasonably on the more challenging case of Florida. In both cases it outperforms the method used by Chen and Rodden in terms of both time and deviation achieved. Our method combines many of the smaller ideas used throughout the years into one method. We also introduce a seemingly novel method to build initial districts by allowing the smallest district to expand next at each step.

Further work is needed on the remaining states and with the updated census data. Furthermore, we could add additional techniques to potentially improve our algorithm. Examples would be the addition of multiple-object moves in the local search step from [33], the implementation of geographs for faster local search [34; 35], forcing the local search to only make moves that strictly decrease the deviation, and different selection criteria for which pair of districts to use in the local search phase.

Figure 7: Districting plans used by Iowa (7a) and Florida (7b) in 2021

We could also add a specific step to create minority-majority districts in states where they are required. An approach for this would be to merge neighboring TBs with high minority populations before any clustering is done. This would force the minority communities of interest to stay intact and could lead to the required minority-majority districts.

## Declarations

### Competing Interests

On behalf of all authors, the corresponding author states that there is no conflict of interest.

### Funding

We wish to acknowledge the support of the National Science Foundation for Anthony Pizzimenti through their grant NSF-1407216.

### Availability of data and materials

The census data analyzed during the current study are available from the Census Bureau's website. The Florida data is at [https://data.census.gov/cedsci/table?text=P1&g=0400000US12%24700000&y=2010&tid=DECENNIALPL2010.P1](https://data.census.gov/cedsci/table?text=P1&g=0400000US12%24700000&y=2010&tid=DECENNIALPL2010.P1) and the Iowa data is at [https://data.census.gov/cedsci/table?text=P1&g=0400000US19%24700000&y=2010&tid=DECENNIALPL2010.P1](https://data.census.gov/cedsci/table?text=P1&g=0400000US19%24700000&y=2010&tid=DECENNIALPL2010.P1). Alternatively, the data can be reached from [https://data.census.gov](https://data.census.gov), searching for table P1, selecting the year 2010, and selecting Geography tab \(\rightarrow\) Voting District \(\rightarrow\) (Iowa or Florida) \(\rightarrow\) All Voting Districts (VTD).
The shape files used during the current study are available from [https://www2.census.gov/geo/tiger/TIGER2012/VTD/](https://www2.census.gov/geo/tiger/TIGER2012/VTD/). The code and results data that support the findings of this study are available from the corresponding author upon request.
2305.19196
Responsible Composition and Optimization of Integration Processes under Correctness Preserving Guarantees
Enterprise Application Integration deals with the problem of connecting heterogeneous applications, and is the centerpiece of current on-premise, cloud and device integration scenarios. For integration scenarios, structurally correct composition of patterns into processes and improvements of integration processes are crucial. In order to achieve this, we formalize compositions of integration patterns based on their characteristics, and describe optimization strategies that help to reduce the model complexity, and improve the process execution efficiency using design time techniques. Using the formalism of timed DB-nets - a refinement of Petri nets - we model integration logic features such as control- and data flow, transactional data storage, compensation and exception handling, and time aspects that are present in reoccurring solutions as separate integration patterns. We then propose a realization of optimization strategies using graph rewriting, and prove that the optimizations we consider preserve both structural and functional correctness. We evaluate the improvements on a real-world catalog of pattern compositions, containing over 900 integration processes, and illustrate the correctness properties in case studies based on two of these processes.
Daniel Ritter, Fredrik Nordvall Forsberg, Stefanie Rinderle-Ma
2023-05-30T16:40:18Z
http://arxiv.org/abs/2305.19196v2
Responsible Composition and Optimization of Integration Processes under Correctness Preserving Guarantees

###### Abstract

Enterprise Application Integration deals with the problem of connecting heterogeneous applications, and is the centerpiece of current on-premise, cloud and device integration scenarios. For integration scenarios, structurally correct composition of patterns into processes and improvements of integration processes are crucial. In order to achieve this, we formalize compositions of integration patterns based on their characteristics, and describe optimization strategies that help to reduce the model complexity, and improve the process execution efficiency using design time techniques. Using the formalism of timed DB-nets -- a refinement of Petri nets -- we model integration logic features such as control- and data flow, transactional data storage, compensation and exception handling, and time aspects that are present in reoccurring solutions as separate integration patterns. We then propose a realization of optimization strategies using graph rewriting, and prove that the optimizations we consider preserve both structural and functional correctness. We evaluate the improvements on a real-world catalog of pattern compositions, containing over 900 integration processes, and illustrate the correctness properties in case studies based on two of these processes.

keywords: Enterprise application integration, enterprise integration patterns, optimization strategies, pattern compositions, Petri nets, responsible programming, trustworthy application integration

## 1 Introduction

In a highly digitized and connected world, in which enterprises get more and more intertwined with each other, the integration of applications scattered across on-premises, cloud and devices is crucial for enabling innovation, improved productivity, and more accessible information [1]. This is facilitated by process technology based on integration building blocks called integration patterns [2; 3; 4; 5]. The composition of these patterns into integration processes can result in complex models that are often vendor-specific, informal and ad-hoc [5]; optimizing such integration processes is often desirable, but hard. In most cases complex process control flows are further complicated by data flow, transactional data storage, compensation, exception handling, and time aspects [6]. In previous work [7], we found that even simple integration processes show improvement potential, e.g., when considering data dependencies that allow for (sub-)process parallelization. In order to realize such improvements, it is crucial to also consider data flow in the model, but approaches for verification and formal analysis of "realistic, data-aware" integration processes are currently missing, as recent surveys on event data [8; 9], workflow management [10], and in particular application integration [5] report. Such approaches are needed in order to formally prove the structural and functional correctness of compositions of patterns and their optimizations, which in turn is needed to enable a responsible development of integration scenarios where integration processes behave as intended.
To enable such approaches for the process modeller, we propose a _responsible composition and optimization (ReCO) process_ for patterns, that covers the following objectives: (i) inherently correct structural process representation, (ii) means for representing and proving functional process execution correctness, (iii) semantic integration pattern aspects of control and data flow, transactional data storage, compensation, exception handling, and time, (iv) automatic identification and application of optimizations, and (v) correctness-preserving process changes. We argue that existing approaches do not fully support responsible integration pattern composition and optimization with correctness preserving guarantees (cf. related work in Sec. 9).

Figure 1 visualises the challenges that need to be overcome in order to enable such a responsible composition and optimization process achieving objectives (i)-(v). Integration developers and experts provide semantically meaningful pattern realizations in expressive specialised languages such as timed db-nets [6] (1). [... (2) ...] of the pattern definitions is currently not specified (3), and the modeling and execution semantics layers are not connected (4). While improvements are conveniently defined on the higher-level modeling layer for automatic changes to processes [7] (5), in manual modeling and automatic improvement cases, it is currently not possible to verify that the improved process has the same behaviour as the original process with respect to the execution semantics of the composed integration patterns (6).

This work aims to fill these gaps, based on the following research questions that guide the design and development of a responsible composition and optimization process:

* Q1: How can the user be supported and guided during pattern composition and process modeling?
* Q2: When are pattern compositions correct?
* Q3: How to responsibly determine and apply optimizations?

Question Q1 is related to objective (i), Q2 to objectives (ii)-(iii), and Q3 to (iv)-(v). In the conference version of this paper [7], we provided the foundations for Q1 (and partially Q3). Pattern compositions were represented as typed pattern graphs, based on pattern characteristics and contracts, which inherently guarantee structurally correct compositions, and thus guide and support the user. Furthermore, the graph-based representation of integration patterns allows for the realization of optimizations as graph rewriting rules. Our evaluation showed that effective improvements could be identified and applied to real-world integration processes, while structural correctness was preserved. However, functional correctness was not considered, meaning that process changes might not be _responsible_ (cf. objectives (ii)-(iii), (v)).

In this work, we extend our user-facing and structural-correctness-guaranteeing graph-based representation with an execution semantics using timed DB-nets [6]. To support the same notion of correctness based on pattern contracts as in [7], we define a new notion of _open_ timed db-nets that are capable of representing the data exchange between patterns. We then show how they can be composed, and specify their execution semantics. By interpreting pattern compositions in graph representation as compositions of open timed db-nets, and by proving that the translation results in structurally correct and semantically well-behaved nets, we can answer Question Q2. All in all, this makes automatic optimization of integration processes feasible, but now also taking functional correctness into account, thus answering Question Q3 fully. Hence this enables the study of ReCO for the first time.

Methodology. We follow the principles of the design science research methodology by Peffers et al. [12] to answer the research questions above: _"Activity 1: Problem identification and motivation"_ is based on a literature review and the assessment of vendor-driven solutions [5], as well as a quantitative analysis of integration pattern characteristics of EAI building blocks (existing catalogs of 166 integration patterns) and process improvements [7], resulting in requirements for a suitable formalism. We then address _"Activity 2: Define the objectives for a solution"_ by formulating objectives (i)-(v).
For _"Activity 3: Design and development"_, we create several artifacts/contributions to answer questions Q1-Q3 and realize objectives (i)-(v):

* a specification of an extensible, structural-correctness-enforcing representation that allows for efficient application of improvements,
* an extension of the formalism defining the execution semantics with inter-pattern data exchange analysis capabilities, based on open timed db-nets (cf. (3), (6)),
* an interpretation procedure of the graph representation as open timed db-nets (cf. (4)), and
* optimization realizations on the graph representation that leverage the interpretation to prove their correctness (cf. (5)).

Outline and history of this paper. This paper extends our previous conference paper [7] with several new contributions. Firstly, this version of the paper is structured along the ReCO process, which is introduced in Sec. 2, together with an integration process modeling example. In Sec. 3, we analyze integration pattern characteristics, which are relevant for developing our formalisms, and collect optimization strategies for integration processes. We specify pattern compositions with inherent structural correctness together with an abstract cost model in Sec. 4. Section 5, which is a new section, extends the timed db-net formalism [6] to open nets to introduce compositional aspects. In Sec. 6, which is also completely new, we combine the two formalisms through interpretation and synchronization techniques. We specify realizations of optimizations in Sec. 7. The new section Sec. 7.7 shows that the optimizations preserve functional correctness. In Sec. 8, we evaluate the optimizations on a large body of integration processes, and apply the ReCO process to two integration processes. We discuss related work in Sec. 9 and conclude in Sec. 10.

Figure 1: End-to-end perspective from integration process modeling to verifiable execution semantics and automatic, correctness-preserving improvements (current gaps or missing aspects in red color).

## 2 Responsible Composition and Optimization Process for Patterns

In order to reason about responsible pattern composition and optimization, one first needs to consider formalizations of _patterns_, _compositions_, and _composition improvements_. In this
With a thorough understanding of pattern characteristics within a process, the process can be formalized with inherent structural correctness (_check composition structure_ in Fig. 2). Semantic correctness of the whole composition of patterns can be checked through a notion of composition added to the pattern formalization (_check composition semantics_ in Fig. 2). **Improvements**: The foundation of structurally and semantically correct processes allows for proposed improvements that preserve the processes' correctness (_improve compositions_ in Fig. 2). The improvements could be defined by integration experts, but also other users in a formalism provided by the integration platform that allows for process rewriting suitable to the underlying formalisms (_define rewrite rule_ in Fig. 2). The improvements are automatically matched (_match & apply rewrite rules_ in Fig. 2), and if applicable, it leads to the application of an improvement (_rewrite composition_ in Fig. 2). We now introduce the composition stage by example of an integration process that could be improved using ReCO. ### Potential Process Optimization by Example Many organizations have started to connect their on-premise applications such as Customer Relationship Management (CRM) systems with cloud applications such as SAP Cloud for Customer (COD) using integration processes similar to the one shown in Fig. 3. A _CRM Material_ is sent from the CRM system via EDI (more precisely the SAP IDOC transport protocol) to an integration process running on SAP Cloud Platform Integration (CPI) [13]. The integration process enriches the message header (MSG.HDR) with additional information based on a document number for reliable messaging (i.e., AppID), which allows redelivery of the message in an exactly-once service quality [4]. The IDOC structure is then mapped to the COD service description and sent to the COD receiver. Already in this simple integration process, an obvious improvement can be applied: the data-independent Content Enricher and Message Translator patterns [2] could be executed in parallel. Importantly, such a change does not alter the behaviour of the integration process. In this paper, we seek to find a mechanism to combine the inherently, structurally correct pattern composition formalism from [7] with the work on timed db-nets [6] that allow for semantically correct definitions of integration patterns, and to prove that improvements are correctness-preserving. The interaction between the user/modeler and the integration system requires a ReCO process that addresses the objectives (i)-(v). ## 3 Background and Requirements In this section, we give a brief background on application integration patterns and their optimizations, by analyzing recurring pattern characteristics as well as collecting existing optimizations as strategies. We also derive and discuss requirements for a suitable formalism for pattern compositions in the context of the optimization strategies. ### Integration Pattern Characteristics Enterprise integration patterns (EIPs) [2] with recent additions [4; 5] form a suitable and important abstraction when implementing application integration scenarios. 
Besides their original differentiation in functional categories such as _message channels_, _message routers_, _message transformations_, and _messaging endpoints_ among others, there are more subtle means of Figure 2: Responsible pattern composition and optimization process (white colored boxes denote the main steps, grey colored box shows the _pattern formalization process_ from [6], to which we bridge through translation of single patterns and compositions) classifying patterns by _pattern characteristics_ that consider the control and data flow within and between integration patterns, and thus help greatly when formalizing pattern compositions. We analyzed all patterns from the literature [2; 4; 5] regarding their control and data flow characteristics. Our findings are summarised in Fig. 4. The characteristics of _channel_ and _message cardinality_ (CC and MC, respectively) are ubiquitous and can be found in every pattern. We also identified a number of optional and non-exclusive characteristics: if the pattern _changes message contents_ (CHG), if it is _message generating_ (MG), if it has _conditions_ (CND), and if it has _programs_ / _complex expressions_ (PRG). Additionally, _time_ and _storage_ (also found in [6]) are important requirements, especially for our later considerations on runtime semantics, but on the level of compositions, they are not very relevant due to their pattern local nature, and are subsumed under control and data flow. Together these characteristics summarize all relevant control and data flow properties of the considered integration patterns. In this work, composition and structure becomes relevant for checking structural correctness properties, while data processing and exchange characteristics are required mostly for a composition's semantic correctness. Both notions of correctness are especially relevant for modeling as well as any improvements to the composition (e.g., in the form of optimizations). #### 3.1.1 Ubiquitous Characteristics Every pattern has both a channel and a message cardinality, covering control and data flow aspects that are relevant for composing patterns. Channel CardinalityThe channel cardinality specifies the number of incoming and outgoing message channels of an integration pattern. It is especially important for the structural correctness of a pattern composition. The relative number of patterns of each channel cardinality can be found in Fig. 5. A _zero-to-one_ or _one-to-zero_ cardinality is exclusively found in _start_- and _end_ components of a composition, like message endpoints (e.g., commutative endpoint [5]) or integration adapters (e.g., event-driven consumer [2]). Most of the transformation patterns and some of the routing patterns are _message processors_ that have a channel cardinality of _one-to-one_ (e.g., aggregator, splitter [2]). The remainder of the routing patterns are either _forks_ with cardinality _one-to-many_ (e.g., multicast), _conditional forks_ with _one-to-many_ (_cond._) (e.g., content-based router [2]), or _joins_ with cardinality _many-to-one_ (e.g., join router [5]). We found no _many-to-many_ patterns at all. Message cardinalitySimilar to the channel cardinality, the message cardinality specifies the number of incoming and outgoing messages. However, the message cardinality is relevant for specifying the data transfer between patterns in a composition. As summarised in Fig. 
As summarised in Fig. 5, we found that most of the integration patterns receive or require one message and emit one message (e.g., message translator, message signer). There are also patterns that require one message and emit several messages (e.g., splitter, multicast) and, similarly, patterns that receive many and emit only one (e.g., aggregator). Finally, there are patterns that require zero and emit one or vice versa (e.g., event-driven consumer or producer). Note that there are no patterns with a message cardinality of _one-to-many (cond.)_ or _many-to-many_.

Figure 3: Replicate material from SAP Business Suite (a _hybrid integration_ scenario in BPMN)

Figure 4: Categorizing integration pattern characteristics according to control and data flow

#### 3.1.2 Optional Characteristics

We found that certain data flow characteristics are only present in some patterns. The number of patterns that have each identified characteristic can be found in Fig. 6.

Message Generating. A pattern is message generating if it does not simply pass or alter the received message, but creates a completely new message (e.g., aggregator, splitter). The new message might preserve data and structure from a received message, but will have a new message identifier. If a newly generated message is composed out of several received messages, the lineage to the original messages is preserved by adding a message history [2]. We identified only three message generating patterns.

Changing Message Content. In some cases it might be interesting to distinguish between integration patterns that actually change the content of received messages (i.e., mostly, but not exclusively, message transformation patterns) and read-only patterns, which either read data from the message for evaluating a condition (e.g., content-based router) or do not look into the message at all (e.g., multicast [5], claim check [2]). We found 29 message changing patterns.

Conditions. A condition is a read-only, user-defined function that returns a Boolean valuation. Conditions are mostly used in routing patterns, which decide on route or no-route when receiving a message. We found 40 patterns that require a condition to function.

Program/Complex Expressions. A program or complex expression is an arbitrary, user-defined function that might change the content of a message by local or remote program execution (incl. remote procedure and database calls), which we found in 34 patterns. As can be seen in Fig. 6, there are more patterns requiring an expression than there are patterns changing the content of a message. Consequently, there is a small number of read-only patterns that require more complex processing, potentially with side-effects (e.g., load balancer [5]).

None. Since the identified characteristics are optional and non-exclusive, there is also a significant number of 36 patterns that do not have any of the characteristics (e.g., the detour [5], wire tap [2]).

#### 3.1.3 Summary

Analysing all known application integration patterns from the literature [2; 4; 5], we identified and categorized several control and data flow characteristics relevant for the structural composition of those patterns and their internal and inter-pattern data flow. Channel cardinality and message cardinality are relevant to all patterns, but other characteristics are optional and non-exclusive.
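As an illustration (our own encoding, not the paper's formal definitions; the flag values for the two example patterns are indicative only), the identified characteristics could be captured in a record like the following:

```python
from dataclasses import dataclass
from typing import Tuple

MANY = -1  # sentinel for a "many" cardinality

@dataclass(frozen=True)
class Characteristics:
    channel_card: Tuple[int, int]      # CC: (inbound, outbound) channels
    message_card: Tuple[int, int]      # MC: (received, emitted) messages
    changes_content: bool = False      # CHG
    message_generating: bool = False   # MG
    has_condition: bool = False        # CND
    has_program: bool = False          # PRG

# Indicative examples, following the classification in Sec. 3.1:
aggregator = Characteristics(channel_card=(1, 1), message_card=(MANY, 1),
                             message_generating=True)
content_based_router = Characteristics(channel_card=(1, MANY),
                                       message_card=(1, 1),
                                       has_condition=True)
```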
### Static Optimization Strategies

We consider improvements very important in the context of EAI, and thus we briefly survey recent attempts to optimize composed EIPs, in order to motivate the need to formalize their semantics. As a result, we derive as-yet-unexplored prerequisites for optimizing compositions of EIPs. A more detailed collection of optimization strategies can be found in the (optional) supplementary material [14].

Figure 5: Channel and message cardinalities ("zero-to-one" includes "one-to-zero")

Figure 6: Further characteristics and configurations

#### 3.2.1 Identifying Optimization Strategies

Since a formalization of the EAI foundations in the form of integration patterns for static optimization of "data-aware" pattern processing is missing [5], we conducted a horizontal literature search [34] to identify optimization techniques in related domains. For EAI, the domains of business processes, workflow management and data integration are of particular interest. The results of our analysis are summarized in Tab. 1. Out of the resulting 616 hits, we selected 18 papers according to the search criterion "data-aware processes", and excluded work on unrelated aspects. This resulted in the _seven_ papers listed in Tab. 2. The mapping of techniques from related domains to EAI was done by, for instance, taking the idea of projection push-downs [16; 23; 25; 26; 35] and deriving the early-filter or early-mapping technique in EAI. We categorized the techniques according to their impact (e.g., structural or process, data flow) in the context of the objectives for which they provide solutions. In the following subsections, we briefly discuss the optimization strategies listed in Tab. 2, in order to derive prerequisites needed for optimizing compositions of EIPs. To relate them to their practical relevance and coverage so far (in the form of evaluations on "real-world" integration scenarios), we also discuss existing "data-aware" message processing solutions for each group of strategies.

#### 3.2.2 Process Simplification

We grouped together techniques whose main goal is reducing model complexity (i.e., the number of patterns) under the heading of process simplification. The cost reduction of these techniques can be measured by pattern processing time (latency, i.e., time required per operation) and model complexity metrics [37]. Process simplification can be achieved by removing redundant patterns, as in _Redundant Subprocess Removal_ (e.g., removing one of two identical sub-flows), _Combine Sibling Patterns_ (e.g., removing one of two identical patterns), or _Unnecessary Conditional Fork_ (e.g., removing redundant branching). As far as we know, the only practical study of combining sibling patterns can be found in Ritter et al. [11], showing moderate throughput improvements. The simplifications require a formalization of patterns as a control graph structure, which helps to identify and deal with the structural changes. Previous work targeting process simplification includes Bohm et al. [35] and Habib, Anjum and Rana [23], who use evolutionary search approaches on workflow graphs, and Vrhovnik et al. [26], who use a rule formalization on query graphs.
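As a rough illustration of such a rewrite (our sketch only; real applicability additionally depends on the pattern contracts and correctness conditions developed later in the paper), _Combine Sibling Patterns_ can be phrased as a graph rewrite on a `networkx` process graph:

```python
import networkx as nx

def combine_siblings(G: nx.DiGraph) -> bool:
    """Sketch of the 'Combine Sibling Patterns' rewrite: if two successors
    of the same node carry an identical pattern configuration (stored here
    in a hypothetical 'pattern' node attribute), keep one of them and
    rewire the other's outgoing edges; this reduces model complexity
    without changing behaviour, provided the siblings are behaviourally
    identical."""
    for n in list(G.nodes):
        succs = list(G.successors(n))
        for i, a in enumerate(succs):
            for b in succs[i + 1:]:
                if G.nodes[a].get("pattern") == G.nodes[b].get("pattern"):
                    for t in list(G.successors(b)):
                        G.add_edge(a, t)   # redirect b's outputs through a
                    G.remove_node(b)
                    return True            # one rewrite applied
    return False                           # no match; caller may stop

# Applied repeatedly until it returns False, one redundant sibling is
# removed per call.
```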
#### 3.2.3 Data Reduction

The reduction of data can be facilitated by pattern push-down optimizations of message-element-cardinality-reducing patterns, which we call _Early-Filter_ (for data; e.g., removing elements from the message content) and _Early-Mapping_ (e.g., applying message transformations), as well as by message-reducing optimization patterns like _Early-Filter_ (for messages; e.g., removing messages), _Early-Aggregation_ (e.g., combining multiple messages), _Early-Claim Check_ (e.g., storing content and claiming it later without passing it through the pipeline), and _Early-Split_ (e.g., cutting one large message into several smaller ones). Measuring data reduction requires a cost model based on the characteristics of the patterns, as well as the data and element cardinalities. For example, the practical realizations for multimedia [36] and hardware streaming [11] show improvements especially for early-filter, split and aggregation, as well as moderate improvements for early-mapping. This requires a formalization that is able to represent data or element flow. Data reduction optimizations target message throughput improvements (i.e., processed messages per time unit); however, some have a negative impact on the model complexity. Previous work on data reduction includes Getta [25], based on relational algebra, and Niedermann, Radeschütz and Mitschang [16], who define optimizations algorithmically for a graph-based model.

#### 3.2.4 Parallelization

Parallelization of processes can be facilitated through transformations such as _Sequence to Parallel_ (e.g., duplicating a pattern or a sequence of pattern processing) or, if not beneficial, reverted, e.g., by _Merge Parallel_. For example, good practical results have been shown for vectorization [30] and hardware parallelization [11]. Formalizing such operations again requires a control graph structure. Although the main focus of parallelization is message throughput, heterogeneous variants also improve latency. In both cases, parallelization requires additional patterns, which negatively impacts the model complexity, whereas merging parallel processes improves the model complexity and latency. Previous work on pattern parallelization includes Zhang et al. [24], who define a service composition model, to which algorithmically defined optimizations are applied.

\begin{table}
\begin{tabular}{p{113.8pt} p{28.5pt} p{28.5pt} p{113.8pt} p{113.8pt}}
\hline\hline
Keyword & Hits & Selected & Selection criteria & Selected papers \\
\hline
Business Process Optimization & 159 & 3 & data-aware processes & survey [15], optimization patterns [16; 17] \\
Workflow Optimization & 396 & 6 & data-aware processes & instance scheduling [18; 19; 20], scheduling and partitioning for interaction [21], scheduling and placement [22], operator merge [23] \\
Data Integration Optimization & 61 & 2 & data-aware process optimization (no schema-matching) & instance scheduling, parallelization [24], ordering, materialization, arguments, algebraic [25] \\
Added & n/a & 13 & expert knowledge & business process [26], workflow survey [10; 27], data integration [28], distributed applications [29], EAI [4; 5; 11; 30], placement [31; 32], resilience [33] \\
Overall & 616 & 23 & & \\
\hline\hline
\end{tabular}
\end{table}
Table 1: Optimizations in related domains — horizontal search

#### 3.2.5 Pattern Placement

For all of the data reduction optimizations (cf.
OS-2) "Pushdown to Endpoint" can be applied by extending the placement to the message endpoints, which improves message throughput and reduces the complexity of the integration process, while increasing the complexity at the message endpoints. For example, good practical results have been shown for vectorization [30] and cost-efficient, security-aware placement [31; 32].

#### 3.2.6 Reduce Interaction

The resilience and robustness of integration processes are crucial, especially in the cloud. Dependencies on resources used by an integration process (e.g., databases, message queuing) and on the message endpoints (e.g., mobile, cloud) have to be dealt with during message processing. Optimizations like _Ignore Failing Endpoints_ and _Reduce Requests_ help to deal with potentially unreliable network communication, and allow for smart network usage and reaction to exceptional situations or unavailabilities. This requires a formalism that is able to represent data flow and time. These optimizations target more stable processes, improved latency, and potentially higher throughput.

#### 3.2.7 Summary

We found several optimizations in related domains like data integration, business processes and workflows, and re-interpreted them in the context of EAI. The optimizations have different effects on relevant metrics like throughput, latency and model complexity, which we used to categorize them into the classes of optimization strategies OS-1 to OS-5 listed in Tab. 2: process simplification, data reduction, parallelization, pattern placement, and interaction reduction. For most of the optimizations we identified implementations that report a successful application to problems in the EAI domain.

### Discussion: Requirements for Formalizing Integration Pattern Compositions

Based on our analyses of the characteristics of single patterns and of the optimization strategies for pattern compositions, we briefly discuss requirements for the formalization of pattern compositions and of their improvements as optimizations. These requirements are listed, and set in context with the closest related approaches known from the application and data integration domains, in Tab. 3. We also give examples of optimization strategies (cf. OS-x) that are co-enabled when fulfilling the requirements.

We found that fundamental support for control flow is mandatory for pattern compositions, due to the pipes-and-filters nature of integration scenarios [2; 5] (e.g., co-enabling OS-1). Hence, a suitable formalism for pattern compositions requires a formal representation of control flow (**REQ-1: Formal representation of control flow**), e.g., as found in the first formalization attempts using Coloured Petri Nets by Fahland and Gierds [38] and in our recent work on timed db-nets [6], which improves on previous work regarding the formal representation and analysis of properties like data flow, time, transactional data storage and exceptions. The work by Böhm et al. [35] stems from the data integration domain, and thus only has a weak notion of control flow and none of integration patterns. As also found in [6], the known integration patterns require different aspects of time like timeouts, delays, and time-based rate limits to be functional (**REQ-2: Formal treatment of time**), e.g., co-enabling OS-5.
The concept of pipes-and-filters also requires support for the flow and processing of messages (**REQ-3: Formal representation of data flow**), which is also supported by the closest known formalizations: Coloured Petri nets in [38], plus an extension to db-nets [39] in [6], and a data dependency tree in [35] (e.g., co-enabling OS-2).

\begin{table}
\begin{tabular}{l l|c c c|c}
\hline
Strategy & Optimization & Throughput & Latency & Complexity & Practical studies \\
\hline
OS-1: Process Simplification & Redundant Sub-process Removal [35] & & \(\uparrow\) & \(\uparrow\) & - \\
 & Combine Sibling Patterns [23; 35] & & \(\uparrow\) & \(\uparrow\) & ([11]) \\
 & Unnecessary Conditional Fork [26; 35] & \(\nearrow\) & \(\uparrow\) & \(\uparrow\) & - \\
\hline
OS-2: Data Reduction & Early-Filter [16; 23; 25; 26; 35] & \(\uparrow\) & & & [11] \\
 & Early-Mapping [23; 25; 35] & \(\uparrow\) & & & [11; 36] \\
 & Early-Aggregation [23; 25; 35] & \(\uparrow\) & & & [36] \\
 & Claim Check [25; 35] & \(\uparrow\) & & \(\searrow\) & - \\
 & Early-Split [11] & \(\uparrow\) & \(\searrow\) & & [11; 36] \\
\hline
OS-3: Parallelization & Sequence to Parallel [16; 24; 26; 35] & \(\uparrow\) & & \(\searrow\) & [11; 30] \\
 & Merge Parallel Sub-processes [16; 24; 26; 35] & & \(\uparrow\) & \(\uparrow\) & [11] \\
 & Heterogeneous Parallelization [22] & \(\uparrow\) & & \(\searrow\) & - \\
\hline
OS-4: Pattern Placement & Pushdown to Endpoint (extending OS-2) & \(\uparrow\) & & (\(\uparrow\)) & [30; 31; 32] \\
\hline
OS-5: Reduce Interaction & Ignore Failing Endpoints [4; 5; 33] & & \(\uparrow\) & & - \\
 & Reduce Requests [26] & \(\nearrow\) & \(\uparrow\) & & - \\
\hline
\multicolumn{6}{l}{\(\uparrow\): improvement, \(\nearrow\): slight improvement, \(\searrow\): slight deterioration, empty: no effect} \\
\end{tabular}
\end{table}
Table 2: Optimization strategies in the context of integration processes

Another batch of requirements from related work [6] targets properties that have only recently been formally represented, and thus are not considered in [38; 35]. Let us briefly summarize these pattern-level requirements in the context of this work. To be operational, some of the patterns, like commutative and idempotent receivers as well as the aggregator, require the ability to persistently store data (**REQ-4: Formal treatment of databases and transactions**), e.g., co-enabling OS-2. Finally, potential exceptions have to be handled and compensated for within a pattern and on the composition level (**REQ-5: Formal treatment of exceptions and compensations**).

To represent and reason about pattern compositions, an adequate formalism must include the ability to structurally and semantically compose patterns (**REQ-6: Formalism must be compositional**), co-enabling OS-1 to OS-5. This core requirement is only partially supported in related work: in the work on Petri nets [38; 6], composition is assumed through the composable nature of Petri nets, but no formal construction of such compositions is given. Similarly, [35] introduces a composition of data integration operations, but again without a formal construction.

Once patterns are composed, the compositions will be subject to frequent changes such as extensions, adaptations due to changing legal requirements, or improvements and optimizations. In this work we focus on optimizations, which represent a comprehensive set of change operations on pattern compositions.
For a formal analysis of such changes, the optimizations themselves shall be represented in a formal way, such that compositions and their change operations can be formally analyzed (**REQ-7: Formal treatment of improvements (of control and data flow)**), co-enabling OS-1 to OS-5. The notion of change is partially considered in data integration by [35], but not in the recent application integration work [38; 6].

Finally, a suitable formal representation of pattern compositions shall allow for a structural and semantic analysis of the correctness of a composition. This requirement also contains the notion of remaining correct after applying changes to the compositions, e.g., in the form of optimizations (**REQ-8: Formal specification of preserving correctness (structurally and semantically) of compositions**), co-enabling OS-1 to OS-5. In the context of structural composition correctness, we identified the channel cardinality characteristic as a decisive correctness criterion based on the control flow. For example, a content-based router with a channel cardinality of 1:\(n\) can only be connected to one input and \(n\) output channels; otherwise the composition is structurally incorrect. The semantic correctness has to go deeper, into the fundamental execution semantics of integration patterns as defined in [6], and thus we build on top of that formalism. However, none of the current approaches [38; 6; 35] addresses the requirement of structural and semantic correctness for compositions, nor do they guarantee a general correctness-preserving property when changing compositions.

## 4 Graph-based Pattern Compositions

We now introduce a formalization of pattern compositions, and an abstract cost model for them. This is needed in order to rigorously reason about optimizations.

### Integration Pattern Graphs

Taking into account requirement REQ-1 of having a formal representation of control flow from Sec. 3.3, it is natural to model pattern compositions as extended control flow graphs [40], as we do in Definition 1. This gives a high-level modelling language that is easy to use and understand, and is close to the informal notation used by practitioners [4; 5]. To take requirement REQ-3 of having a formal representation of data flow into account, we further enrich the vertices of the graph with additional information in Definitions 2 and 3. Requirements REQ-2, REQ-4 and REQ-5 are pattern-local [6], and thus not relevant at the abstraction level of pattern compositions. They will become important when we consider the runtime semantics of compositions in Secs. 5 and 6. Control flow graphs can easily be composed into larger graphs, and hence requirement REQ-6 of composability is fulfilled. Requirements REQ-7 and REQ-8 will be addressed in Sec. 7, building on the definitions in this section.

Before we get to the definition of the kind of graph we need to model pattern compositions, let us fix some notation: a directed graph is given by a set of nodes \(P\) and a set of edges \(E\subseteq P\times P\). For a node \(p\in P\), we write \(\bullet p=\{p^{\prime}\in P\mid(p^{\prime},p)\in E\}\) for the set of direct predecessors of \(p\), and \(p\bullet=\{p^{\prime\prime}\in P\mid(p,p^{\prime\prime})\in E\}\) for the set of direct successors of \(p\).
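This notation translates directly into code; the following small Python sketch (with an illustrative encoding of nodes and edges of our own choosing) is reused in the later snippets:

```python
# Directed graph as a set of nodes P and a set of edges E ⊆ P × P.
P = {"start", "CE", "MT", "end"}
E = {("start", "CE"), ("CE", "MT"), ("MT", "end")}

def pred(p, E):
    """•p: the set of direct predecessors of p."""
    return {a for (a, b) in E if b == p}

def succ(p, E):
    """p•: the set of direct successors of p."""
    return {b for (a, b) in E if a == p}

assert pred("CE", E) == {"start"}
assert succ("CE", E) == {"MT"}
```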
\begin{table}
\begin{tabular}{l l c c c}
\hline
ID & Requirement & Fahland et al. [38] & Ritter et al. [6] & Böhm et al. [35] \\
\hline
REQ-1 & Control flow (pipes and filters) & \(\checkmark\) & \(\checkmark\) & (\(\checkmark\)) \\
REQ-2 & Time & & \(\checkmark\) & \\
\hline
REQ-3 & Data flow & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\
\hline
REQ-4 & Database, transactions & & \(\checkmark\) & \\
REQ-5 & Exceptions, compensations & & \(\checkmark\) & \\
\hline
REQ-6 & Compositional & (\(\checkmark\)) & (\(\checkmark\)) & (\(\checkmark\)) \\
REQ-7 & Improvements (control and data flow) & & & (\(\checkmark\)) \\
REQ-8 & Preserving correctness & & & \\
\multicolumn{5}{l}{\(\checkmark\): covered, (\(\checkmark\)): partially covered, empty: not covered or out of scope} \\
\end{tabular}
\end{table}
Table 3: Formalization requirements

**Definition 1** (Integration pattern type graph).: _An integration pattern typed graph (IPTG) is a directed graph with a set of nodes \(P\) and a set of edges \(E\subseteq P\times P\), together with a function \(type:P\to T\), where \(T=\{\)start, end, message processor, fork, structural join, condition, merge, external call\(\}\). An IPTG \((P,E,type)\) is correct if_

* _there exist_ \(p_{1},p_{2}\in P\) _with_ \(type(p_{1})=\text{start}\) _and_ \(type(p_{2})=\text{end}\)_;_
* _if_ \(type(p)=\text{start}\) _then_ \(|\bullet p|=0\)_, and if_ \(type(p)=\text{end}\) _then_ \(|p\bullet|=0\)_;_
* _if_ \(type(p)\in\{\text{fork},\text{condition}\}\) _then_ \(|\bullet p|=1\) _and_ \(|p\bullet|>1\)_, and if_ \(type(p)=\text{structural join}\) _then_ \(|\bullet p|>1\) _and_ \(|p\bullet|=1\)_;_
* _if_ \(type(p)\in\{\text{message processor},\text{merge}\}\) _then_ \(|\bullet p|=1\) _and_ \(|p\bullet|=1\)_;_
* _if_ \(type(p)=\text{external call}\) _then_ \(|\bullet p|=2\) _and_ \(|p\bullet|=2\)_;_
* _the graph_ \((P,E)\) _is connected and acyclic._

In the definition, we think of \(P\) as a set of extended integration patterns that are connected by message channels, represented as edges in \(E\), as in a pipes-and-filters architecture. The function _type_ records what type of pattern each node represents. The first correctness condition says that an integration pattern has at least one source and one target, while the following conditions state that the cardinalities of the involved patterns coincide with the in- and out-degrees of the nodes representing them. The last condition states that the graph represents one integration pattern, not multiple unrelated ones, and that messages do not loop back to previous patterns.
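As an executable reading of Definition 1, the following sketch checks the six correctness conditions, building on the `pred`/`succ` helpers above; the encoding of types as strings is illustrative only:

```python
def iptg_correct(P, E, type_):
    """Check the six correctness conditions of Definition 1 (illustrative)."""
    din = {p: len(pred(p, E)) for p in P}
    dout = {p: len(succ(p, E)) for p in P}
    degree_ok = {  # per-type in/out degree constraints
        "start":             lambda i, o: i == 0,
        "end":               lambda i, o: o == 0,
        "fork":              lambda i, o: i == 1 and o > 1,
        "condition":         lambda i, o: i == 1 and o > 1,
        "structural join":   lambda i, o: i > 1 and o == 1,
        "message processor": lambda i, o: i == 1 and o == 1,
        "merge":             lambda i, o: i == 1 and o == 1,
        "external call":     lambda i, o: i == 2 and o == 2,
    }
    if not {"start", "end"} <= {type_[p] for p in P}:
        return False                      # needs a start and an end node
    if not all(degree_ok[type_[p]](din[p], dout[p]) for p in P):
        return False                      # cardinality conditions
    seen, todo = set(), [next(iter(P))]   # connected, ignoring direction
    while todo:
        p = todo.pop()
        if p not in seen:
            seen.add(p)
            todo += list(pred(p, E) | succ(p, E))
    if seen != P:
        return False
    removed, avail = set(), [p for p in P if din[p] == 0]  # acyclic (Kahn)
    while avail:
        p = avail.pop()
        removed.add(p)
        for q in succ(p, E):
            if pred(q, E) <= removed and q not in removed:
                avail.append(q)
    return removed == P

types = {"start": "start", "CE": "message processor",
         "MT": "message processor", "end": "end"}
assert iptg_correct(P, E, types)
```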
To represent the data flow, i.e., the basis for the optimizations and requirement REQ-3, the control flow has to be enhanced with (a) the data that is processed by each pattern, and (b) the data exchanged between the patterns in the composition. The data processed by each pattern (a) is described as a set of pattern characteristics:

**Definition 2** (Pattern characteristics).: _A pattern characteristic assignment for a graph \((P,E)\) is a function \(\text{char}:P\to 2^{PC}\), assigning to each vertex a subset of the set_

\[\begin{split}PC&=(\{MC\}\times\mathbb{N}^{2})\cup(\{ACC\}\times\{\text{ro},\text{rw}\})\cup(\{MG\}\times\mathbb{B})\cup\\&(\{CND\}\times 2^{\text{BExp}})\cup(\{PRG\}\times\text{Exp})\cup(\{S\}\times\text{Exp})\cup\\&(\{QRY\}\times 2^{\text{Exp}})\cup(\{ACTN\}\times 2^{\text{Exp}})\cup\\&(\{TM\}\times(\mathbb{Q}^{\geq 0}\times(\mathbb{Q}^{\geq 0}\cup\{\infty\})))\enspace,\end{split}\]

_where \(\mathbb{B}\) is the set of Booleans, \(\text{BExp}\) and \(\text{Exp}\) are the sets of Boolean and program expressions, respectively, and MC, ACC, MG, CND, PRG, S, QRY, ACTN, TM are distinct symbols._

The property and value domains in Definition 2 are based on the pattern characteristics identified in Sec. 3.1, and could of course be extended if future patterns required it. We briefly explain the intuition behind the characteristics: the characteristic \((MC,n,k)\) represents a message cardinality of \(n\):\(k\), \((ACC,x)\) represents the message access, depending on whether \(x\) is read-only ro or read-write rw, and \((MG,y)\) represents whether the pattern is message generating, depending on the Boolean \(y\). A \((CND,\{c_{1},\ldots,c_{n}\})\) represents the conditions \(c_{1},\ldots,c_{n}\) used by the pattern to route messages, and \((PRG,p)\) represents a program \(p\) used by the pattern (e.g., for message translation). The storage aspects are denoted by a schema \((S,p_{s})\) created by a program \(p_{s}\), a set of distinct queries \((QRY,\{q_{1},\ldots,q_{n}\})\), and a set of actions \((ACTN,\{a_{1},\ldots,a_{n}\})\) with distinct \(a_{1},\ldots,a_{n}\). Finally, \((TM,(\tau_{s},\tau_{e}))\) represents a timing window from \(\tau_{s}\) to \(\tau_{e}\).

**Example 1**.: _The characteristics of a content-based router CBR are \(\text{char}(CBR)=\{(MC,1{:}1),(ACC,\text{ro}),(MG,\text{false}),(CND,\{\text{cnd}_{1},\ldots,\text{cnd}_{n-1}\}),(PRG,\text{null}),(S,\text{null}),(QRY,\emptyset),(ACTN,\emptyset),(TM,(0,0))\}\), because of the workflow of the router: it receives exactly one message, then evaluates up to \(n-1\) routing conditions \(\text{cnd}_{1}\) up to \(\text{cnd}_{n-1}\) (one for each outgoing channel), until a condition matches. The original message is then rerouted read-only (in particular, the router is not message generating) on the selected output channel, or forwarded to the default channel if no condition matches. The router does not require programs, storage or time configurations._
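For illustration, the characteristics of Example 1 can be written down directly as data; the Python encoding below is hypothetical (tuples stand in for the sets of Definition 2, and \(n=3\) output channels are assumed):

```python
# char(CBR) as a set of (symbol, value) pairs, following Definition 2;
# with n = 3 output channels, hence two routing conditions.
char_CBR = {
    ("MC", (1, 1)),                      # message cardinality 1:1
    ("ACC", "ro"),                       # read-only message access
    ("MG", False),                       # not message generating
    ("CND", ("cnd_1", "cnd_2")),         # routing conditions
    ("PRG", None),                       # no program
    ("S", None),                         # no storage schema
    ("QRY", ()), ("ACTN", ()),           # no queries, no actions
    ("TM", (0, 0)),                      # no timing window
}
```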
The data exchange between the patterns (b) is based on input and output contracts (similar to the data parallelization contracts in [41]). These contracts specify the data exchange in terms of the message properties a pattern requires and produces:

**Definition 3** (Pattern contract).: _A pattern contract assignment for a graph \((P,E)\) is a function \(\text{contr}:P\to CPT\times 2^{EL}\), assigning to each vertex a function of type_

\[CPT=\{\text{signed},\text{encrypted},\text{encoded}\}\to\{\text{yes},\text{no},\text{any}\}\]

_and a subset of the set_

\[EL=(\{\text{HDR}\}\times 2^{D})\cup(\{\text{PL}\}\times 2^{D})\cup(\{\text{ATTCH}\}\times 2^{D})\]

_where \(D\) is a set of data elements (the concrete elements of \(D\) will vary with the application domain). We represent the function of type CPT by its graph, leaving out the attributes that are mapped to any, when convenient._

The set \(CPT\) in a contract represents integration concepts, while the set \(EL\) represents data elements and the structure of the message: its headers \((\text{HDR},H)\), its payload \((\text{PL},Y)\) and its attachments \((\text{ATTCH},A)\). Each pattern has an inbound and an outbound pattern contract, describing the format of the data it is able to receive and send, respectively -- the role of pattern contracts is to make sure that adjacent inbound and outbound contracts match.

**Example 2**.: _A content-based router is not able to process encrypted messages. Recall that its pattern characteristics included a collection of routing conditions: these might require read-only access to message elements such as certain headers \(h_{1}\) or payload elements \(e_{1}\), \(e_{2}\). Hence the input contract for a router mentioning these message elements is_

\[inContr(CBR)=(\{(\text{encrypted},\text{no})\},\{(\text{HDR},\{h_{1}\}),(\text{PL},\{e_{1},e_{2}\})\})\enspace.\]

_Since the router forwards the original message, the output contract is the same as the input contract._

**Definition 4**.: _Let \((C,E)\in CPT\times 2^{EL}\) be a pattern contract, and \(X\subseteq CPT\times 2^{EL}\) a set of pattern contracts. Write \(X_{CPT}=\{C^{\prime}\mid(\exists E^{\prime})\,(C^{\prime},E^{\prime})\in X\}\) and \(X_{EL}=\{E^{\prime}\mid(\exists C^{\prime})\,(C^{\prime},E^{\prime})\in X\}\). We say that \((C,E)\) matches \(X\), in symbols \(\mathsf{match}((C,E),X)\), if the following condition holds:_

\[(\forall x)\big(C(x)\neq\text{any}\implies(\forall C^{\prime}\in X_{CPT})(C^{\prime}(x)=C(x)\lor C^{\prime}(x)=\text{any})\big)\ \wedge\]
\[(\forall(m,Z)\in E)\Big(Z\subseteq\bigcup_{E^{\prime}\in X_{EL},\,(m,Z^{\prime})\in E^{\prime}}Z^{\prime}\Big)\enspace.\]

\(\Box\)

We are interested in an inbound contract \(K_{\text{in}}\) matching the outbound contracts \(K_{1},\ldots,K_{n}\) of its predecessors. In words, this is the case if (i) for all integration concepts that are important to \(K_{\text{in}}\), all contracts \(K_{i}\) either agree, or at least one of \(K_{\text{in}}\) or \(K_{i}\) accepts any value (_concept correctness_); and (ii) together, \(K_{1},\ldots,K_{n}\) supply all the message elements that \(K_{\text{in}}\) needs (_data element correctness_). Since pattern contracts can refer to arbitrary message elements, a formalization of an integration pattern can be quite precise. On the other hand, unless care is taken, the formalization can easily become specific to a particular pattern composition. In practice, it is often possible to restrict attention to a small number of important message elements (see Example 3 below), which makes the formalization manageable.
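Definition 4 is directly executable. The sketch below is an illustrative implementation, with CPT functions encoded as dicts that default to any and the element sets as lists of (kind, elements) pairs; it checks the router contract from Example 2 against one matching predecessor:

```python
CPT_KEYS = ("signed", "encrypted", "encoded")

def match(contract, X):
    """match((C, E), X) from Definition 4."""
    C, E = contract
    for key in CPT_KEYS:  # (i) concept correctness
        c = C.get(key, "any")
        if c == "any":
            continue
        for (C2, _) in X:
            if C2.get(key, "any") not in (c, "any"):
                return False
    for (kind, needed) in E:  # (ii) data element correctness
        supplied = set()
        for (_, E2) in X:
            for (kind2, Z) in E2:
                if kind2 == kind:
                    supplied |= set(Z)
        if not set(needed) <= supplied:
            return False
    return True

# Inbound contract of the router from Example 2, against one predecessor
# that emits unencrypted messages carrying h1, e1 and e2.
in_CBR = ({"encrypted": "no"}, [("HDR", {"h1"}), ("PL", {"e1", "e2"})])
out_pred = ({"encrypted": "no"}, [("HDR", {"h1"}), ("PL", {"e1", "e2"})])
assert match(in_CBR, [out_pred])
```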
Putting everything together, we formalize pattern compositions as integration pattern typed graphs with pattern characteristics and inbound and outbound pattern contracts for each pattern:

**Definition 5**.: _An integration pattern contract graph (IPCG) is a tuple_

\[(P,E,type,char,inContr,outContr)\]

_where \((P,E,type)\) is an IPTG, \(char:P\to 2^{PC}\) is a pattern characteristics assignment, and \(inContr:\prod_{p\in P}(CPT\times 2^{EL})^{|\bullet p|}\) and \(outContr:\prod_{p\in P}(CPT\times 2^{EL})^{|p\bullet|}\) are pattern contract assignments -- one for each incoming and outgoing edge of the pattern, respectively -- called the inbound and outbound contract assignment. It is correct if the underlying IPTG \((P,E,type)\) is correct, and each inbound contract matches the outbound contracts of the pattern's predecessors, i.e. if for every \(p\in P\)_

\[type(p)=start\vee\mathsf{match}(inContr(p),\{outContr(p^{\prime})\mid p^{\prime}\in\bullet p\})\enspace.\]

_Two IPCGs are_ isomorphic _if there is a bijective function between their patterns that preserves edges, types, characteristics and contracts. \(\Box\)_

**Example 3**.: _Figures 7(a) and 7(b) show IPCGs representing an excerpt of the motivating example from the introduction. Figure 7(a) represents the IPCG of the original scenario with a focus on the contracts, and Fig. 7(b) denotes an already improved composition, showing the characteristics and giving an indication of the pattern latency. In Fig. 7(a), the input contract \(inContr(CE)\) of the content enricher pattern \(CE\) requires a non-encrypted message and a payload element DOCNUM. The content enricher makes a query to get an application ID \(\mathsf{AppID}\) from an external system, and appends it to the message header. Hence the output contract \(outContr(CE)\) contains \((HDR,\{\mathsf{AppID}\})\). The content enricher then emits a message that is neither encrypted nor signed. A subsequent message translator MT requires the same message payload, but does not care about the appended header. It adds another payload element \(\mathsf{RcvID}\) to the message. Comparing inbound and outbound pattern contracts for adjacent patterns, we see that this is a correct IPCG._

_One improvement of this composition is depicted in Fig. 7(b), where the independent patterns \(CE\) and \(MT\) have been parallelized. To achieve this, a read-only structural fork with channel cardinality \(1\):\(n\) in the form of a multicast MC has been added. The inbound and outbound contracts of MC are adapted to fit into the composition. After the concurrent execution of \(CE\) and \(MT\), a join router JR brings the messages back together again and feeds the result into an aggregator AGG that restores the format that ADPT\({}_{\tau}\) expects. We see that the resulting IPCG is still correct, so this is a sound optimization. \(\blacksquare\)_
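Combining the previous two sketches yields an executable version of the correctness condition of Definition 5 (again illustrative, and simplified to one inbound and one outbound contract per pattern):

```python
def ipcg_correct(P, E, type_, in_contr, out_contr):
    """Definition 5: structurally correct IPTG plus matching contracts."""
    if not iptg_correct(P, E, type_):
        return False
    return all(
        type_[p] == "start"
        or match(in_contr[p], [out_contr[q] for q in pred(p, E)])
        for p in P
    )
```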
### Abstract Cost Model

In order to decide whether an optimization is an improvement or not, we want to associate abstract costs to integration patterns. We do this on the pattern level, similar to the work on data integration operators [42]. The cost of the overall integration pattern graph can then be computed as the sum of the costs of its constituent patterns. Costs are parametrized by the cardinality of the data inputs \(|d_{in_{i}}|\) (\(1\leq i\leq n\), if the pattern has in-degree \(n\)), the data outputs \(|d_{out_{j}}|\) (\(1\leq j\leq m\), if the pattern has out-degree \(m\)), and the external resource data sets \(|d_{r}|\). The costs can also refer to the pattern characteristics.

**Definition 6** (Cost model).: _A cost assignment for an IPCG \(G=(P,E,type,char,inContr,outContr)\) is a function \(cost(p):\mathbb{N}^{n}\times\mathbb{N}^{k}\times\mathbb{N}^{r}\to\mathbb{Q}\) for each \(p\in P\), where \(p\) has in-degree \(n\), out-degree \(k\) and \(r\) external connections. The cost \(cost(G):\mathbb{N}^{N}\times\mathbb{N}^{K}\times\mathbb{N}^{R}\to\mathbb{Q}\) of the IPCG \(G\), where \(N\) is the sum of the in-degrees of its patterns, \(K\) the sum of their out-degrees, and \(R\) the sum of their external connections, is defined to be the sum of the costs of its constituent patterns:_

\[cost(G)(d_{in},d_{out},d_{r})=\sum_{p\in P}cost(p)(|d_{in}(p)|,|d_{out}(p)|,|d_{r}(p)|)\enspace,\]

_where we suggestively write \(|d_{in}(p)|\) for the projection from the tuple \(d_{in}\) corresponding to \(p\), and similarly for \(|d_{out}(p)|\) and \(|d_{r}(p)|\). \(\blacksquare\)_

We have defined the abstract costs of the patterns discussed in this work in Tab. 4 -- these will be used in the subsequent evaluation. We now explain the reasoning behind them. Routing patterns such as content-based routers, message filters and aggregators mostly operate on the input message, and thus have an abstract cost related to its element cardinality \(|d_{in}|\). For example, the abstract cost of the CBR is \(cost(CBR)=\frac{(n-1)\times|d_{in}|}{2}\), since it evaluates on average \(\frac{n-1}{2}\) routing conditions on the input message. More complex routing patterns such as aggregators evaluate correlation and completion conditions, as well as an aggregation function, on the input message, and also on sequences of messages of a certain length from an external resource. Hence the cost of an aggregator is \(cost(AGG)=2\times|d_{in}|+\frac{|d_{in}|+|d_{out}|}{avg(len(seq))}\), where \(len(seq)\) denotes the length of a Message Sequence [2], as for example used by the aggregator pattern. In contrast, message transformation patterns like content filters and enrichers mainly construct an output message, hence their costs are determined by the output cardinality \(|d_{out}|\). For example, a content enricher creates a request message from the input message with cost \(|d_{in}|\), conducts an optional resource query \(|d_{r}|\), and creates and enriches the response with cost \(|d_{out}|\). Finally, the cost of message creation patterns such as external calls, receivers, and senders arises from the costs for transport, protocol handling, and format conversion, as well as decompression. Hence the cost depends on the element cardinalities of the input and output messages \(|d_{in}|\), \(|d_{out}|\).

**Example 4**.: _We return to the claimed improved composition in Example 3. The latency of the composition \(G_{1}\) in Fig. 7(a), calculated from the constituent pattern latencies, is \(cost(G_{1})=t_{CE}+t_{MT}\), where \(t_{p}\) denotes the latency of pattern \(p\). The latency of the parallelized composition \(G_{2}\) in Fig. 7(b) is \(cost(G_{2})=\max(t_{CE},t_{MT})+t_{MC}+t_{JR}+t_{AGG}\). Obviously, it is only beneficial to switch if \(cost(G_{2})<cost(G_{1})\), and this condition depends on the concrete values involved. At the same time, the model complexity increases by three nodes and the corresponding edges._
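The break-even decision of Example 4 then amounts to a one-line comparison. In the sketch below, the per-pattern latencies are made-up placeholder values; only the two cost formulas come from the example:

```python
# Hypothetical per-pattern latencies in milliseconds.
t = {"CE": 12.0, "MT": 9.0, "MC": 1.5, "JR": 1.0, "AGG": 4.0}

cost_G1 = t["CE"] + t["MT"]                                     # sequential
cost_G2 = max(t["CE"], t["MT"]) + t["MC"] + t["JR"] + t["AGG"]  # parallel

print("parallelize" if cost_G2 < cost_G1 else "keep sequential",
      cost_G1, cost_G2)   # here: 21.0 vs 18.5, so parallelize
```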
\begin{table}
\begin{tabular}{l l l}
\hline
Pattern \(p\) & Abstract cost \(cost(p)\) & Factors \\
\hline
Content-based Router [2] & \(\frac{(n-1)\times|d_{in}|}{2}\) & \(n\) = number of channel conditions, half of them evaluated on average \\
Message Filter [2] & \(|d_{in}|\) & input data condition \(|d_{in}|\) \\
Aggregator [2] & \(2\times|d_{in}|+\frac{|d_{in}|+|d_{out}|}{avg(len(seq))}\) & correlation and completion conditions \(|d_{in}|\), aggregation function \(\frac{|d_{in}|+|d_{out}|}{avg(len(seq))}\) with sequence length \(len(seq)\geq 2\), and (transacted) resource \(d_{r}\) \\
Claim Check [2] & \(2\times|d_{r}|\) & resource insert and get \(|d_{r}|\) \\
Splitter [2] & \(|d_{out}|\) & output data condition \(|d_{out}|\) \\
Multicast, Join Router [5] & \(\sum_{i=1}^{n}cost(\text{procunit}_{i})\) & costs of processing units \(cost(\text{procunit}_{i})\), e.g., threading in software, for \(n\) channels \\
\hline
Content Filter [2] & \(|d_{out}|\) & output data creation \(|d_{out}|\) \\
Mapping [2] & \(|d_{in}|+|d_{out}|\) & output data creation \(|d_{out}|\) from input data \(|d_{in}|\) \\
Content Enricher [2] & \(|d_{in}|+|d_{r}|+|d_{out}|\) & request message creation on \(|d_{in}|\), resource query \(|d_{r}|\), response data enrich \(|d_{out}|\) \\
\hline
External Call [5] & \(|d_{out}|+|d_{in}|\) & request \(|d_{out}|\) and reply data \(|d_{in}|\) \\
Receive [2] & \(|d_{in}|\) & input data \(|d_{in}|\) \\
Send [2] & \(|d_{out}|\) & output data \(|d_{out}|\) \\
\hline
\end{tabular}
\end{table}
Table 4: Abstract costs of relevant patterns

Figure 7: An IPCG of an excerpt of the motivating example

## 5 A Semantics Using Timed DB-nets

Integration pattern graphs model the structural composition of integration patterns, but not their dynamics, i.e. how data actually flows through the system. To model this, we use timed db-nets [6], an extension of db-nets [39] with an explicit notion of time (addressing REQ-2). The formalism of db-nets is in turn a refinement of colored Petri nets [43] with primitives for the net to query and update persistent data stores (addressing REQ-4). Exceptions are built into the framework in the form of rollbacks (addressing REQ-5). To make the definition compositional, we extend the notion of timed db-nets to timed db-nets _with boundaries_, which can be reasoned about separately and then plugged together to form larger timed db-nets.

### Open Timed DB-nets

In this section, we formally define the mathematical structure we use to give a runtime semantics to pattern graphs. We first recall the definition of timed db-nets, and then extend them to open timed db-nets, in order to make the definition compositional.

#### 5.1.1 Ordinary Timed DB-nets

A timed db-net has three layers: a persistence layer describing the underlying database of the net, a logic layer describing the queries that can be made of the persistence layer, and a control layer describing how tokens of data flow through the net, executing queries. See Ritter et al. [6] for motivation and a more gentle introduction.
**Definition 7** (timed db-net [6]).: _A timed db-net is a tuple \((\mathfrak{D},\mathcal{P},\mathcal{L},\mathcal{N},\tau)\) where:_

* \(\mathfrak{D}\) _is a type domain -- a finite set of data types, each of the form_ \(D=(\Delta_{D},\Gamma_{D})\)_, where_ \(\Delta_{D}\) _is a value domain, and_ \(\Gamma_{D}\) _is a finite set of domain-specific predicate symbols._
* \(\mathcal{P}\) _is a_ \(\mathfrak{D}\)_-typed_ persistence layer_, i.e., a pair_ \((\mathcal{R},E)\)_, where_ \(\mathcal{R}\) _is a_ \(\mathfrak{D}\)_-typed database schema, and_ \(E\) _is a finite set of first-order_ \(FO(\mathfrak{D})\) _constraints over_ \(\mathcal{R}\)_._
* \(\mathcal{L}\) _is a_ \(\mathfrak{D}\)_-typed_ data logic layer _over_ \(\mathcal{P}\)_, i.e., a pair_ \((Q,A)\)_, where_ \(Q\) _is a finite set of_ \(FO(\mathfrak{D})\) _queries over_ \(\mathcal{P}\)_, and_ \(A\) _is a finite set of actions over_ \(\mathcal{P}\)_._
* \(\mathcal{N}\) _is a_ \(\mathfrak{D}\)_-typed_ control layer _over_ \(\mathcal{L}\)_, i.e., a tuple_ \((P,T,F_{in},F_{out},F_{rb},\mathsf{color},\mathsf{query},\mathsf{guard},\mathsf{action})\)_, where:_
  1. \(P=P_{c}\uplus P_{v}\) _is a finite set of places, partitioned into so-called control places_ \(P_{c}\) _and view places_ \(P_{v}\)_,_
  2. \(T\) _is a finite set of transitions,_
  3. \(F_{in}\) _is an input flow from_ \(P\) _to_ \(T\)_,_
  4. \(F_{out}\) _and_ \(F_{rb}\) _are respectively output and roll-back flows from_ \(T\) _to_ \(P_{c}\)_,_
  5. \(\mathsf{color}\) _is a color assignment over_ \(P\) _(mapping_ \(P\) _to a Cartesian product of data types),_
  6. \(\mathsf{query}\) _is a query assignment from_ \(P_{v}\) _to_ \(Q\) _(mapping the results of_ \(Q\) _as tokens of_ \(P_{v}\)_),_
  7. \(\mathsf{guard}\) _is a transition guard assignment over_ \(T\) _(mapping each transition to a formula over its input inscriptions), and_
  8. \(\mathsf{action}\) _is an action assignment from_ \(T\) _to_ \(A\) _(mapping some transitions to actions triggering updates over the persistence layer)._
* \(\tau:T\to\mathbb{Q}^{\geq 0}\times(\mathbb{Q}^{\geq 0}\cup\{\infty\})\) _is a timed transition guard, mapping each transition_ \(t\in T\) _to a pair of values_ \(\tau(t)=(v_{1},v_{2})\)_, where_ \(v_{1}\) _is a non-negative rational number, and_ \(v_{2}\) _is a non-negative rational number or the special constant_ \(\infty\)_._ \(\Box\)

We adopt the following graphical conventions for drawing the control layer of a timed db-net: places are depicted as round nodes -- view places are additionally marked with a database icon, with their queries written in green -- and transitions as rectangles. Rollback arcs are depicted with an "x" at the transition end. Actions are written in blue, and guards are written in square brackets next to the transition. For a timed transition guard \(\tau\) and a transition \(t\), we adopt the following conventions: _(i)_ if \(\tau(t)=(0,\infty)\), then no temporal label is shown for \(t\) (this is the default choice for \(\tau(t)\)); _(ii)_ if \(\tau(t)\) is of the form \((v,v)\), we attach the label "\(@v\)" to \(t\); _(iii)_ if \(\tau(t)\) is of the form \((v_{1},v_{2})\) with \(v_{1}\neq v_{2}\), we attach the label "\(@\langle v_{1},v_{2}\rangle\)" to \(t\).

**Example 5**.: _Fig. 8 shows a timed db-net realisation of an aggregator. The intention is that messages arrive at the place \(ch_{in}\)._
_The database is then queried using the \(Q_{mys}\) query via a view place, and if it already contains the message, it is updated via the UpdateSeq action at transition \(T_{1}\). If it does not contain the message, the CreateSeq action is triggered at \(T_{2}\) instead, and the sequence number gets passed to the \(ch_{timer}\) place, whose output transition \(T_{3}\) will be enabled after 30 seconds, triggering the \(Th_{out}\)Set action. This will enable transition \(T_{4}\), with the effect that a token containing the data from the completed sequence, queried via \(Q_{sqs}\) from the database, will move into \(ch_{out}\)._

#### 5.1.2 Open Timed DB-nets

We now describe timed db-nets that are open, in the sense that they have "ports" for communicating with the outside world: the idea is that tokens can be received and sent on these ports, similar to the existing literature on open Petri nets [44; 45; 46].

**Definition 8** (Open timed db-net).: _An open timed db-net is a pair \(A=(N_{A},B_{A})\), where \(N_{A}=(\mathfrak{D},\mathcal{P},\mathcal{L},\mathcal{N},\tau)\) is a timed db-net with control layer_

\[\mathcal{N}=(P_{c}\uplus P_{v},T,F_{in},F_{out},F_{rb},\mathsf{color},\mathsf{query},\mathsf{guard},\mathsf{action})\]

_and \(B_{A}=(I_{A},O_{A})\in\mathsf{List}\,P_{c}\times\mathsf{List}\,P_{c}\) are lists of control places, called the input and output boundaries respectively, such that \(F_{in}(o,t)=\emptyset\) for every \(o\in O_{A}\), and \(F_{out}(t,i)=F_{rb}(t,i)=\emptyset\) for every \(i\in I_{A}\). The input (output) boundary configuration of \(A\) is given by the corresponding list of colours of the input (output) boundary places of \(A\), and we write_

\[N_{A}:\mathsf{color}(I_{A})\to\mathsf{color}(O_{A})\]

_(where \(\mathsf{color}(X)=[\mathsf{color}(x)\mid x\in X]\)) to indicate that \(A=(N_{A},(I_{A},O_{A}))\) is an open timed db-net with the given boundary configurations._

Note in particular that an open timed db-net with empty boundaries is by definition an ordinary timed db-net.

**Example 6**.: _Fig. 9 shows an open timed db-net realisation of a join router (joining messages containing integer data). The input boundary consists of the places \(ch_{in_{1}}\) and \(ch_{in_{2}}\), and the output boundary of the place \(ch_{out}\). We draw the input boundary using dashed places on the left of the image, and the output similarly on the right (in general, a place can be part of both the input and the output boundary, but this will not occur in any nets constructed from pattern graphs)._

_Similarly, the timed db-net realisation of an aggregator in Fig. 8 from Example 5 can be made into an open timed db-net by declaring the boundaries to be \(ch_{in}\) and \(ch_{out}\) respectively._

### Execution Semantics for Open Timed DB-nets

We define the execution semantics of a given open timed db-net as a labelled transition system, where the states are snapshots of the db-net and the labelled transitions are given by firings, as well as by transitions that create and consume tokens at the input and output boundary, respectively. A snapshot of an open timed db-net \(\mathcal{B}\) is a snapshot \((\mathcal{I},m)\) of \(\mathcal{B}\) considered as an ordinary timed db-net (i.e. we forget about the boundaries), which in turn consists of a compliant database instance \(\mathcal{I}\) and a marking \(m\); see [6] for the precise definitions.
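Before the formal construction below, the boundary interactions can be pictured operationally as follows. This is a hypothetical sketch, with markings encoded as Counters of (place, token) pairs and the environment's possible inputs given by an assumed `sample_tokens` map; ordinary firings are deliberately omitted:

```python
from collections import Counter

def boundary_steps(marking, in_boundary, out_boundary, sample_tokens):
    """One-step boundary interactions of an open net: the environment may
    put a token on an input boundary place or take one from an output
    boundary place."""
    steps = []
    for p in in_boundary:                       # token created by environment
        for tok in sample_tokens.get(p, []):
            m2 = marking.copy()
            m2[(p, tok)] += 1
            steps.append((("in", p, tok), m2))
    for (p, tok), n in marking.items():         # token consumed by environment
        if p in out_boundary and n > 0:
            m2 = marking.copy()
            m2[(p, tok)] -= 1
            steps.append((("out", p, tok), m2))
    return steps

m0 = Counter({("ch_out", 42): 1})
print(boundary_steps(m0, ["ch_in"], ["ch_out"], {"ch_in": [7]}))
```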
Given an open timed db-net \(\mathcal{B}\) with boundaries \((I_{\mathcal{B}},O_{\mathcal{B}})\), and a \(\mathcal{B}\)-snapshot \(s_{0}\) (the _initial \(\mathcal{B}\)-snapshot_), we construct a labelled transition system \(\Gamma^{\mathcal{B}}_{s_{0}}=(S,s_{0},\rightarrow)\) as follows: \(S\) is the infinite set of \(\mathcal{B}\)-snapshots, and \(\rightarrow\) is the union of the following labelled transitions: _(i)_ \(s\xrightarrow{t}s^{\prime}\) if the snapshot \(s^{\prime}\) is obtained from \(s\) by firing an enabled transition \(t\), exactly as for ordinary timed db-nets [6]; _(ii)_ \(s\xrightarrow{in(i,v)}s^{\prime}\) if \(s^{\prime}\) is obtained from \(s\) by adding a token with value \(v\) of the appropriate colour to an input boundary place \(i\in I_{\mathcal{B}}\); and _(iii)_ \(s\xrightarrow{out(o,v)}s^{\prime}\) if \(s^{\prime}\) is obtained from \(s\) by removing a token with value \(v\) from an output boundary place \(o\in O_{\mathcal{B}}\).

### Composition of Open Timed DB-nets

It is straightforward to compose timed db-nets in parallel, i.e. in such a way that there is no interaction between the component nets. Given open timed db-nets

\[\mathcal{A}:[c_{1},\ldots,c_{n}]\rightarrow[d_{1},\ldots,d_{m}]\]
\[\mathcal{B}:[c^{\prime}_{1},\ldots,c^{\prime}_{n^{\prime}}]\rightarrow[d^{\prime}_{1},\ldots,d^{\prime}_{m^{\prime}}]\]

with the same type domains, persistence layers and data logic layers1, we define an open timed db-net

Footnote 1: To compose timed db-nets with different underlying layers, we first rename any unintended clashing names, and then take the union of the layers and embed the nets into their now common layers.

\[\mathcal{A}\otimes\mathcal{B}:[c_{1},\ldots,c_{n},c^{\prime}_{1},\ldots,c^{\prime}_{n^{\prime}}]\rightarrow[d_{1},\ldots,d_{m},d^{\prime}_{1},\ldots,d^{\prime}_{m^{\prime}}]\]

again with the same type domain, persistence layer and data logic layer, but whose places and transitions are the disjoint unions of the places and transitions in \(\mathcal{A}\) and \(\mathcal{B}\) respectively. This gives a tensor product or parallel composition of nets, with unit \(I:[]\rightarrow[]\) the empty timed db-net (necessarily with empty boundary). Visually, we are stacking the control layers of \(\mathcal{A}\) and \(\mathcal{B}\) next to each other.

When the boundaries are compatible, i.e., when one net's output boundary configuration is the same as the other net's input boundary configuration, we can also define a sequential composition of nets. This is achieved by "gluing" the two nets together along their common boundary, formally expressed by quotienting the set of places in the construction of the composite net.

**Definition 9** (Sequential composition of open nets).: _Let \(\mathcal{A}:\mathsf{color}(I_{A})\to\mathsf{color}(O_{A})\) and \(\mathcal{B}:\mathsf{color}(I_{B})\to\mathsf{color}(O_{B})\) be open timed db-nets with the same type domains, persistence layers and data logic layers, and such that \(\mathsf{color}(O_{A})=\mathsf{color}(I_{B})\)._
_Write \(O_{A}=[o_{1},\ldots,o_{n}]\) and \(I_{B}=[i_{1},\ldots,i_{n}]\) -- note that \(O_{A}\) and \(I_{B}\) must have the same length, since \(\mathsf{color}(O_{A})=\mathsf{color}(I_{B})\). We define the composition_

\[\mathcal{A}\mathbin{\sharp}\mathcal{B}:\mathsf{color}(I_{A})\to\mathsf{color}(O_{B})\]

_to again have the same type domain, persistence layer and data logic layer as \(\mathcal{A}\) and \(\mathcal{B}\), and a control layer obtained by gluing the output boundary of \(\mathcal{A}\) to the input boundary of \(\mathcal{B}\): its places are the disjoint union of the places of \(\mathcal{A}\) and \(\mathcal{B}\), quotiented by identifying each output boundary place \(o_{k}\in O_{A}\) with the corresponding input boundary place \(i_{k}\in I_{B}\); its transitions are the disjoint union of the transitions of \(\mathcal{A}\) and \(\mathcal{B}\); and its flows, color, query, guard, action and timed guard assignments are inherited from \(\mathcal{A}\) and \(\mathcal{B}\). The boundaries of \(\mathcal{A}\mathbin{\sharp}\mathcal{B}\) are \((I_{A},O_{B})\)._

### CPN Tools Prototype

We prototypically implemented our formalism so as to experimentally test the correctness of pattern compositions via simulation, following the idea described in [6, Sect. 5]. We have chosen CPN Tools v4.0.1\({}^{2}\) for modeling and simulation. Compared to other Petri net tools like Renew v2.5\({}^{3}\), CPN Tools supports third-party extensions that can address the persistence and data logic layers of our formalism.
Moreover, CPN Tools handles sophisticated simulation tasks over models that use the deployed extensions. To support db-nets, our extension\({}^{4}\) adds support for defining view places together with corresponding SQL queries as well as actions, and realizes the full execution semantics of db-nets using Java and a PostgreSQL database.

Footnote 2: CPN Tools, visited 5/2023: [https://cpntools.org/](https://cpntools.org/)

Footnote 3: Renew, visited 5/2023: [http://www.renew.de/](http://www.renew.de/)

Footnote 4: The CPN Tools extension for timed db-nets with boundaries and the pattern models are available for download, visited 5/2023: [https://github.com/dritter-hd/db-net-eip-patterns](https://github.com/dritter-hd/db-net-eip-patterns).

## 6 Interpreting IPCGs as Open Timed DB-nets

In this section we define the interpretation of integration pattern contract graphs as timed db-nets with boundaries.

### Interpretation of Single Patterns

We assign an open timed db-net \(\llbracket p\rrbracket\) to every node \(p\) in an integration pattern contract graph. Recall that an integration pattern contract graph has input and output contracts \(inContr:\prod_{p\in P}(CPT\times 2^{EL})^{|\bullet p|}\) and \(outContr:\prod_{p\in P}(CPT\times 2^{EL})^{|p\bullet|}\), respectively. If the cardinality of \(p\) is \(k:m\), then the open timed db-net will be of the form

\[\llbracket p\rrbracket:\bigotimes_{i=1}^{k}inContr_{i}(p)_{EL}\to\bigotimes_{j=1}^{m}outContr_{j}(p)_{EL}\]

This incorporates the data elements of the input and output contracts into the boundary of the timed db-net, since these are essential for the dataflow of the net. In Sec. 6.2.1, we will also incorporate the remaining concepts from the contracts, such as signatures, encryption and encodings, into the interpretation. The shape of the timed db-net \(\llbracket p\rrbracket\) depends on \(type(p)\) only, i.e., we give one interpretation for each pattern type:

**Start and end pattern types.** We interpret a start pattern \(p_{start}\) as the open timed db-net \(\llbracket p_{start}\rrbracket:I\to\mathsf{color}_{out}(p_{start})\) shown in Fig. 11(a). Similarly, Fig. 11(b) shows the interpretation of an end pattern \(p_{end}\) as an open timed db-net \(\llbracket p_{end}\rrbracket:\mathsf{color}_{in}(p_{end})\to I\).

**Non-conditional fork patterns.** We interpret a non-conditional fork pattern \(p_{fork}\) with cardinality \(1:n\) as the open timed db-net \(\llbracket p_{fork}\rrbracket:\mathsf{color}_{in}(p_{fork})\to\bigotimes_{j=1}^{n}\mathsf{color}_{out}(p_{fork})_{j}\) shown in Fig. 12.

**Non-conditional join patterns.** We interpret a non-conditional join pattern \(p_{join}\) with cardinality \(m:1\) as the open timed db-net \(\llbracket p_{join}\rrbracket:\bigotimes_{j=1}^{m}\mathsf{color}_{in}(p_{join})_{j}\to\mathsf{color}_{out}(p_{join})\) shown in Fig. 13.

**Conditional fork patterns.** We interpret a conditional fork pattern \(p_{fork}\) of cardinality \(1:n\) with conditions \(cond_{1},\ldots,cond_{n-1}\) in its pattern characteristic assignment as the open timed db-net \(\llbracket p_{fork}\rrbracket:\mathsf{color}_{in}(p_{fork})\to\bigotimes_{j=1}^{n}\mathsf{color}_{out}(p_{fork})_{j}\) shown in Fig. 14.
Figure 10: Sequential composition of timed db-net realizations of a join router and an aggregator

Figure 11: Start and end patterns

Note that the net is constructed so that the conditions are evaluated in order -- the transition corresponding to condition \(k\) will only fire if condition \(k\) is true and conditions \(1,\ldots,k-1\) are false. The last transition will fire if all conditions evaluate to false.

**Message processor patterns.** We interpret a message processor pattern \(p_{mp}\) with storage schema \(S\), actions \(A\), query \(Q\), condition \(cond\), time \(\tau\) and program \(f\) in its pattern characteristic assignment as the open timed db-net \(\llbracket p_{mp}\rrbracket:\mathsf{color}_{in}(p_{mp})\to\mathsf{color}_{out}(p_{mp})\) shown in Fig. 15. If the condition \(cond\) and the timing window \(\tau\) are satisfied, the incoming message possibly gets enriched by data from the query \(Q\), and the action \(A\) might be triggered, before the program \(f\) transforms the data into possibly multiple messages, collected in a list. These get emitted one by one. Of course, not all features need to be used by all message processor patterns (e.g., no storage is needed for a control-time delayer). Merge and external call patterns are interpreted analogously, as shown in Fig. 16 and Fig. 17.

Figure 15: Interpretation of a message processor pattern.

Figure 16: Interpretation of a merge pattern

Figure 17: Interpretation of an external call pattern.

#### 6.2.1 Pattern Contract Construction

To incorporate the integration concepts from the contracts (signatures, encryption, encodings) into the interpretation, each input place \(ch_{in}\) of a pattern's timed db-net is guarded according to the input contract by creating a new place \(ch^{\prime}_{in}\) and a new transition from \(ch^{\prime}_{in}\) to \(ch_{in}\), which conditionally forwards tokens whose properties match the contract. The new place \(ch^{\prime}_{in}\) replaces \(ch_{in}\) as an input place. Dually, for each output place \(ch_{out}\) we create a new place \(ch^{\prime}_{out}\) and new transitions from \(ch_{out}\) to \(ch^{\prime}_{out}\) which ensure that all tokens satisfy the output contract, via a new transition for each combination of output contract values. The new place \(ch^{\prime}_{out}\) replaces \(ch_{out}\) as an output place. Formally, the construction is as follows:

**Definition 10**.: _Let \(\mathcal{X}:\otimes_{i<m}c_{i}\to\otimes_{j<n}c^{\prime}_{j}\) be an open timed db-net, and \(\vec{C}=IC_{1},\ldots,IC_{m},OC_{1},\ldots,OC_{n}\) be a list of integration concepts with \(IC_{i},OC_{j}\in CPT\). Define the open timed db-net_

\[\mathcal{X}_{CPT(\vec{C})}:\otimes_{i<m}(c_{i}\times\{yes,no\}^{3})\to\otimes_{j<n}(c^{\prime}_{j}\times\{yes,no\}^{3})\]

_with the same type domains, persistence layers and data logic layers as \(\mathcal{X}\), but with control layer_

\[\mathcal{N}^{\prime}=(P^{\prime},T^{\prime},F^{\prime}_{in},F^{\prime}_{out},F^{\prime}_{rb},\mathsf{color}^{\prime},\mathsf{query},\mathsf{guard}^{\prime},\mathsf{action}^{\prime})\]

_with_

\[P^{\prime}=P\uplus\{ch^{\prime}_{in,1},\ldots,ch^{\prime}_{in,m},ch^{\prime}_{out,1},\ldots,ch^{\prime}_{out,n}\}\]
\[T^{\prime}=T\uplus T_{in}\uplus T_{out}\]

_where \(T_{in}=\{t_{in,1},\ldots,t_{in,m}\}\) and_

\[T_{out}=\{t_{out,j,\vec{b}}\mid 1\leq j\leq n,\ \vec{b}\in\{yes,no\}^{3}\}\]
\[F^{\prime}_{in}(x)=\begin{cases}F_{in}(p,t)&\text{if }x=(p,t)\in P\times T\\ \{(y,y_{sign},y_{encr},y_{enc})\}&\text{if }x=(ch^{\prime}_{in,i},t_{in,i})\\ \{y\}&\text{if }x=(ch_{out,j},t_{out,j,\vec{b}})\\ \emptyset&\text{otherwise}\end{cases}\]

\[F^{\prime}_{out}(x)=\begin{cases}F_{out}(t,p)&\text{if }x=(t,p)\in T\times P_{c}\\ \{y\}&\text{if }x=(t_{in,i},ch_{in,i})\\ \{(y,b_{sign},b_{encr},b_{enc})\}&\text{if }x=(t_{out,j,\vec{b}},ch^{\prime}_{out,j})\text{ and }\vec{b}=(b_{sign},b_{encr},b_{enc})\\ &\quad\text{with }OC_{j}(x)\in\{b_{x},any\}\text{ for each concept }x\\ \emptyset&\text{otherwise}\end{cases}\]

\[F^{\prime}_{rb}(x)=\begin{cases}F_{rb}(t,p)&\text{if }x=(t,p)\in T\times P_{c}\\ \emptyset&\text{otherwise}\end{cases}\]

_Moreover, \(\mathsf{color}^{\prime}\) extends \(\mathsf{color}\) with \(\mathsf{color}^{\prime}(ch^{\prime}_{in,i})=c_{i}\times\{yes,no\}^{3}\) and \(\mathsf{color}^{\prime}(ch^{\prime}_{out,j})=c^{\prime}_{j}\times\{yes,no\}^{3}\), and_

\[\mathsf{guard}^{\prime}(t)=\mathsf{guard}(t)\text{ for }t\in T\]
\[\mathsf{guard}^{\prime}(t_{in,i})=\bigwedge_{\{x\,\mid\,IC_{i}(x)\neq any\}}y_{x}=IC_{i}(x)\]
\[\mathsf{guard}^{\prime}(t_{out,j,\vec{b}})=\top\]
\[\mathsf{action}^{\prime}=[\mathsf{action},\ t_{in,i}\mapsto-,\ t_{out,j,\vec{b}}\mapsto-]\]
\[\tau^{\prime}=[\tau,\ t_{in,i}\mapsto(0,\infty),\ t_{out,j,\vec{b}}\mapsto(0,\infty)]\]

The pattern contract construction in Definition 10 can again be realized as a template translation on the inter-pattern level, as shown in Fig. 18. On the input side, a token \((y,b_{sign},b_{encr},b_{enc})\) only enables the transition \(t_{in,i}\) if \((b_{sign},b_{encr},b_{enc})\) fulfils the input contract \(IC_{i}\), in which case the metadata is stripped and only the message \(y\) is passed to the actual pattern. On the output side, any of the boundary transitions \(t_{out,j,(b_{sign},b_{encr},b_{enc})}\) may fire and enrich the data \(y\) with the metadata \((b_{sign},b_{encr},b_{enc})\), ready to be passed to the next pattern. Let us consider two examples to gain an understanding of the construction.

**Example 8**.: _Fig. 19 shows the translation of a message translator pattern \(MT\) with input contract \(\{(SIGN,any),(ENCR,no),(ENC,no)\}\) and output contract \(\{(SIGN,no),(ENCR,no),(ENC,no)\}\). The input transition \(T^{\prime}\) hence checks the guard \([x,any,no,no]\), and if it matches, the token is forwarded to the actual message translator. After the transformation, the resulting message \(msg^{\prime}\) is not encrypted, no longer carries a valid signature, and is not encoded; the boundary thus emits \((x,no,no,no)\)._

**Example 9**.: _A join router structurally combines many incoming message channels into one outgoing channel without accessing the data. Consequently, both input and output contracts have any for all properties. Figure 20 shows the result of the boundary construction for the join router. The input boundary does not enforce CPT constraints, and thus no guards are defined for the transitions. The output boundary, however, supplies all 8 combinations of \(\{yes,no\}\) values for the three CPT properties._

#### 6.2.2 Synchronising Pattern Compositions and Correctness of the Translation

We are now in a position to define the full translation of a correct integration pattern contract graph \(G\). For the translation to be well-defined, we need only data element correctness of the graph. Concept correctness is used to show that in the nets in the image of the translation, tokens can always flow from the translation of the start node to the translation of the end node.

**Theorem 1**.: _Let a correct integration pattern contract graph \(G\) be given._
For each node \(p\), consider the timed db-net_ \[\llbracket p\rrbracket_{CPT(in_{CPT}(p),out_{CPT}(p))}:\bigotimes_{i=1}^{k }color_{in}(p)_{i}\rightarrow\bigotimes_{j=1}^{m}color_{out}(p)_{j}\] _Use the graphical language [47] enabled by Lemma 1 to compose these nets according to the edges of the graph. The resulting timed db-net is then well-defined, and has the option to complete, i.e., from each marking reachable from a marking with a token in some input place, it is possible to reach a marking with a token in an output place._ Proof. Since the graph is assumed to be correct, all input contracts match the output contracts of the nets they are composed with, which by data element correctness means that the boundary configurations match, so that the result is well-defined. To see that the constructed net also has the option to complete, first note that the interpretations of basic patterns in Sec. 6.1 do (in particular, one transition is always enabled in the translation of a conditional fork pattern in Fig. 14, and the aggregate transition will always be enabled after the timeout in the translation of a merge pattern in Fig. 16). By the way the interpretation is defined, all that remains to show is that if \(N\) and \(N^{\prime}\) have the option to complete, then so does \(N_{CPT(\vec{C})}\circ N^{\prime}_{CPT(\vec{C}^{\prime})}\), if the contracts \(\vec{C}\) and \(\vec{C}^{\prime}\) match. Assume a marking with a token in an input place of \(N^{\prime}\). Since \(N^{\prime}\) has the option to complete, a marking with a token in an output place of \(N^{\prime}\) is reachable, and since the contracts match, this token will satisfy the guard imposed by the \(N_{CPT(\vec{C})}\) construction. Hence a marking with a token in an input place of \(N\) is reachable, and the statement follows, as \(N\) has the option to complete. ## 7 Optimization Strategy Realization In this section we formally define the optimizations from the different strategies identified in Tab. 2 in the form of a rule-based graph rewriting system (addressing REQ-7). This gives a formal framework in which different optimizations can be compared. We begin by describing the graph rewriting framework, and subsequently apply it to define the optimizations. ### Graph Rewriting Graph rewriting provides a visual framework for transforming graphs in a rule-based fashion. A graph rewriting rule is given by two embeddings of graphs \(L\hookleftarrow K\hookrightarrow R\), where \(L\) represents the left hand side of the rewrite rule, \(R\) the right hand side, and \(K\) their intersection (the parts of the graph that should be preserved by the rule). A rewrite rule can be applied to a graph \(G\) after a match of \(L\) in \(G\) has been given as an embedding \(L\hookrightarrow G\); this replaces the match of \(L\) in \(G\) by \(R\). The application of a rule is potentially non-deterministic: several distinct matches can be possible [48]. Visually, we represent a rewrite rule by a left hand side and a right hand side graph colored green and red: green parts are shared and represent \(K\), while the red parts are to be deleted in the left hand side, and inserted in the right hand side respectively. For instance, a rewrite rule can move a node \(P_{1}\) past a fork by making a copy in each branch, changing its label from \(c\) to \(c^{\prime}\) in the process. Formally, the rewritten graph is constructed using a double-pushout (DPO) [49] from category theory. 
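To make the rule application mechanics concrete, the following minimal Python sketch applies a DPO-style rewrite rule \((L,K,R)\) at a given match in a directed graph. It is only an intuition aid: the `Graph`, `Rule` and `apply_rule` names are our own illustrative inventions, and the sketch omits labels, relabeling, and the categorical pushout construction of the formal framework.

```python
# A minimal double-pushout-style rewrite on a directed graph.
# Graphs are (nodes, edges) with edges as (src, dst) pairs.
# All names here are illustrative, not part of the formal framework.

from dataclasses import dataclass

@dataclass
class Graph:
    nodes: set
    edges: set  # set of (src, dst) pairs

@dataclass
class Rule:
    L: Graph  # left hand side, to be matched
    K: set    # interface: nodes preserved by the rule
    R: Graph  # right hand side, to be inserted

def apply_rule(g: Graph, rule: Rule, match: dict) -> Graph:
    """Apply `rule` to `g` at `match` (a map from L-nodes to G-nodes)."""
    nodes, edges = set(g.nodes), set(g.edges)
    # 1. Delete the matched image of L \ K (incident edges removed too,
    #    so no "dangling" edges remain).
    for n in rule.L.nodes - rule.K:
        nodes.discard(match[n])
        edges = {(s, t) for (s, t) in edges
                 if s != match[n] and t != match[n]}
    for (s, t) in rule.L.edges:
        edges.discard((match[s], match[t]))
    # 2. Insert R \ K, gluing the new part along the K-nodes.
    emb = {n: match[n] for n in rule.K}
    for n in rule.R.nodes - rule.K:
        emb[n] = n  # fresh node; assumed not to clash with g
        nodes.add(n)
    for (s, t) in rule.R.edges:
        edges.add((emb[s], emb[t]))
    return Graph(nodes, edges)

# Toy "sequence to parallel" step: replace pattern node 'p' between
# 'a' and 'b' by two parallel copies 'p1' and 'p2'.
g = Graph({"a", "p", "b"}, {("a", "p"), ("p", "b")})
rule = Rule(L=Graph({"a", "p", "b"}, {("a", "p"), ("p", "b")}),
            K={"a", "b"},
            R=Graph({"a", "p1", "p2", "b"},
                    {("a", "p1"), ("a", "p2"), ("p1", "b"), ("p2", "b")}))
print(apply_rule(g, rule, {"a": "a", "p": "p", "b": "b"}))
```

Deleting incident edges together with a deleted node mirrors the side-effect freeness of DPO rule application discussed next; a match that would leave dangling edges is simply not a valid DPO match.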
We use DPO rewriting since rule applications are side-effect free (e.g., no "dangling" edges) and local (i.e., all graph changes are described by the rules). We additionally use Habel and Plump's relabeling DPO extension [50] to facilitate the relabeling of nodes in partially labeled graphs. In Fig. 7, we showed contracts and characteristics in dashed boxes, but in the rules that follow, we will represent them as (schematic) labels inside the nodes for space reasons. In addition, we also consider rewrite rules parameterized by graphs, where we draw the parameter graph as a cloud (see e.g., Fig. 21(a) for an example). A cloud represents any graph, sometimes with some side-conditions that are stated together with the rule. When looking for a match in a given graph \(G\), it is of course sufficient to instantiate clouds with subgraphs of \(G\) -- this way, we can reduce the infinite number of rules that a parameterized rewrite rule represents to a finite number. Parameterized rewrite rules can formally be represented using substitution of hypergraphs [51] or by !-boxes in open graphs [52]. Since we describe optimization strategies as graph rewrite rules, we can be flexible about when and in what order we apply the strategies. We apply the rules repeatedly until a fixed point is reached, i.e., when no further changes are possible, making the overall process idempotent. Each rule application preserves IPCG correctness in the sense of Definition 5, because input contracts do not get more specific, and output contracts remain the same. Methodologically, the rules are specified by pre-conditions, change primitives, post-conditions and an optimization effect, where the pre- and post-conditions are implicit in the applicability and result of the rewriting rule.

Figure 18: Boundary construction template.

Figure 19: Example: message translator construction

### OS-1: Process Simplification

We first consider the process simplification strategies of OS-1 from Sec. 3.2, which mainly strive to reduce the model complexity and latency. #### 7.2.1 Redundant sub-process This optimization removes redundant copies of the same sub-process within a process. **Change primitives:** The rewriting is given by the rule in Fig. 21(a), where \(SG1\) and \(SG2\) are isomorphic pattern graphs with in-degree \(n\) and out-degree \(m\). The Content Enricher (CE) node is a message processor pattern from Fig. 15 with a pattern characteristic \((\mathit{PRG},(\text{addCtxt},[0,\infty)))\) for an enrichment program addCtxt which is used to add content to the message (does it come from the left or right subgraph?). Similarly, the Content Filter (CF) is a message processor, with a pattern characteristic \((\mathit{PRG},(\text{removeCtxt},[0,\infty)))\) for a program removeCtxt which is used to remove the added content from the message again. Moreover, the Content-based Router (CBR) node is a conditional fork pattern from Fig. 14 with a pattern characteristic \((\mathit{CND},\{\text{fromLeft?}\})\) for a condition fromLeft? which is used to route messages depending on their added context. In the right hand side of the rule, the \(CE\) nodes add the context of the predecessor node to the message in the form of a content enricher pattern, and the \(\mathit{CBR}\) nodes are content-based routers that route the message to the correct recipient based on the context introduced by \(CE\). The graph \(SG_{1}^{\prime}\) is the same as \(SG_{1}\), but with the context introduced by \(CE\) copied along everywhere. 
This context is stripped off the message by a content filter \(CF\). **Effect:** The optimization is beneficial for model complexity when the isomorphic subgraphs contain more than \(n+m\) nodes, where \(n\) is the in-degree and \(m\) the out-degree of the isomorphic subgraphs. The latency is reduced by the factor of removed subgraphs, minus the latency introduced by the additional \(n\) \(CE\) nodes, \(m\) \(\mathit{CBR}\) nodes and \(2m\) \(CF\) nodes. #### 7.2.2 Combine sibling patterns Sibling patterns have the same parent node in the pattern graph (e.g., they follow a non-conditional forking pattern) with channel cardinality of 1:1. Combining them means that only one copy of a message is traveling through the graph instead of two -- for this transformation to be correct in general, the siblings also need to be side-effect free, i.e., make no external calls. **Change primitives:** The rule is given in Fig. 21(b), where \(SG_{1}\) and \(SG_{2}\) are isomorphic side-effect free pattern graphs, and \(F\) is a fork. **Effect:** The model complexity and latency are reduced by the model complexity and latency of \(SG_{2}\).

Figure 20: Join router construction

Figure 21: Rules for redundant sub-process and combine sibling patterns.

### OS-2: Data Reduction

Now, we consider data reduction optimization strategies, which mainly target improvements of the message throughput (incl. reducing element cardinalities). These optimizations require that pattern input and output contracts are regularly updated with snapshots of element data sets \(EL_{\text{in}}\) and \(EL_{\text{out}}\) from live systems, e.g., from experimental measurements through benchmarks [53]. #### 7.3.1 Early-Filter A filter pattern can be moved to or inserted prior to some of its successors to reduce the data to be processed. The following types of filters have to be differentiated:

* A _message filter_ removes messages with invalid or incomplete content. It can be used to prevent exceptional situations, and thus improves stability.
* A _content filter_ removes elements from messages, thus reducing the amount of data passed to subsequent patterns.

Both patterns are message processors in the sense of Fig. 15. The content filter assigns a filter function \((PRG,(prg_{1},[0,\infty)))\) with \(prg_{1}\mapsto f(msg,value)\) to remove data from the message (i.e., without temporal information), and the message filter assigns a filter condition \((CND,\{cond_{1}\})\) with \(cond_{1}\mapsto g(msg)\). **Change primitives:** The rule is given in Fig. 22(a), where \(P_{3}\) is either a content or message filter matching the output contracts of \(P_{1}\) and the input contract of \(P_{2}\), removing the data not used by \(P_{2}\). **Effect:** Message throughput increases by the ratio of the number of reduced elements that are processed per second, unless limited by the throughput of the additional pattern. #### 7.3.2 Early-Mapping A mapping that reduces the number of elements in a message can increase the message throughput. **Change primitives:** The rule is given in Fig. 22(b), where \(P_{3}\) is an element-reducing message mapping compatible with \(P_{1}\), \(SG_{2}\), and \(P_{4}\), and where \(P_{4}\) does not modify the elements mentioned in the output contract of \(P_{3}\). Furthermore \(P_{5}\) is a content filter, which ensures that the input contract of \(P_{4}\) is satisfied. The Message Translator (MT) node is a message processor pattern from Fig. 15 with a pattern characteristic \((PRG,(prg,[0,\infty)))\) for some program \(prg\) which is used to transform the message. 
**Effect:** The message throughput for the subgraph subsequent to the mapping increases by the ratio of the number of unnecessary data elements processed. #### 7.3.3 Early-Aggregation A micro-batch processing region is a subgraph which contains patterns that are able to process multiple messages combined to a multi-message [30] or one message with multiple segments with an increased message throughput. The optimal number of aggregations is determined by the highest batch-size to throughput ratio of the pattern with the lowest throughput, if latency is not considered. **Change primitives:** The rule is given in Fig. 23(a), where \(SG_{2}\) is a micro-batch processing region, \(P_{1}\) an aggregator, \(P_{2}\) a splitter which separates the batch entries to distinct messages to reverse the aggregation, and \(SG_{2}^{\prime}\) finally is \(SG_{2}\) modified to process micro-batched messages. The Aggregator (AG) node is a merge pattern from Fig. 16 with a pattern characteristic \(\{(CND,\{cnd_{cr},cnd_{cc}\}),(PRG,(prg_{agg},(v_{1},v_{2})))\}\) for some correlation condition \(cnd_{cr}\), completion condition \(cnd_{cc}\), aggregation function \(prg_{agg}\), and timeout interval \((v_{1},v_{2})\). The Splitter (SP) node is a message processor from Fig. 15 with a pattern characteristic \((PRG,(prg,[0,\infty)))\) for some split function \(prg\) which is used to split the message into several ones. **Effect:** The message throughput is the minimal pattern throughput of all patterns in the micro-batch processing region. If the region is followed by patterns with less throughput, only the overall latency might be improved.

Figure 22: Rules for early-filter and early-mapping.

Figure 23: Rules for early-aggregation and early-claim check

#### 7.3.4 Early Claim Check

If a subgraph does not contain a pattern with message access, the message payload can be stored intermediately, persistently or transiently (depending on the quality of service level), and not moved through the subgraph. For instance, this applies to subgraphs consisting of data independent control-flow logic only, or those that operate entirely on the message header (e.g., header routing). **Change primitives:** The rule is given in Fig. 23(b), where \(SG_{2}\) is a message access-free subgraph, \(P_{1}\) a claim check that stores the message payload and adds a claim to the message properties (and possibly routing information to the message header), and \(P_{2}\) a content enricher that adds the original payload back to the message. The Claim Check (CC) node is a message processor from Fig. 15 with a pattern characteristic \((PRG,(\_,[0,\infty)))\), which stores the message for later retrieval. **Effect:** The main memory consumption and CPU load decrease, which could increase the message throughput of \(SG_{2}\), if the claim check and content enricher pattern throughput is greater than or equal to the improved throughput of each of the patterns in the subgraph. #### 7.3.5 Early-Split Messages with many segments can be reduced to several messages with fewer segments, thereby reducing the processing per message. A segment is an iterable part of a message, such as a list of elements. When such a message grows bigger, the message throughput of a set of adjacent patterns might decrease, compared to the expected performance for a single segment; a phenomenon called _segment bottleneck sub-sequence_. Algorithmically, such bottlenecks can be found using max flow-min cut techniques based on workload statistics, as sketched below. 
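As a rough illustration of how such a bottleneck might be located from measured workload statistics, the following Python sketch scans per-pattern throughput measurements for a contiguous run that falls well below its neighbours; the `ratio` threshold heuristic and all names are our own simplifications, standing in for the max flow-min cut analysis.

```python
# Illustrative detection of a "segment bottleneck sub-sequence":
# a contiguous run of patterns whose measured throughput (msgs/s)
# drops below that of its neighbours. This is a simple heuristic
# stand-in for the max flow-min cut analysis mentioned above.

def bottleneck_subsequence(throughputs, ratio=0.5):
    """Return (start, end) indices of the first contiguous run whose
    throughput is below `ratio` * the predecessor's throughput and
    which is followed by a recovery; None if there is no such run."""
    n = len(throughputs)
    for start in range(1, n - 1):
        if throughputs[start] >= ratio * throughputs[start - 1]:
            continue
        end = start
        while end + 1 < n - 1 and \
              throughputs[end + 1] < ratio * throughputs[start - 1]:
            end += 1
        if throughputs[end] < ratio * throughputs[end + 1]:
            return start, end
    return None

# Measured throughputs along a pattern sequence (hypothetical values):
stats = [1000, 950, 210, 180, 900, 880]
print(bottleneck_subsequence(stats))   # -> (2, 3)
```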
The splitter (SP) node is a message processor from Fig. 15 with a pattern characteristic \((PRG,(prg,[0,\infty)))\) for some split program \(prg\). **Change primitives:** The rule is given in Fig. 24, where \(SSQ_{1}\) is a segment bottleneck sub-sequence. If \(SSQ_{1}\) already has an adjacent splitter, Fig. 24(a) applies, otherwise Fig. 24(b). In the latter case, \(SP\) is a splitter and \(P_{2}\) is an aggregator that re-builds the required segments for the successor in \(SG_{2}\). For an already existing splitter \(P_{1}\) in Fig. 24(a), the split condition has to be adjusted to the elements required by the input contract of the subsequent pattern in \(SSQ_{1}\). In both cases we assume that the patterns in \(SSQ_{1}\) deal with single- and multi-segment messages; otherwise all patterns have to be adjusted. **Effect:** The message throughput increases by the ratio of the reduced number of message segments per message, if the throughput of the moved / added splitter (and aggregator) \(\geq\) the throughput of each of the patterns in the segment bottleneck sub-sequence after the segment reduction. ### OS-3: Parallelization Parallelization optimization strategies increase message throughput, and again require experimentally measured message throughput statistics, e.g., from benchmarks [53]. #### 7.4.1 Sequence to parallel A bottleneck sub-sequence with channel cardinality 1:1 can also be handled by distributing its input and replicating its logic. The parallelization factor is the average message throughput of the predecessor and successor of the sequence divided by two, which denotes the improvement potential of the bottleneck sub-sequence. The goal is to not overachieve the mean of predecessor and successor throughput with the improvement, to avoid iterative re-optimization. Hence the optimization is only executed if the bottleneck sub-sequence reaches a lower throughput than the minimum of the predecessor and successor throughput. **Change primitives:** The rule is given in Fig. 25(a), where \(SSQ_{1}\) is a bottleneck sub-sequence, \(P_{2}\) a fork node, \(P_{3}\) a join router, and each \(SSQ^{\prime}_{k}\) is a copy of \(SSQ_{1}\), for \(1\leq k\leq n\). The parallelization factor \(n\) is a parameter of the rule. **Effect:** The message throughput improvement rate depends on the parallelization factor \(n\), and the message throughput of the balancing fork and join router on the runtime. For a measured throughput \(t\) of the bottleneck sub-sequences, the throughput can be improved to \(n\times t\leq\) the average of the sums of the predecessor and successor throughput, which is limited by the upper boundary of the balancing fork or join router.

Figure 24: Rules for early split.

Figure 25: Rules for sequence to parallel variants.

#### 7.4.2 Merge parallel

The balancing fork and join router realizations can limit the throughput in some runtime systems, so that a parallelization decreases the throughput, e.g., when a fork or a join has smaller throughput than a pattern in the following sub-sequence. **Change primitives:** The rule is given in Fig. 25(b), where \(P_{3}\) and \(P_{4}\) limit the message throughput of each of the \(n\) sub-sequence copies \(SSQ_{1}^{\prime}\),..., \(SSQ_{n}^{\prime}\) of \(SSQ_{1}\). **Effect:** The model complexity is reduced by \((n-1)k-2\), where each \(SSQ_{i}^{\prime}\) contains \(k\) nodes. 
The message throughput might improve, since the transformation lifts the limiting upper boundary of badly performing balancing fork or join router implementations to the lowest pattern throughput in the bottleneck sub-sequence. #### 7.4.3 Heterogeneous Parallelization A heterogeneous parallelization consists of parallel sub-sequences that are not isomorphic. In general, two subsequent patterns \(P_{i}\) and \(P_{j}\) can be parallelized if the predecessor pattern of \(P_{i}\) fulfills the input contract of \(P_{j}\), \(P_{i}\) behaves read-only with respect to the data element set of \(P_{j}\), and the combined outbound contracts of \(P_{i}\) and \(P_{j}\) fulfill the input contract of the successor pattern of \(P_{j}\). **Change primitives:** The rule is given in Fig. 26, where the sequential sub-sequence parts \(SSQ_{1}\),.., \(SSQ_{n}\) are side-effect free and can be parallelized, \(P_{3}\) is a parallel fork, \(P_{4}\) is a join router, and \(P_{5}\) is an aggregator that waits for messages from all sub-sequence branches before emitting a combined message that fulfills the input contract of \(P_{2}\). **Effect:** Synchronization latency can be improved, but the model complexity increases by 3. The latency improves from the sum of the sequential pattern latencies to the maximal latency of all sub-sequence parts plus the fork, join, and aggregator latencies. ### OS-4: Pattern Placement All of the data reduction optimizations discussed in Sec. 7.3 can be applied in OS-4, i.e., "Pushdown to Endpoint", by extending the placement to the message endpoints, with contracts similar to our definition. However, due to our focus on the integration processes, we will not further elaborate on it in this work. ### OS-5: Reduce Interaction Optimization strategies that reduce interactions target a more resilient behavior of an integration process. #### 7.6.1 Ignore Failing Endpoints When endpoints fail, different exceptional situations have to be handled on the caller side. This can come with long timeouts, which can block the caller and increase latency. Knowing that an endpoint is unreliable can speed up processing by immediately falling back to an alternative. **Change primitives:** The rule is given in Fig. 27(a), where \(SG_{ext}\) is a failing endpoint, \(SG_{1}\) and \(SG_{2}\) subgraphs, and \(P_{1}\) is a service call or message send pattern with configuration \(cf\). This specifies the collected number of subsequently failed delivery attempts to the endpoint or a configurable time interval. If one of these thresholds is reached, the process stops calling \(SG_{ext}\) and does not continue with the usual processing in \(SG_{1}\), but instead invokes an alternative processing or exception handling in \(SG_{2}\). **Effect:** Besides improved latency (i.e., average time to response from endpoint in case of failure), the integration process behaves more stably due to immediate alternative processing. To not exclude the remote endpoint forever, the rule in Fig. 27(b) is scheduled for execution after a period of time to test whether the endpoint is still failing. If not, the configuration is updated to \(cf^{\prime}\) to avoid the execution of Fig. 27(a). The retry time is adjusted depending on experienced values (e.g., endpoint is down every two hours for ten minutes). #### 7.6.2 Reduce Requests A _message limited_ endpoint, i.e., an endpoint that is not able to handle a high rate of requests, can get unresponsive or fail. 
To avoid this, the caller can notice it (e.g., by TCP back-pressure) and react by reducing the number or frequency of requests. This can be done by employing a throttling or even sampling pattern [4], which removes messages. An aggregator can also help to combine messages to multi-messages [30]. **Change primitives:** The rewriting is given by the rule in Fig. 28(a), where \(P_{1}\) is a service call or message send pattern, \(SG_{ext}\) a message limited external endpoint, \(SG_{2}\) a subgraph with \(SG_{2}^{\prime}\) a re-configured copy of \(SG_{2}\) (e.g., for vectorized message processing [30]), and \(SG_{cs}\) a subgraph that reduces the pace, or number, of messages sent. **Effect:** Latency and message throughput might improve, but this optimization mainly targets stability of communication. This is improved by configuring the caller to a message rate that the receiver can handle.

Figure 26: Heterogeneous sequence to parallel.

Figure 27: Rules for ignore failing endpoints.

### Optimization Correctness

We now show that the optimizations do not change the input-output behaviour of the pattern graphs in the timed db-nets semantics, i.e., if we have a rewrite rule \(G\Rightarrow G^{\prime}\) (cf. Sec. 7.1), then the constructed timed DB-net with boundaries \(\llbracket G\rrbracket\) has the same observable behaviour as that of \(\llbracket G^{\prime}\rrbracket\) (addressing REQ-8). More formally, we mean that the transition systems of the original and the rewritten graphs are bisimilar in a certain sense, as defined in Definition 11. At a high level, this means that \(G\) can simulate \(G^{\prime}\) with respect to input-output behaviour, and vice versa. Recall that we associate a labelled transition system to each timed DB-net in Sec. 5.2. **Definition 11** (Functional bisimulation).: _Let \(B\) and \(B^{\prime}\) be timed DB-nets with equal boundaries, and let \(\Gamma_{s_{0}}^{\mathcal{B}}=\langle S,s_{0},\rightarrow\rangle\) and \(\Gamma_{s^{\prime}_{0}}^{\mathcal{B}^{\prime}}=\langle S^{\prime},s^{\prime}_{0},\rightarrow^{\prime}\rangle\) be their associated labelled transition systems. We say that a \(B\)-snapshot \((I,m)\) is functionally equivalent to a \(B^{\prime}\)-snapshot \((I^{\prime},m^{\prime})\), written \((I,m)\approx(I^{\prime},m^{\prime})\), if \(I=I^{\prime}\), and \(m\) and \(m^{\prime}\) agree on output places except for age variables. Further we say that \(\Gamma_{s_{0}}^{\mathcal{B}}\) is functionally bisimilar to \(\Gamma_{s^{\prime}_{0}}^{\mathcal{B}^{\prime}}\), written \(\Gamma_{s_{0}}^{\mathcal{B}}\sim\Gamma_{s^{\prime}_{0}}^{\mathcal{B}^{\prime}}\), if whenever \(s_{0}\rightarrow^{\star}(I,m)\) then there is \((I^{\prime},m^{\prime})\) such that \(s^{\prime}_{0}\rightarrow^{\prime\star}(I^{\prime},m^{\prime})\), \((I,m)\approx(I^{\prime},m^{\prime})\), and \(\Gamma_{(I,m)}^{\mathcal{B}}\sim\Gamma_{(I^{\prime},m^{\prime})}^{\mathcal{B}^{\prime}}\), and similarly whenever \(s^{\prime}_{0}\rightarrow^{\prime\star}(I^{\prime},m^{\prime})\) then there is \((I,m)\) such that \(s_{0}\rightarrow^{\star}(I,m)\), \((I^{\prime},m^{\prime})\approx(I,m)\), and \(\Gamma_{(I^{\prime},m^{\prime})}^{\mathcal{B}^{\prime}}\sim\Gamma_{(I,m)}^{\mathcal{B}}\)._ The notion of functional bisimulation captures the notion of having the same output behaviour, in the sense that one transition system can reach a certain configuration of the output places if and only if the other one can. Note that this definition allows bisimilar transition systems to assign different token ages -- what matters is not the exact age value, but that the corresponding transitions are always possible. 
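As a simplified, finite-state illustration of Definition 11, the sketch below checks functional bisimilarity of two transition systems by a greatest-fixpoint computation over pairs of mutually reachable, matchable states. It is an intuition aid only: actual timed DB-net transition systems are generally infinite, the `equiv` predicate stands in for \(\approx\) (agreement on output places, ignoring token ages), and all names are ours.

```python
# Brute-force functional-bisimulation check for finite transition
# systems, mirroring Definition 11. States stand in for snapshots;
# `equiv` plays the role of the functional equivalence relation.
from itertools import product

def reachable(succ, s):
    """All states reachable from s (including s itself)."""
    seen, todo = {s}, [s]
    while todo:
        for t in succ.get(todo.pop(), ()):
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def fbisimilar(succ1, s1, succ2, s2, equiv):
    """Greatest fixpoint: keep (a, b) only if every state reachable
    from a is matched by an equivalent, still-related state reachable
    from b, and vice versa."""
    S1, S2 = reachable(succ1, s1), reachable(succ2, s2)
    rel = set(product(S1, S2))
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            ok = all(any(equiv(a2, b2) and (a2, b2) in rel
                         for b2 in reachable(succ2, b))
                     for a2 in reachable(succ1, a)) \
                 and all(any(equiv(a2, b2) and (a2, b2) in rel
                             for a2 in reachable(succ1, a))
                         for b2 in reachable(succ2, b))
            if not ok:
                rel.discard((a, b))
                changed = True
    return (s1, s2) in rel

# One system emits x directly, the other via an internal hop; output
# snapshots are modelled as ("out", payload) tuples.
succA = {"s0": [("out", "x")]}
succB = {"t0": ["t1"], "t1": [("out", "x")]}
equiv = lambda a, b: (isinstance(a, tuple) == isinstance(b, tuple)) and \
                     (not isinstance(a, tuple) or a == b)
print(fbisimilar(succA, "s0", succB, "t0", equiv))  # True
```

Restricting `equiv` to observations on output places (and ignoring ages) is what makes the relation functional rather than a full strong bisimulation.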
Let us discuss an explanatory example of bisimulation. **Example 10**.: _Figure 29 shows the interpretation of a simple IPCG as a timed DB-net before and after applying the rewrite rule for combining sibling patterns from Fig. 21(b) (for simplicity without boundaries). The improvement of the optimization is to move \(SG_{1}\) (isomorphic to \(SG_{2}\)) in front of the forking pattern \(F\) and leave out \(SG_{2}\), which reduces the modeling complexity on the right hand side (cf. Fig. 29(b)). The synchronization subnet is required to show bisimilarity between the original and the resulting net, since tokens might be moved independently in \(SG_{1}\) and \(SG_{2}\) before applying the optimisation. The subnet (essentially transitions \(T_{x1},T_{x2}\)) compensates for that to ensure that places \(P_{2}\) and \(P_{3}\) can be reached independently as well. The timed DB-nets \(B\) representing Fig. 29(a) and \(B^{\prime}\) representing Fig. 29(b) are bisimilar, \(\Gamma_{(I,m)}^{\mathcal{B}}\sim\Gamma_{(I,m^{\prime})}^{\mathcal{B}^{\prime}}\), for any database instance \(I\), and any markings \(m\) and \(m^{\prime}\) with \(m(P_{1})=m^{\prime}(P_{1})\) and \(m(p)=\emptyset=m^{\prime}(p)\) for all other \(p\)._ We will need the following basic lemma to show the correctness of our optimizations, i.e., that the right and left hand sides of the respective optimization rules are bisimilar. **Lemma 2**.: _The relation \(\sim\) is a congruence relation with respect to composition of timed db-nets with boundaries, i.e., it is reflexive, symmetric and transitive, and if \(\Gamma_{s_{0}}^{\mathcal{B}_{1}}\sim\Gamma_{s_{0}}^{\mathcal{B}_{1}^{\prime}}\) and \(\Gamma_{s_{0}}^{\mathcal{B}_{2}}\sim\Gamma_{s_{0}}^{\mathcal{B}_{2}^{\prime}}\) for all \(s_{0}\) on the shared boundary of \(B_{1}\) and \(B_{2}\), then \(\Gamma_{s_{0}}^{\mathcal{B}_{1}\circ\mathcal{B}_{2}}\sim\Gamma_{s_{0}}^{\mathcal{B}_{1}^{\prime}\circ\mathcal{B}_{2}^{\prime}}\)._ The following lemma means that it makes sense to ask the question whether the optimized version of an IPCG is bisimilar to the original IPCG or not.

Figure 28: Rules for reduce requests.

Figure 29: Timed db-net translation of IPCGs before and after applying the "combine sibling patterns" rewrite rule

**Lemma 3**: _Let \(G\) and \(G^{\prime}\) be IPCGs. For each optimisation rewrite rule \(G\Rightarrow G^{\prime}\), \(\llbracket G\rrbracket\) and \(\llbracket G^{\prime}\rrbracket\) have the same boundary._ We are now ready to state and prove our correctness theorem. **Theorem 2** (Change Correctness): _Let \(G\) and \(G^{\prime}\) be IPCGs such that \(G\Rightarrow G^{\prime}\) is an optimization rule. For every initial snapshot \(s_{0}\) of both \(\llbracket G\rrbracket\) and \(\llbracket G^{\prime}\rrbracket\), with tokens in input places only, we have \(\Gamma^{\llbracket G\rrbracket}_{s_{0}}\sim\Gamma^{\llbracket G^{\prime} \rrbracket}_{s_{0}}\)._ We verify the statement for each optimization \(G\Rightarrow G^{\prime}\). By Lemma 2, it is enough to show that the parts of the interpretation of the graphs which are actually modified by the rewrite are bisimilar. _Redundant Sub-Processes (Sec. 7.2.1)_. Each move on the left hand side of the optimization rule in Fig. 21(a) (on Page 20) either moves tokens into a cloud, out of a cloud, or inside a cloud. 
In the first two cases, this can be simulated by the right hand side by moving the token through the CE or CBR and CF respectively, followed by a move into or out of the cloud, while in the latter case the corresponding token can be moved in \(SG^{\prime}_{1}\) up to the isomorphism between \(SG^{\prime}_{1}\) and the cloud on the left. Similarly, a move on the right hand side into or out of the cloud can easily be simulated on the left hand side. Suppose a transition fires in \(SG^{\prime}_{1}\). Since all guards in \(SG^{\prime}_{1}\) have been modified to require all messages to come from the same enriched context, the corresponding transition can either be fired in \(SG_{1}\) or \(SG_{2}\). _Combining Sibling Patterns (Sec. 7.2.2)_. Suppose the left hand side of Fig. 21(b) (on Page 20) takes a finite number of steps and ends up with \(m(P_{2})\) tokens in \(P_{2}\) and \(m(P_{3})\) tokens in \(P_{3}\). There are three possibilities: (i) there are tokens of the same color in both \(P_{2}\) and \(P_{3}\); or (ii) there is a token in \(P_{2}\) with no matching token in \(P_{3}\); or (iii) there is a token in \(P_{3}\) with no matching token in \(P_{2}\). For the first case, the right hand side can simulate the situation by emulating the steps of the token ending up in \(P_{2}\), and forking it in the end. For the second case, the right hand side can simulate the situation by emulating the steps of the token ending up in \(P_{2}\), then forking it, but not moving one copy of the token across the boundary layer in the interpretation of the fork pattern. The third case is similar, using that \(SG_{2}\) is isomorphic to \(SG_{1}\). The right hand side can easily be simulated by copying all moves in \(SG_{1}\) into simultaneous moves in \(SG_{1}\) and the isomorphic \(SG_{2}\). _Early-Filter (Sec. 7.3.1)_. By construction, the filter removes the data not used by \(P_{2}\), so if the left hand side of Fig. 22(a) (on Page 21) moves a token to \(P_{2}\), then the same token can be moved to \(P_{2}\) on the right hand side and vice versa. _Early-Mapping (Sec. 7.3.2)_. Suppose the left hand side of Fig. 22(b) (on Page 21) moves a token to \(P_{4}\). The same transitions can then move the corresponding token to \(P_{4}\) on the right hand side, with the same payload, by construction. Similarly, the right hand side can be simulated by the left hand side. _Early-Aggregation (Sec. 7.3.3)_. The interpretation of the subgraph \(SG_{2}\) is equivalent to the interpretation of \(P_{1}\) followed by \(SG^{\prime}_{2}\) followed by \(P_{3}\), by construction in Fig. 23(a) (on Page 21), hence the left hand side and the right hand side are equivalent. _Early Claim Check (Sec. 7.3.4)_. Since the claim check CC + CE in Fig. 23(b) (on Page 21) simply stores the data and then adds it back to the message in the CE step, both sides can simulate each other. _Early-Split (Sec. 7.3.5)_. By assumption, \(P_{1}\) followed by \(SSQ_{1}\) (\(P_{1}\) followed by \(SSQ_{1}\) followed by \(P_{2}\) for the inserted early split in Fig. 24(a) (on Page 22)) is equivalent to \(SSQ_{1}\) followed by \(P_{1}\), from which the claim immediately follows. _Sequence to Parallel (Sec. 7.4.1), Merge Parallel (Sec. 7.4.2)_. The left hand side of Fig. 25(a) (on Page 22) can be simulated by the right hand side by copying each move in \(SSQ_{1}\) by a move each in \(SSQ_{1}^{\prime}\) to \(SSQ_{n}^{\prime}\). 
If the right hand side moves a token to an output place, it must move a token through some \(SSQ_{i}^{\prime}\), and the same moves can move a token through \(SSQ_{1}\) in the left hand side. The same reasoning applies to the Merge Parallel transformation in Fig. 25(b), but in reverse. _Heterogeneous Sequence to Parallel (Sec. 7.4.3)_. By assumption, the sub-sequences \(SSQ_{1}\) to \(SSQ_{n}\) are side-effect free. The right hand side of Fig. 26 (on Page 22) can simulate the left hand side as follows: if the left hand side moves a token to an output place, it must move it through all of \(SSQ_{1}\) to \(SSQ_{n}\). The right hand side can make the same moves in the same order. For the other direction, the left hand side can reorder the moves of the right hand side to first do all moves in \(SSQ_{1}\), then in \(SSQ_{2}\) and so on. This is still a valid sequence of steps because the subsequences can be parallelized. _Ignore, try failing endpoints (Sec. 7.6.1)_. Suppose the left hand side of Fig. 27(a) takes a finite number of steps to move a token to an output place in \(SG_{1}\), however, the transition to \(SG_{ext}\) does not produce a result due to an exceptional situation (i.e., no change of the marking in \(cf\)). Correspondingly, the right hand side moves the token, however, without the failing, and thus read-only, transition to \(SG_{ext}\), which ensures the equality of the resulting tokens on either side. Under the same restriction that no exception context is returned from \(SG_{ext}\), the right hand side can simulate the left hand side accordingly. The situation for try failing endpoints in Fig. 27(b) is the same in reverse. _Reduce requests (Sec. 7.6.2)_. Since the only difference between the left hand side and the right hand side is the slow-down due to the insertion of the pattern \(CS\), and simulation does not take the age of messages into account, the left hand side can obviously simulate the right hand side and vice versa. ## 8 Evaluation In this section, (a) we evaluate the impact of the optimization strategies (i.e., OS-1-3) from Sec. 3.2 that are most relevant to this work, and (b) we study ReCO for real-world integration processes. ### Optimization Strategies For (a) we quantitatively analyze the effect of optimizations on two catalogs of integration processes regarding improvements of model complexity, throughput and latency. The catalogs are two years apart, which allows us to study whether integration experts found improvements by themselves within that time span. Then, we revisit our motivating example from Sec. 2 and study a more complex integration process regarding applicable optimization strategies. #### 8.1.1 Quantitative Analysis We applied the optimization strategies OS-1-3 to 627 integration scenarios from the 2017 standard content of the SAP CPI (called ds17), and compared with 275 scenarios from 2015 (called ds15). Our goal is to show the applicability of our approach to real-world integration scenarios, as well as the scope and trade-offs of the optimization strategies. The comparison with a previous content version features a practical study on content evolution. To analyze the difference between different scenario domains, we grouped the scenarios into the following categories [5]: On-Premise to Cloud (OP2C), Cloud to Cloud (C2C), and Business to Business (B2B). Since hybrid integration scenarios such as OP2C target the extension or synchronization of business data objects, they are usually less complex. 
In contrast, native cloud application scenarios such as C2C or B2B mediate between several endpoints, and thus involve more complex integration logic [5]. The process catalog also contained a small number of simple Device to Cloud scenarios; none of them could be improved by our approach. **Setup: Construction and analysis of IPCGs** For the analysis, we constructed an IPCG for each integration scenario following the workflow sketched in Fig. 30. Notably, the integration scenarios are stored as process models in a BPMN-like notation [4]. The process models reference data specifications such as schemas (e.g., XSD, WSDL), mapping programs, selectors (e.g., XPath) and configuration files. For every pattern used in the process models, runtime statistics are available from benchmarks [53]. The data specifications are picked up from the 2015 content archive and from the current 2017 content catalog, while the runtime benchmarks are collected using the open-source integration system _Apache Camel_ [54]5 as used in SAP CPI. The mapping and schema information is automatically mined and added to the patterns as contracts, and the rest of the collected data as pattern characteristics. For each integration scenario and each optimization strategy, we determine if the strategy applies, and if so, if the cost is improved. This analysis runs in about two minutes in total for all 902 scenarios on our workstation. Footnote 5: All measurements were conducted on a HP Z600 workstation, equipped with two Intel X5650 processors clocked at 2.67GHz with 12 cores, 24GB of main memory, running a 64-bit Windows 7 SP1 and a JDK version 1.7.0, with 2GB heap space. We now discuss the improvements for the different kinds of optimization strategies identified in Sec. 3.2. **Improved Model Complexity: Process Simplification (OS-1).** The relevant metric for the process simplification strategies from OS-1 is the average reduction in model complexity, shown in Fig. 31. _Results._ Although all scenarios were implemented by integration experts, who are familiar with the modeling notation and the underlying runtime semantics, there is still a small number of patterns per scenario that could be removed without changing the execution semantics. On average, the content reduction for the content from 2015 and 2017 was 1.47 and 2.72 patterns/IPCG, respectively, with significantly higher numbers in the OP2C domain. _Conclusions_. (1) Even simple process simplifications are not always obvious to integration experts in scenarios represented in a control-flow-centric notation (e.g., current SAP CPI does not use BPMN Data Objects to visualize the data flow); and (2) the need for process simplification does not seem to diminish as experts gain more experience.

Figure 30: Pattern composition evaluation pipeline.

Figure 31: Pattern reduction per scenario.

**Improved Bandwidth: Data Reduction (OS-2).** Data reduction impacts the overall bandwidth and message throughput [11]. To evaluate data reduction strategies from OS-2, we leverage the data element information attached to the IPCG contracts and characteristics, and follow their usages along edges in the graph, similar to "ray tracing" algorithms [55]. We collect the data elements that are used or not used, where possible -- we do not have sufficient design time data to do this for user defined functions or some of the message construction patterns, such as request-reply. Based on the resulting data element usages, we calculate two metrics: the comparison of used vs. 
unused elements in Fig. 32(a), and the savings in abstract costs on unused data elements in Fig. 32(b). _Results_. There is a large number of unused data elements per scenario for the OP2C scenarios; these are mainly web service communication and message mappings, for which most of the data flow can be reconstructed. This is because the predominantly used EDI and SOA interfaces (e.g., SAP IDOC, SOAP) for interoperable communication with on-premise applications define a large set of data structures and elements, which are not required by the cloud applications, and vice versa. In contrast, C2C scenarios are usually more complex, and mostly use user defined functions to transform data, which means that only a limited analysis of the data element usage is possible. When calculating the abstract costs for the scenarios with unused fields, there is an immense cost reduction potential for the OP2C scenarios, as shown in Fig. 32(b). This is achieved by adding a content filter to the beginning of the scenario, which removes unused fields. This results in a cost increase of \(|d_{in}|=\) #unused elements for the content filter, but reduces the cost of each subsequent pattern up to the point where the elements are used. _Conclusions_. (3) Data flows can best be reconstructed when design time data based on interoperability standards is available; and (4) a high number of unused data elements per scenario indicates where bandwidth reductions are possible. **Improved Latency: Parallelization (OS-3).** For the sequence-to-parallel optimization strategies from OS-3, the relevant metric is the processing latency of the integration scenario. Because of the uncertainty in determining whether a parallelization optimization would be beneficial, we first report on the classification of parallelization candidates in Fig. 33(a). We then report both the improvements according to our cost model in Fig. 33(b), as well as the actual measured latency in Fig. 33(c). _Results_. Based on the data element level, we classify scenario candidates as parallel, definitely non-parallel, or potentially parallel in Fig. 33(a). The uncertainty is due to sparse information. From the 2015 catalog, 81% of the scenarios are classed as parallel or potentially parallel, while the number for the 2017 catalog is 53%. In both cases, the OP2C and B2B scenarios show the most improvement potential. Figure 33(b) shows the selection based on our cost model, which supports the pre-selection of all of these optimization candidates. The actual, average improvements per impacted scenario are shown in Fig. 33(c). The average improvements of up to 230 milliseconds per scenario must be understood in the context of the average runtime per scenario, which is 1.79 seconds. We make two observations: (a) the cost of the additional fork and join constructs in Java is high compared to those implemented in hardware [11], and the improvements could thus be even better, and (b) the length of the parallelized pattern sequence is usually short: on average 2.3 patterns in our scenario catalog. _Conclusions_. (5) The parallelization requires low cost fork and join implementations; and (6) better runtime improvements might be achieved for scenarios with longer parallelizable pattern sequences. #### 8.1.2 Case Studies We apply, analyze and discuss the proposed optimization strategies in the context of two case studies: the Replicate Material on-premise to cloud scenario from Fig. 3 in Sec. 2, as well as an SAP eDocument invoicing cloud to cloud scenario. 
These scenarios are part of the SAP CPI standard, and thus several users (i.e., SAP's customers) benefit immediately from improvements. For instance, we additionally implemented a content monitor pattern [5] that allowed analysis of the SAP CPI content. This showed that the Material Replicate scenario was used by 546 distinct customers in 710 integration processes copied from the standard into their workspace -- each one of these users is affected by the improvement. **Replicate Material (revisited)** Recall from Sec. 2 that the Replicate Material scenario is concerned with enriching and translating messages coming from a CRM before passing them on to a Cloud for Customer service, as in Fig. 3. As already discussed, the content enricher and the message translator can be parallelized according to the sequence to parallel optimization from OS-3. The original and resulting IPCGs are shown in Figs. 7(a) and 7(b). No throughput optimizations apply. _Latency improvements_. The application of this optimization can be considered if the latency of the resulting parallelized process is smaller than the latency of the original process, i.e., if

\[cost(MC)+\max(cost(CE),cost(MT))+cost(JR)+cost(AGG)<cost(CE)+cost(MT)\]

Subtracting \(\max(cost(CE),cost(MT))\) from both sides of the inequality, we are left with

\[cost(MC)+cost(JR)+cost(AGG)<\min(cost(CE),cost(MT))\]

If we assume that the content enricher does not need to make an external call, its abstract cost becomes

\[cost(CE)(|d_{in}|,|d_{r}|)=|d_{in}|,\]

and plugging in experimental values from a pattern benchmark [53], we arrive at the inequality (with latency costs in seconds)

\[0.01+0.002+0.005\nless\min(0.005,0.27)\]

which tells us that the optimization is not beneficial in this case -- the additional overhead is larger than the saving. However, if the content enricher does use a remote call, \(cost(CE)(|d_{in}|,|d_{r}|)=|d_{in}|+|d_{r}|\), and the experimental values now say \(cost(CE)=0.021\). Hence the optimization is worthwhile, as

\[0.01+0.002+0.005<\min(0.021,0.27)\enspace.\]

_Model Complexity_. Following Sanchez-Gonzalez et al. [37], we measure the model complexity by node count. In this case, the optimization increases the complexity by 3. _Conclusions_. (7) Pattern characteristics are important when deciding if an optimization should be applied (e.g., local vs. remote enrichment); and (8) there are conflicts between different objectives, as illustrated by the trade-off between latency reduction and model complexity increase.

Figure 32: Unused elements in integration scenarios.

**eDocuments: Italy Invoicing.** The Italian government accepts electronic invoices from companies, as long as they follow regulations -- they have to be correctly formatted, signed, and not be sent in duplicate. Furthermore, these regulations are subject to change. This can lead to an ad-hoc integration process such as in Fig. 34 (simplified). Briefly, the companies' _Fattura Elettronica_ is used to generate a _FatturaPA_ document with special header fields (e.g., _Paese_, _IdCodice_), then the message is signed and sent to the authorities, if it has not been sent previously. The multiple authorities respond with standard _Coglienza_, _Risposta_ acknowledgments, which are transformed to a _SendInvoicResponse_. We transformed the BPMN model to an IPCG, tried to apply optimizations, and created a BPMN model again from the optimized IPCG. _Model Complexity_. 
Our heuristics for deciding in which order to try to apply different strategies are "simplification before parallelization" and "structure before data", since this seems to enable the largest number of optimizations. Hence we first try to apply OS-1 strategies: the _combine siblings_ rule matches the sibling Message Signers, since the preceding content-based router is a fork. (The signer is also side-effect free, so applying this rule will not lead to observably different behavior.) _Latency Improvements_. Next we try OS-3 strategies. Although _heterogeneous parallelization_ matches for the CE and the Message Encoder, it is not applied since

\[cost(MC)+cost(JR)+cost(AGG)\nless\min(cost(CE),cost(ME)),\]

i.e., the overhead is too high, due to the low-latency, local CE. Finally, the _early-filter_ strategy from OS-2 is applied for the Content Filter, inserting it between the Content Enricher and the Message Encoder. No further strategies can be applied. The resulting integration process translated back from IPCG to BPMN is shown in Fig. 35. _Conclusions_. (9) The application order OS-1, OS-3, OS-2 seems most beneficial ("simplification before parallelization", "structure before data"); (10) an automatic translation from IPCGs to concepts like BPMN could be beneficial for connecting with existing solutions.

Figure 33: OS-3 "Sequence to parallel" optimization candidates on (a) integration flows, (b) optimization selection based on abstract cost model, and (c) actual latency improvements.

### Case Studies: Responsible Pattern Composition

For (b), we evaluate the translation in two case studies of real-world integration scenarios: the replicate material scenario from Fig. 3, and a predictive machine maintenance scenario. The former is an example of hybrid integration, and the latter of IOT device integration. For each of the scenarios, we give an integration pattern contract graph with matching contracts, translate it to a timed DB-net with boundaries, and show how its execution can be simulated. The scenarios are both taken from the SAP Cloud Platform Integration solution catalog of reference integration scenarios, and are frequently used by customers [13]. For the simulation we use the CPN Tools timed DB-net prototype from Sec. 5.4 with the extension for hierarchical PN composition. In CPN Tools hierarchies, the patterns can be represented as sub-groups and pages with explicit in- and out-port type definitions [56], which we use as part of the boundaries defined in Sec. 5. Thereby the synchronization is checked based on the CPN color sets of the port types. The other boundary checks are performed during the simulation according to the constructed boundaries (see the construction mechanism in Definition 10 in Sec. 6.2). #### 8.2.1 Hybrid Integration: Replicate Material An IPCG representing an integration process for the replication of material from an enterprise resource planning or customer relationship management system to a cloud system was given in Fig. 7(a) in Sec. 4.1. We now add slightly more data in the form of the pattern characteristics, which provides sufficient information for the translation to timed DB-nets with boundaries. Figure 36 depicts the enriched IPCG. The adapters are actually message processors, however, for simplicity they are represented as start and end pattern types, \(ADPT_{s}\) denoting \(\mathit{erp}\) and \(ADPT_{r}\) representing \(\mathit{cod}\). 
The characteristics of the \(CE\) node include the tuple \((PRG,(\mathit{prg1},[0,\infty)))\), with enrichment function \(\mathit{prg1}\) which assigns the \(\mathit{DOCNUM}\) payload to the new header field \(\mathit{AppID}\). Similarly, the characteristics of the \(\mathit{MT}\) node include a tuple \((PRG,(\mathit{prg2},[0,\infty)))\) with mapping program \(\mathit{prg2}\), which maps the \(\mathit{EDI\_DC40\_DOCNUM}\) payload to the \(\mathit{MMRR\_BMH\_ID}\) field (the Basic Message Header ID of the Material Mass Replication Request structure), and the \(\mathit{EPM\_PRODUCT\_ID}\) payload to the \(\mathit{MMRR\_MAT\_ID}\) field (the Material ID of the Material Mass Replication Request structure). **Translation to a timed DB-net with boundaries.** First we translate each single pattern from Fig. 36 according to the construction in Sec. 6.1. The integration adapter nodes \(ADPT_{s}\) and \(ADPT_{r}\) are translated as the start and end patterns in Fig. 11(a) and Fig. 11(b), respectively. The content enricher \(CE\) node and message translator \(\mathit{MT}\) node are message processors without storage, and hence translated as in Fig. 15 with \(<f>_{CE}=\mathit{prg1}\) and \(<f>_{MT}=\mathit{prg2}\) (no database values are required). Since no database table updates are needed for either translation, the database update function parameter \(<g>\) can be chosen to be the identity function in both cases. In the second step, we refine the timed DB-net with boundaries to also take contract concepts into account by the construction in Definition 10. The resulting net is shown in Fig. 37. This ensures the correctness of the types of data exchanged between patterns, and follows directly from the correctness of the corresponding IPCG. Other contract properties such as encryption \(\mathit{encr}\), encodings \(\mathit{enc}\), and signatures \(\mathit{sign}\) are checked through transition guards.

Figure 34: Country-specific invoicing (potential improvements as BPMN Group)

**Simulation.** We test the correctness of the composition construction of the material replicate scenario in Fig. 37 through simulation in the form of a hierarchical timed DB-net model, shown in Fig. 38. Thereby, the CE and MT patterns are represented by CPN Tools Subpage elements that are annotated with the subpage tags _enricher_ and _translator_, respectively. On arrival of the request _msg_ from the ERP system, the boundary configuration is appended to the message in place _erpToCe_. In the replicate material scenario the data is received unencrypted, unencoded and unsigned, leading to a boundary (msg,no,no,no) (which is encoded as _(msg,false,false,false)_ in our prototype). The extended message _erp_msg_ is then moved to the boundary place _ch0_ by transition _CheckCeBoundary_, if the _[encr=false]_ guard holds, which thus ensures the correctness of the data exchange between patterns. Subsequently only the actual message without the boundary data is moved to place _ch0_, which is linked to the input place _ch0_ of the enricher, as in Fig. 38. We recall that the _in_ port type ensures that the synchronization on the CPN color set level is correct. After the _enricher_ processing, the _out_ port type ensures the correctness of the synchronization on the CPN color set level and the resulting message _emsg_ is moved to the linked output place _ch4_. The constructed outbound boundary, represented by transition _SetCeBoundary_, sets the boundary properties of the enricher to (msg,false,false,false) for the following pattern. 
On the input boundary side of the _translator_, transition _CheckMBoundary_ evaluates its guard before moving the message without the boundary data to the boundary place _ch5_, which proceeds similarly to the enricher. Note that our boundary construction mechanism from Definition 10 generated the input boundary, e.g., denoted by place _erpToCe_ and transition _CheckCeBoundary_, as well as the output boundary, e.g., transition _SetCeBoundary_ and place _ceToMt_, including the transition guards, colorsets, variables, and port type configurations, for the validation by simulation. **Discussion.** Notably, constructing an IPCG requires less technical knowledge such as particularities of timed DB-nets but still enables correct pattern compositions on an abstract level. While the \(CPT\) part of the pattern contracts (e.g., encrypted, signed) could be derived and translated automatically from a scenario in a yet to be defined modeling language, many aspects like their elements \(EL\) as well as the configuration of the characteristics by enrichment and mapping programs require a technical understanding of IPCGs and the underlying scenarios. As such, IPCGs can be considered a suitable intermediate representation of pattern compositions. The user might still prefer a more appealing graphical modeling language on top of IPCGs. The simulation capabilities of the constructed timed DB-net with boundaries allow for the experimental validation of the composition correctness of real-world pattern compositions. However, the complexity of the construction highlights the importance of automating the construction. **Conclusions.** (11) IPCGs and timed DB-nets with boundaries are correct with respect to composition and execution semantics; (12) timed DB-nets with boundaries are even more complex than timed DB-nets; (13) IPCGs are more comprehensible than timed DB-nets, and expressive enough for current integration scenarios.

Figure 35: Invoice processing from Fig. 34 after application of strategies OS-1–3.

#### 8.2.2 Internet of Things: Predictive Maintenance and Service (PDMS)

The IPCG representing the predictive maintenance create notification scenario that connects machines with enterprise resource planning (ERP) and PDMS systems is given in Fig. 39. We add all pattern characteristics and data, which provides sufficient information for the translation to timed DB-nets with boundaries. Figure 39 depicts the corresponding IPCG. The characteristics of the \(CE_{1}\) node include an enrichment function \(prg1\) that adds further information about the machine in the form of the _FeatureType_ to the message that contains the machine _ID_ and _UpperThresholdWarningValue_. This data is leveraged by the \(UDF_{1}\) _predict_ node, which uses a prediction function \(prg2\) about the need for maintenance and adds the result into the _MaintenanceRequestById_ field. Before the data is forwarded to the ERP system (simplified by an \(End\)), the single machine predictions are combined into one message by the \(AGG_{1}\) node with correlation \(cnd_{cr}\) and completion \(cnd_{cc}\) conditions as well as the aggregation function \(prg_{3}\) and completion timeout \((v_{1},v_{2})\) as pattern characteristics \(\{(CND,\{cnd_{cr},cnd_{cc}\}),(PRG,(prg_{3},(v_{1},v_{2})))\}\). **Translation to timed DB-nets with boundaries.** Again, we translate each single pattern from Fig. 39 according to the construction in Sec. 6.1. The \(Start\) and \(End\) nodes are translated as the start and end patterns in Fig. 11(a) and Fig. 11(b) respectively. 
The CE \(CE_{1}\) and user-defined function \(UDF_{1}\) nodes are message processors, and hence translated as in Fig. 15 with \(<f>_{CE_{1}}=prg1\) and \(<f>_{UDF_{1}}=prg2\). Since no table updates are needed for either translation, the database update function parameter \(<g>\) can be chosen to be the identity function in all cases. The aggregator \(AGG_{1}\) node is a merge pattern type, and thus translated as in Fig. 16 with \((v_{1},v_{2})\rightarrow[\tau_{1},\tau_{2}]\) and \(prg_{agg}\to f(msgs)\). Moreover, the correlation condition \(cnd_{cr}\to g(msg,msgs)\) and the completion condition \(cnd_{cc}\to complCount\). In the second step, we refine the timed DB-net with boundaries to also take contract concepts into account by the construction in Definition 10. The resulting net is shown in Fig. 40. This ensures the correctness of the types of data exchanged between patterns, and follows directly from the correctness of the corresponding IPCG. Other contract properties such as encryption, signatures, and encodings are checked through the transition guards. **Simulation.** We illustrate the correctness of the composition construction of the predictive maintenance scenario in Fig. 40 through simulation in the form of a hierarchical timed DB-net model, shown in Fig. 41. Again, all timed DB-net patterns are hierarchically represented by CPN Tools Subpage elements that are annotated with the subpage tags _enricher_ and _message_aggregator_, respectively, and the user-defined function _predict_ is denoted by a transition. The boundaries are constructed from Fig. 40 by inserting _SetPdmsBoundary_ and _pdmsToCe_ as the output boundary of _get report_, which matches the input boundary of the subsequent _enricher_, denoted by the _CheckCeBoundary_ transition. Transition _SetCeBoundary_ and place _ceToPredict_ represent the output boundary of the enricher, which again match the input boundary of the _predict_ user-defined function pattern through transition _CheckPredictBoundary_. Finally, the output boundary of the predict step is ensured by transition _SetPredictBoundary_ and place _predictToAgg_. Again, it can be easily seen that the input boundary of the aggregator in the form of the _CheckAggBoundary_ transition matches, and thus the overall composition is correct. Consequently, the simulation of the timed DB-net with these boundaries in Fig. 41 results in the same, correct output as the timed DB-net without boundaries. **Discussion.** In this slightly more complex scenario, it becomes more obvious that the constructed IPCGs are quite technical as well and require a careful construction of pattern characteristics and contracts. While this seems to be an ideal representation for checking the structural correctness of compositions, this should be no manual task for a user. Especially for more complex scenarios, we found that the re-configurable pattern type-based translation works well. However, the construction of the timed DB-nets with boundaries corresponding to an IPCG would benefit from an automatic translation (e.g., through tool support).

Figure 36: Complete integration pattern contract graph of the replicate material scenario

_Conclusions_. (14) IPCGs are still quite technical, especially for more complex scenarios; (15) tool support for automatic construction and translation is preferable.

### Discussion

The evaluation of the optimization strategies on IPCGs (cf. 
(a)) resulted in several interesting conclusions, i.e., emphasizing the importance of a pattern composition and optimization formalization, which is relevant even for experienced integration experts (conclusions 1-2), with interesting choices (conclusions 3-4, 6), implementation details (conclusions 5, 10) and trade-offs (conclusions 7-9). The contract graphs provide a rich composition context, which might help the user when composing patterns with built-in structural correctness guarantees. The second major aspect of this work -- besides process optimizations -- concerns the responsible composition of processes out of integration patterns (cf. (b)) that can be automated (cf. conclusion 10). The evaluation of two case studies -- following ReCO -- resulted in further interesting conclusions, i.e., the suitability of our approach for pattern compositions (cf. conclusions 11, 13), model complexity considerations (cf. conclusions 12, 14) and desirable extensions like automatic translation (cf. conclusion 15). However, while IPCGs based on timed DB-nets with boundaries denote the first comprehensive definition of application integration scenarios with built-in functional correctness and compositional correctness validation and verification, they might not give an appealing modeling language for (non-technical) users (cf. conclusions 12, 14). We envision a novel modeling language and tool support that facilitates a translation from that language to IPCGs (cf. conclusion 15), which we consider as future work. Based on such a language infrastructure, more advanced compositional aspects like modeling guidelines on the different layers (i.e., language, intermediate IPCG, and simulation timed DB-net with boundaries) could be studied.

## 9 Related Work

We presented related optimization techniques in Sec. 3.2. We now briefly situate our work within the context of other formalizations, beyond the already discussed BPMN [4] and PN [38] approaches, as summarized in Tab. 5.

\begin{table} \begin{tabular}{l|ccc} \hline Approach & Formal model & Optimizations & Correctness \\ \hline EAI & & & \\ BPM & & & \\ SPC & & & \\ AOS & & & \\ GT & & & \\ \hline ReCO & & & \\ \hline \end{tabular} \end{table} Table 5: Optimization Strategies in context of the objectives

Figure 37: Material replicate scenario as a timed DB-net with boundaries

**Enterprise Application Integration (EAI)** Similar to the BPMN and PN notations, several domain-specific languages (DSLs) have been developed that describe integration scenarios. Apart from the EIP icon notation [2], there is also the Java-based Apache Camel DSL [54], and the UML-based Guarana DSL [57]. However, none of these languages aim to be formal integration scenario representations. Conversely, we do not strive to build another integration DSL. Instead we claim that all of the integration scenarios expressed in such languages can be formally represented in our formalism, so that optimizations can be determined that can be used to rewrite the scenarios. There is work on formal representations of integration patterns, e.g., Mederly et al. [58] represent messages as first-order formulas and patterns as operations that add and delete formulas, and then apply AI planning to find a process with a minimal number of components. While this approach shares the model complexity objective, our approach applies to a broader set of objectives and optimization strategies. For the verification of service-oriented manufacturing systems, Mendes et al.
[59] use "high-level" Petri nets as a language instead of integration patterns, similar to the approach of Fahland and Gierds [38].

**Business Process Management (BPM)** Sadiq and Orlowska [60] applied reduction rules to workflow graphs for the visual identification of structural conflicts (e.g., deadlocks) in business processes. Compared to process control graphs, we use a similar base representation, which we extend by pattern characteristics and data contracts. Furthermore, we use graph rewriting for optimization purposes. In Cabanillas et al. [61], the structural aspects are extended by a data-centered view of the process that allows one to analyze the life cycle of an object and check data compliance rules. This adds a view on the required data, but does not propose optimizations for the EIPs. The main focus is rather on the object life cycle analysis of the process.

**Semantic Program Correctness (SPC)** Semantic correctness plays a bigger role in the compiler construction and analysis domain. For example, Muchnick [62] provides an exhaustive catalog of optimizing transformations and states that the proof of the correctness of rewritings must be based on the (execution) semantics, Nielson [63] provides semantic correctness proofs using data-flow analysis, while Cousot [64] provides a general framework for designing program transformations by analyzing abstract interpretations. Although far simpler than general programming language transformations, our translation of IPCGs to timed DB-nets with boundaries can be seen as a concretization in the sense of an abstract interpretation, thus giving a similar notion of semantic correctness.

**Analysis and Optimization Structures (AOS)** Transformation techniques for optimization have been employed in compiler construction, e.g., for parallel [65] or pipeline processing [66], where dependence graph representations become especially useful. For example, Kuck et al. [67] construct dependence graphs with output, anti, and flow dependencies as a foundation for optimizing transformations. These kinds of dependence graphs were also used by Bohm et al. [35]; however, they are "linearized" in the form of our pattern contracts. This makes the decision of the optimization "local" and does not require dependence graph abstractions like intervals [68] or scoping [69]. More recently these techniques have been applied to business process optimization by Sadiq [60], Niedermann et al. [16; 17], or via reductions to process tree structures [70] with incremental transformations [71]. In our case the scope of the analysis is a local match of pattern contracts.

**Graph Transformations (GT)** Similar to our approach, graph transformations have been used in related domains, e.g., for formalizing parts of the BPMN semantics by Dijkman et al. [72], who specify the execution semantics as graph rewrites. Conformance is checked experimentally and verification is left for future work. For the optimizations, we use the same visual notation and double-pushout rule application approach. However, our execution semantics are given as timed DB-nets and can be formally analyzed.
Figure 38: Material replicate scenario simulation

## 10 Conclusions

This work addresses an important shortcoming in EAI research, namely the lack of means for responsible or correct integration pattern compositions and for the application of changes, e.g., as optimization strategies, which preserve structural and semantic correctness, and it thus ends the informality of descriptions of pattern compositions and optimizations (cf. Q1-Q3). We approached the questions along a responsible pattern composition and optimization process (ReCO for short), and started by compiling catalogs of integration pattern characteristics as well as optimization strategies from the literature. We then developed a formalization of pattern compositions in order to precisely define optimizations as pattern contract graphs. Then we extended the timed DB-nets formalism, covering integration pattern semantics, into timed DB-nets with boundaries, which resemble the contracts in the pattern graphs, and defined a mechanism to interpret the pattern graphs by them. With the resulting formal framework, we proved the correctness of all defined optimizations, which we evaluated on data sets containing in total over 900 real-world integration scenarios, and two brief case studies. The responsible pattern composition part of ReCO was then studied for two integration processes down to the execution semantics, essentially showing ReCO from a modeling perspective. We conclude that the formalization and optimization of integration processes in the form of integration pattern compositions -- using pattern contract graphs -- are relevant even for experienced integration experts (conclusions 1-2), with interesting choices (conclusions 3-4, 6), implementation details (conclusions 5, 10) and trade-offs (conclusions 7-9). In the two additional case studies, we showed the suitability of our interpretation of pattern contract graphs in the newly defined timed DB-nets with boundaries for pattern compositions (conclusions 11, 13), model complexity considerations (conclusions 12, 14) and desirable extensions like automatic translation (conclusion 15).
2302.12064
Shadows and photon rings of regular black holes and geonic horizonless compact objects
The optical appearance of a body compact enough to feature an unstable bound orbit, when surrounded by an accretion disk, is expected to be dominated by a luminous ring of radiation enclosing a central brightness depression typically known as the shadow. Despite observational limitations, the rough details of this picture have been now confirmed by the results of the EHT Collaboration on the imaging of the M87 and Milky Way supermassive central objects. However, the precise characterization of both features - ring and shadow - depends on the interaction between the background geometry and the accretion disk, thus being a fertile playground to test our theories on the nature of compact objects and the gravitational field itself in the strong-field regime. In this work we use both features in order to test a continuous family of solutions interpolating between regular black holes and horizonless compact objects, which arise within the Eddington-inspired Born-Infeld theory of gravity, a viable extension of Einstein's General Relativity (GR). To this end we consider seven distinctive classes of such configurations (five black holes and two traversable wormholes) and study their optical appearances under illumination by a geometrically and optically thin accretion disk, emitting monochromatically with three analytic intensity profiles previously suggested in the literature. We build such images and consider the sub-ring structure created by light rays crossing the disk more than once and existing on top of the main ring of radiation. We discuss in detail the modifications as compared to their GR counterparts, the Lyapunov exponents of unstable nearly-bound orbits, as well as the differences between black hole and traversable wormholes for the three intensity profiles.
Gonzalo J. Olmo, João Luís Rosa, Diego Rubiera-Garcia, Diego Sáez-Chillón Gómez
2023-02-23T14:39:25Z
http://arxiv.org/abs/2302.12064v2
# Shadows and photon rings of regular black holes and geonic horizonless compact objects

###### Abstract

The optical appearance of a body compact enough to possess an unstable bound orbit, when surrounded by an accretion disk, is expected to be dominated by a luminous ring of radiation enclosing a central brightness depression known as the shadow, a picture fully backed up by the recent results of the EHT Collaboration. The characterization of both features - ring and shadow - depends on the interaction between the background geometry and the accretion disk, thus being a fertile playground to test our theories on the nature of compact objects and the gravitational field itself in the strong-field regime. In this work we use both features in order to test a continuous family of solutions interpolating between regular black holes and horizonless compact objects, which arise within the Eddington-inspired Born-Infeld theory of gravity, a viable extension of Einstein's General Relativity (GR). To this end we consider seven distinctive classes of such configurations (five black holes and two traversable wormholes) and study their optical appearances under illumination by a geometrically and optically thin accretion disk, emitting monochromatically with three analytic intensity profiles previously suggested in the literature. We build such images and consider the sub-ring structure created by light rays crossing the disk more than once and existing on top of the main ring of radiation. We discuss in detail the modifications as compared to their GR counterparts, the Lyapunov exponents of nearly unstable orbits, as well as the differences between black holes and traversable wormholes for the three intensity profiles. In addition we consider the properties of the shadow region and link them to the calibrated expectations by the EHT Collaboration.

## I Introduction

### Multimessenger quantum gravity era

In the last few years, we have witnessed the beginning of a new era in gravitational physics triggered by the leap in the quantity, quality, and variety of observational data from different probes. This newly born discipline relies on several messengers (photons, neutrinos, cosmic rays and gravitational waves) from different sources, consequently being dubbed as multimessenger astronomy [1]. Equipped with its power, a new window is open to access energy scales beyond those attainable by particle accelerators, allowing one to probe deeper into phenomenological clues of the long-sought quantum theory of gravity. On a more conservative basis, it provides an opportunity to test the nature and properties of compact objects (neutron stars and black holes alike [2]) as well as to verify the consistency of the cosmological concordance model [3]. Traditionally, the community has addressed the quantum gravity problem via either fundamental approaches - mostly string theory and loop quantum gravity - or via effective implementations - modified theories of gravity [4; 5]. Within the latter one also finds model-agnostic approaches, in which one either builds ad hoc specific solutions with the desired properties, or considers parameterized deviations from General Relativity (GR) solutions [6; 7], hoping that both post-Newtonian and strong-field limit observations will allow one to sufficiently constrain their coefficients. This procedure hopes to extract useful lessons on the viable candidates to supersede GR expectations in its strong-field regime [8].
Either way, a pressing topic in the community is how to bring putative Planck-scale effects associated to quantum gravity into scales that are directly testable by multimessenger astronomy, e.g. horizon or photon sphere scales (see [9; 10] for some ideas in this regard). On the theoretical front, every such theory should have something to say about the issue of space-time singularities [11], namely, the unavoidable loss of causal determinism and predictability of GR caused by the geodesic incompleteness within some of its most physically relevant solutions: black holes and early cosmological evolution.

### The metric-affine proposal

Among the pool of frameworks and proposals to extend GR, this work is anchored in the hypothesis that allowing for a larger flexibility in the basic blocks of our gravitational theories, understood as a manifestation of a space-time imbued with geometrical properties, may contain some clues on how to deal with the issue of space-time singularities in (semi-)classical gravity. Such blocks are the metric - defining the notion of local distances, areas and volumes - and the affine connection - defining parallelism and thus appearing in the definition of covariant derivatives. Disentangling them as _a priori_ independent entities opens up the world of the _metric-affine_ foundation of gravity, in which an affine connection is split into its curvature, torsion, and non-metricity pieces (see [12] for how to build gravitational theories based on each piece), while the field equations of the theory are obtained by independent variations of the action with respect to metric and connection. This needs to be accompanied by a recipe on how to connect the (curvature/energy/size) scales at which this conceptual modification may become relevant with the GR-world at the scales which have been tested so far. It turns out that the physics of solid state systems harbours useful lessons on how a continuum, macroscopic, regular geometrical-type world may emerge from a discrete, microscopic world filled with irregularities or _defects_. The latter are any type of imperfection rupturing the array distribution of individual atoms in real crystals, which arise as point-like (interstices/vacancies), one-dimensional (dislocations), or two-dimensional (textures) defects, and which unavoidably exist within any real material. The presence of any such defects breaks the possibility of a pure Riemannian description in the transition to its continuum geometrical limit, requiring a larger flexibility of the encompassing geometry: non-metricity is associated to the point-like defects while torsion accounts for dislocations (check the book [13] for a broad discussion of this connection). Moreover, defects have _dynamics_, i.e., they are able to move and interact, even to recombine with other defects to form larger ones or to annihilate each other. This way, the metric-affine approach provides a natural implementation of an underlying _quantum foam_ world yielding an emergent continuum space-time geometry embedded (in general) with curvature, non-metricity and torsion [14]. When translated into the gravitational context, it provides a natural connection between the above fundamental geometrical objects and the implementation of symmetry principles under unified gauge-type frameworks [15].
Furthermore, it turns out that the metric-affine picture may resolve the problem of space-time singularities within black hole scenarios under different choices for the gravitational and matter sectors and without any violation of the energy conditions [16; 17; 18; 19]. It is thus natural to explore the phenomenological consequences of these theoretically well-grounded models.

### Shadow and photon ring observations

The main aim of this work is to start a programme based on the systematic analysis of shadows in ultra-compact objects within the metric-affine models mentioned above and their singularity-free solutions, and their capability to compete in the interpretation of real images with the canonical GR-based compact objects. In this sense, the observation in 2019, by the Event Horizon Telescope (EHT) [20], of the so-called "shadow" of the central supermassive object at the center of the M87 galaxy, followed in 2022 by a similar observation at the center of our own Milky Way Galaxy (Sagittarius A\({}^{*}\) [21]), is allowing the field of ray-tracing and the analysis of photon trajectories in the vicinity of ultra-compact objects to blossom to the best of its potential [22]. The EHT observations are founded on the well-tested idea that light trajectories near a massive body, which follow null geodesics in the corresponding space-time geometry, experience large deflections when approaching the critical points of its effective potential [23]. The trajectories for which this deflection turns formally infinite can thus be dubbed (though this terminology is far from being universal) as the _critical curve(s)_, the photon sphere being the corresponding spherically symmetric limit. This allows light rays to revolve several times around the central object before being released to asymptotic infinity to generate, on the observer's screen, a thin photon ring, consisting of strongly lensed radiation appearing on top of the direct emission generated by the disk itself. Even though one would intuitively expect such a critical curve and its associated thin photon ring to define the outer edge of the central brightness depression - the shadow - [24], it turns out that the location of such an edge also depends on the interaction between the geometry of the compact object and the physics of the accretion disk providing the main source of illumination, and is therefore also critical in defining the optical appearance of the object [25]. Two main features of the disk contribute to this: its optical thickness (whether it is transparent to its own radiation or not), and its geometrical shape: infinitesimally thin, spherical, or any other intermediate thickness. The results of [26] consistently argue that, no matter how thick the disk may be as long as it is not completely spherical, the apparent size of the shadow can be strongly reduced from the size of the critical curve down to a minimum inner shadow limit defined by the geometry of the object (basically a gravitationally lensed and red-shifted image of the event horizon [27]). Furthermore, the photon ring itself in this situation is decomposed into an infinite number of self-similar rings with exponentially decreasing contributions to the total luminosity and approaching the inner shadow limit [28]. Finally, when the background geometry is generalized to include rotation - i.e.
the Kerr solution - the critical curve degenerates into a photon shell of unstable bound geodesics [29], but the rough details of this picture hold (though their description becomes significantly more involved). Such photon rings are a trademark of a given geometry, thus potentially harbouring a way to make robust tests of the Kerr hypothesis [30; 31; 32; 33]. While EHT observations may be successfully reproduced with a Kerr black hole supplied within General Relativistic Magneto-Hydrodynamical (GRMHD) simulations of an accretion disk model [34], many works in the literature have sought modifications to GR canonical black holes via the addition of new fields [35; 36; 37; 38; 39] and hairy black holes [40; 41], horizonless compact objects such as naked singularities [42; 43], black bounces [44; 45], boson stars [46; 47; 48; 49], rotating [50] and asymmetric wormholes [51; 52], as well as modified black holes beyond GR within Gauss-Bonnet [53], asymptotic safety [54], noncommutative geometry [55], Einstein-Aether [56], Horndeski theory [57; 58], quadratic gravity [59], or braneworlds [60], to mention a few.

## II Background geometries

### Geometry features

We are interested in static, spherically symmetric geometries described in the set of spherical coordinates \((t,x,\theta,\phi)\) by a line element of the form \[ds^{2}=-A(x)dt^{2}+B(x)dx^{2}+r^{2}(x)d\Omega^{2}, \tag{1}\] where \(A\left(x\right)\) and \(B\left(x\right)\) are well-behaved functions of \(x\), \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}\) is the surface element on the 2-sphere, and the notation \(r^{2}(x)\) allows for a non-monotonic behavior of the radial function. Note that one can always perform a coordinate transformation to reabsorb the function \(B(x)\) into the radial coordinate via \(B(x)dx^{2}=dy^{2}\), resulting in a space-time with only two independent metric functions, as demanded by the spherical symmetry of the system. However, given the theoretical properties of the configurations considered in this work and the advantages brought by working with the explicit form of \(r^{2}(x)\) in some specific models, we shall preserve the former description for the moment. Indeed, we are interested in charged spherically symmetric spacetimes corresponding to solutions of a certain metric-affine theory of gravity dubbed as Eddington-inspired Born-Infeld gravity (EiBI) (the best account of this theory can be found in the review paper [61], and we refer the reader there for an enhanced discussion of its properties), with action \[\mathcal{S}_{EiBI}=\frac{1}{\kappa^{2}\epsilon}\int d^{4}x\left(\sqrt{|g_{\mu\nu}+\epsilon R_{\mu\nu}(\Gamma)|}-\lambda\sqrt{-g}\right)+\mathcal{S}_{m}(g_{\mu\nu},\psi_{m}) \tag{2}\] Here \(\kappa^{2}=8\pi G\) is Newton's constant in suitable units, \(\epsilon\) is a (small) parameter with units of length squared, \(R_{\mu\nu}(\Gamma)\) is the Ricci tensor of the affine connection \(\Gamma\equiv\Gamma^{\lambda}_{\mu\nu}\), \(g\) is the determinant of the space-time metric \(g_{\mu\nu}\), vertical bars denote a determinant too, \(\lambda\) is a constant parameter, \(\mathcal{S}_{m}\) is the matter action, and \(\psi_{m}\) collectively represents the matter fields. In the limit \(|R_{\mu\nu}|\ll\epsilon^{-1}\) the theory boils down to GR plus a cosmological constant term given by \(\Lambda=\frac{\lambda-1}{\epsilon}\) (from now on we set \(\lambda=1\) for asymptotically flat solutions), and the next-to-leading order term corresponds to quadratic gravity corrections.
As we are working in the metric-affine approach, after applying the variational principle to the action, the corresponding field equations are obtained as two independent sets, for the metric and for the affine connection; nonetheless, the latter can be removed in favour of additional contributions of the matter fields to the dynamics of the EiBI theory. This is a general feature of a large class of metric-affine gravities (see [62] for an overview) in which deviations from GR dynamics are fuelled by new terms associated to the matter fields relevant only in the strong-field regime, whereas weak-field limit tests are naturally satisfied and we just need to be concerned with the corrections near the matter sources. When coupled to a standard electromagnetic (Maxwell) field, the action (2) leads to a line element of the form (1) with analytical functions characterizing it. The latter were first found in Ref. [63] and later discussed in detail in Ref. [64], and we shall take here the main ingredients needed for our analysis below: the parameters characterizing the line element are the Schwarzschild radius \(r_{S}=2M\), the charge radius \(r_{Q}^{2}=\frac{\kappa^{2}Q^{2}}{4\pi}\) (in this work a system of geometrized units for which \(G=c=1\), which implies that \(\kappa^{2}=8\pi\), and consequently \(r_{Q}^{2}=2Q^{2}\), is employed), and an EiBI parameter conveniently rewritten as \(\epsilon=-2l_{\epsilon}^{2}\neq 0\) (so we shall work in the negative branch of \(\epsilon\) only) encoding the deviations from GR predictions. In this framework, the corresponding metric functions are given by \[A(z)=\frac{1}{\Omega_{+}}\left[1-\frac{1+\delta_{1}G(z)}{\delta_{2}z\Omega_{-}^{1/2}}\right]\quad;\quad B(z)=\frac{1}{A(z)\Omega_{+}^{2}}, \tag{3}\] with the following definitions and conventions: the objects \(\Omega_{\pm}\), such that the inequality \(g_{tt}\neq g^{xx}\) holds, are written in a dimensionless form via the introduction of a new variable \(z=r/r_{c}\), where the constant \(r_{c}\) is a normalised length-scale given by \(r_{c}^{2}=l_{\epsilon}r_{Q}\), leading to: \[\Omega_{\pm}=1\pm\frac{1}{z^{4}}; \tag{4}\] As for the function \(G(z)\) characterizing the metric, it can be written in terms of an infinite series expansion as \[G(z)=-\frac{1}{\delta_{c}}+\frac{\sqrt{z^{4}-1}}{2}\left(f_{3/4}(z)+f_{7/4}(z)\right), \tag{5}\] where the \(f_{\lambda}(z)={}_{2}F_{1}\left[\frac{1}{2},\lambda,\frac{3}{2},1-z^{4}\right]\) are hypergeometric functions; the constants \(\delta_{i}\) introduced previously are defined as \[\delta_{1}=\frac{1}{2r_{S}}\sqrt{\frac{r_{Q}^{3}}{l_{\epsilon}}}\quad;\quad\delta_{2}=\frac{r_{c}}{r_{S}}; \tag{6}\] and the constant \(\delta_{c}\approx 0.572069\) emerges from the integration of the field equations to guarantee the compatibility of the asymptotic and central expansions of the metric. Finally, the non-monotonic function \(r^{2}(x)\) is obtained via \(x^{2}=r^{2}\Omega_{-}^{1/2}\), which can be solved exactly and provides, in the dimensionless notation \(z\equiv r/r_{c}\), the following solution: \[z^{2}(x)=\frac{x^{2}+\sqrt{x^{4}+4}}{2}, \tag{7}\] where the coordinate \(x\) ranges in the interval \(x\in(-\infty,+\infty)\), whereas the radial function \(r\) satisfies \(r\geq r_{c}\). The relationship between \(r\) and \(x\) can also be written in a differential form as (note that an implicit \(r_{c}\) factor is present here) \[\frac{dx}{dz}=\pm\frac{\Omega_{+}}{\Omega_{-}^{1/2}}. \tag{8}\] The spherically symmetric geometry described by Eqs.
(3) to (7) may seem a bit involved, but is actually nothing but an extension of the usual Reissner-Nordstrom (RN) solution of GR via \(l_{\epsilon}\)-corrections. Such corrections are important only in the strong-field regime, whereas in the weak-field regime (large distances, \(r\gg r_{c}\)), the metric components behave as \[g_{tt}\approx-g^{rr}\approx-1+\frac{1}{\delta_{2}z}-\frac{\delta_{1}}{\delta_{2}z^{2}}+\mathcal{O}\left(\frac{1}{z^{4}}\right), \tag{9}\] which, upon a restoration of the usual radial, mass, and charge quantities, corresponds precisely to the RN geometry. Let us recall that RN configurations feature horizons located at radii \(r=r_{\pm}\) given by \[r_{\pm}=M\pm\sqrt{M^{2}-Q^{2}}. \tag{10}\] In such a case, if \(M^{2}>Q^{2}\) the geometry has two horizons, the inner horizon at \(r=r_{-}\), and the event horizon at \(r=r_{+}\), both converging into a single (degenerate) horizon for \(M^{2}=Q^{2}\), while if \(M^{2}<Q^{2}\) the space-time features a naked singularity. These latter configurations are physically problematic as they display a geodesic incompleteness character (visible to external observers) and a resultant lack of predictability and causal determinism, barred by cosmic censorship conjecture arguments and also by their incompatibility with current observations. The most dramatic deviation in the geometry considered here from its GR counterpart is the behaviour of the radial function \(r^{2}(x)\). Indeed, while in the usual RN solutions this function is trivial, for this geometry it is given by Eq.(7). In the branch \(l_{\epsilon}^{2}>0\), the radial function \(r(x)\) does not approach zero in the interior region of the geometry; instead, it reaches a spherical surface of minimum area \(S=4\pi r_{c}^{2}\) at \(x=0\). As one approaches this surface from the asymptotically flat region (say) \(x>0\), the ingoing sets of geodesics do not converge to a focal point, but instead re-expand to a new asymptotically flat region of the geometry with negative \(x<0\) values (note that \(r^{2}(x)\) remains positive in the two pieces of the space-time; in other words, \(x\in(-\infty,+\infty)\) but \(r^{2}\geq r_{c}^{2}\), or \(z\geq 1\)). This mechanism, canonically interpreted in the literature as a _wormhole structure_, allows in the present case for the restoration of the geodesic completeness of the geometry. Note that this structure is present for the whole spectrum of mass and charge of the solutions considered, provided that \(l_{\epsilon}^{2}>0\) and, perhaps more strikingly, despite the fact that curvature scalars generically diverge at the wormhole throat \(r=r_{c}\) (for more details, see below). However, for the sake of shadow/photon ring images, the wormhole structure plays a marginal role, particularly in those configurations featuring an event horizon. In such cases, it is more convenient to use Eq.(8) to cast the line element in Eq.(1) as \[ds^{2}=-A(r)dt^{2}+\frac{dr^{2}}{A(r)\Omega_{-}(r)}+r^{2}d\Omega^{2}, \tag{11}\] recalling that the radial coordinate \(r\) is still bounded as \(r\geq r_{c}\), where \(r=r_{c}\) corresponds to the wormhole throat. When simulating the image of a shadow, whenever the throat is hidden behind an event horizon, the presence of a throat is irrelevant; however, the theory also encompasses horizonless compact objects and in such cases the simulations extend down to the throat (in this work we do not consider flows of light rays crossing the wormhole throat into our local universe patch).
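As a numerical sanity check of the above formulas, the following Python sketch evaluates \(G(z)\) of Eq. (5) with scipy's hypergeometric function and assembles \(A(z)\) and \(B(z)\) of Eq. (3). It is only illustrative: the helper names and sample parameter values are ours, and we take \(r_{c}^{2}=l_{\epsilon}r_{Q}\) as quoted above.

```python
# A sketch (ours, not the authors' code) evaluating the EiBI metric functions
# of Eqs. (3)-(7); parameter values are illustrative, with M = Q = 1 and
# l_eps^2 = 1/4 as used for the figures of this paper.
import numpy as np
from scipy.special import hyp2f1

DELTA_C = 0.572069  # integration constant quoted below Eq. (6)

def eibi_metric(z, M=1.0, Q=1.0, l_eps=0.5):
    rS, rQ = 2.0 * M, np.sqrt(2.0) * Q        # r_S = 2M, r_Q^2 = 2Q^2
    rc = np.sqrt(l_eps * rQ)                  # r_c^2 = l_eps * r_Q
    d1 = np.sqrt(rQ**3 / l_eps) / (2.0 * rS)  # Eq. (6)
    d2 = rc / rS
    # Eq. (5): G(z) built from the hypergeometric pieces f_{3/4}, f_{7/4}
    G = (-1.0 / DELTA_C
         + 0.5 * np.sqrt(z**4 - 1.0) * (hyp2f1(0.5, 0.75, 1.5, 1.0 - z**4)
                                        + hyp2f1(0.5, 1.75, 1.5, 1.0 - z**4)))
    om_p, om_m = 1.0 + z**-4, 1.0 - z**-4     # Eq. (4)
    A = (1.0 - (1.0 + d1 * G) / (d2 * z * np.sqrt(om_m))) / om_p  # Eq. (3)
    return A, 1.0 / (A * om_p**2)

# Far from the throat (z >> 1) A(z) should approach the RN expansion, Eq. (9):
print(eibi_metric(50.0)[0])
```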
### Classes of configurations

There are several solutions of interest depending on the relative values of the mass, the charge, and the parameter \(l_{\epsilon}^{2}\) that defines the theory. Recall that the zeroes of \(g^{rr}\), i.e., the solutions of \(g^{rr}(r_{\pm})=0\), provide the radii of the horizons (if any), \(r=r_{\pm}\), of every configuration; however, finding the explicit expressions for the radii \(r_{\pm}\) is impossible for the metric above. Nonetheless, it is possible to determine analytically the number of horizons of each configuration via a combined analysis of the behaviour at the throat with the asymptotic behaviour given in Eq.(9), by considering the possible number of crossings with the surface \(g^{rr}(r_{\pm})=0\). In this sense, taking a series expansion of the metric components in the region near the wormhole throat, \(z=1\), one obtains \[g_{tt} \approx \frac{1-\delta_{1}/\delta_{c}}{4\delta_{2}\sqrt{z-1}}-\frac{1}{2}\left(1-\frac{\delta_{1}}{\delta_{2}}\right)+\mathcal{O}(z-1)^{1/2}, \tag{12}\] \[g^{rr} \approx -\frac{1-\delta_{1}/\delta_{c}}{\delta_{2}\sqrt{z-1}}+2\left(1-\frac{\delta_{1}}{\delta_{2}}\right)+\mathcal{O}(z-1)^{1/2}. \tag{13}\] This way, we see that the leading-order terms in Eqs.(12) and (13) determine both the nature (time-like or space-like) of the central region of the configurations near the wormhole throat, and the number and type of horizons, as controlled by the ratio \(\delta_{1}/\delta_{c}\). Consequently, the configurations are split into three different general types:

* Regular Schwarzschild-like black holes: When \(\delta_{1}<\delta_{c}\) the central region is space-like and a single non-degenerate event horizon is present. Thus, the wormhole lurking in the innermost region of the geometry behind the event horizon is not visible to external observers.
* Regular Reissner-Nordstrom-like solutions: When \(\delta_{1}>\delta_{c}\) the central region is time-like and the global structure of the solution resembles the RN black hole geometry, splitting into three different possibilities:
  * Regular two-horizon Reissner-Nordstrom-like solutions: Two non-degenerate horizons (inner and event horizon, respectively) appear on each side of the geometry depending on a fairly complex interplay between \(r_{S}\), \(r_{Q}\) and \(l_{\epsilon}\).
  * Regular extreme Reissner-Nordstrom-like black holes: This is the limiting case of the previous situation, in which both horizons merge into a degenerate one.
  * Regular horizonless Reissner-Nordstrom-like solutions: In such a case the horizons are absent and the solution describes a horizonless compact object. While the metric describing this object resembles the overcharged RN solution, this solution features a traversable wormhole with a throat radius \(r=r_{c}\) instead of a naked singularity.
* Regular, finite-curvature (FC) solutions: When \(\delta_{1}=\delta_{c}\) the leading-order coefficients in Eqs.(12) and (13) are constant. A consequence of this is that all curvature scalars turn out to remain finite at the throat (and everywhere). These solutions also split into three sub-cases:
  * Regular, finite-curvature traversable wormholes: When \(\delta_{2}>\delta_{c}\) no horizon is present and this configuration represents a traversable wormhole.
  * Regular, finite-curvature non-traversable wormholes: When \(\delta_{2}<\delta_{c}\) a horizon is present, rendering the wormhole non-traversable.
  * Regular, finite-curvature transition case: When \(\delta_{2}=\delta_{c}\) the horizon is exactly located at the throat itself.
This case should thus be considered as non-traversable as well.

It should be emphasized that all three types of configurations above, along with their several sub-classes, are regular according to the geodesic completeness criterion, which is the main concept ingrained in the singularity theorems [11]. This is a consequence of the presence of the wormhole throat at \(r=r_{th}=r_{c}\), which in the present cases allows for the extensibility of every (time-like and null) geodesic able to reach the region beyond \(x=0\) to the other side of the throat, as explicitly verified in [64]. This happens despite the curvature scalars generically diverging at the wormhole throat (unless \(\delta_{1}=\delta_{c}\), as already stated), since this feature does not apparently entail the presence of any pathological behaviour, as has been shown through several routes, e.g. using tidal forces upon geodesic congruences plus causal transmission between the constituents of a body [65], scattering of scalar waves off the wormhole [16], and completeness of accelerated observers' paths [66]. The absence of singularities in these solutions implies that, as opposed to the transition from RN black holes to naked singularities in GR (above \(Q^{2}/M^{2}=1\)), the cosmic censorship conjecture is unnecessary, as the transition from black hole solutions to horizonless traversable wormholes, e.g. by changing the ratio \(\delta_{1}/\delta_{c}\), does not uncloak any singularity. This ratio can be more conveniently expressed in terms of a critical cubic-charge-to-quadratic-mass ratio \(R_{1}\) of the form \[R_{1}=\frac{Q^{3}}{M^{2}},\qquad R_{1}^{c}=Cl_{\epsilon}, \tag{14}\] where \(R_{1}^{c}\) is the critical value of the ratio \(R_{1}\) and we defined the constant \(C=\frac{16\delta_{c}^{2}}{2^{3/2}}\approx 1.8513\). Thus, RN-like configurations require \(R_{1}>R_{1}^{c}\), Schwarzschild-like ones satisfy \(R_{1}<R_{1}^{c}\), while FC solutions live on the line \(R_{1}=R_{1}^{c}\). For the RN configurations, the transition between two-horizon black holes and horizonless compact objects cannot be found analytically for generic values of the parameters of the problem. As opposed to that, one can extract such information in the particular cases in which \(R_{1}=R_{1}^{c}\), since here the transition between the single-horizon and naked configurations occurs at \(\delta_{2}=\delta_{c}\), which can also be expressed as a critical charge-to-quadratic-mass ratio \(R_{2}\) of the form \[R_{2}=\frac{Q}{M^{2}},\qquad R_{2}^{c}=\frac{C}{2l_{\epsilon}}, \tag{15}\] where \(R_{2}^{c}\) is the critical value of the ratio \(R_{2}\), so that naked configurations require \(R_{2}>R_{2}^{c}\) in addition to the constraint on \(R_{1}^{c}\) above, while a horizon is present otherwise. The space of configurations classified this way is depicted in Fig. 1 for the choice of EiBI parameter \(l_{\epsilon}^{2}=1/4\) in the plane \(\{M,Q\}\), where the divide between Schwarzschild-like and RN-like configurations is marked by the blue curve \(R_{1}=R_{1}^{c}\), while the configurations with finite curvature correspond to those located _on_ the blue curve. For the latter, the presence/absence of an event horizon depends on the value of its ratio \(R_{2}\) as compared to the critical one \(R_{2}^{c}\), i.e., above or below the red curve.
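The classification above can be summarized in a few lines of code. The sketch below is our illustration (function and variable names are ours): it applies Eqs. (14) and (15) to decide the type of a configuration \(\{M,Q\}\) for a given \(l_{\epsilon}\), and reproduces, e.g., the Schwarzschild-like character of the \(\{M=1,Q=0.5\}\) configuration used later in Table 1.

```python
# A small sketch (ours) applying the ratios of Eqs. (14)-(15):
# R1 = Q^3/M^2 against R1c = C*l_eps and, on the R1 = R1c line,
# R2 = Q/M^2 against R2c = C/(2*l_eps).
C = 1.8513  # C = 16*delta_c**2 / 2**1.5, as quoted below Eq. (14)

def classify(M, Q, l_eps, tol=1e-9):
    R1, R1c = Q**3 / M**2, C * l_eps
    # exact finite-curvature cases sit on the line R1 = R1c; a loose
    # tolerance is needed when feeding rounded tabulated values
    if abs(R1 - R1c) > tol:
        return "RN-like" if R1 > R1c else "Schwarzschild-like"
    R2, R2c = Q / M**2, C / (2.0 * l_eps)   # Eq. (15)
    return "FC horizonless" if R2 > R2c else "FC with horizon"

print(classify(1.0, 0.5, 0.5))   # Schwarzschild-like (cf. Sch in Table 1)
print(classify(1.0, 1.1, 0.5))   # RN-like (cf. RN0 in Table 1)
```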
## III Black hole imaging via null trajectories

### Null geodesic equations

Let us consider a null geodesic \(g_{\mu\nu}k^{\mu}k^{\nu}=0\) with \(k^{\mu}=\dot{x}^{\mu}\) a null vector representing the photon's wave number, and where a dot denotes a derivative with respect to the affine parameter. For the line element given in Eq.(1) this equation reads (restricting the motion to the equatorial plane \(\theta=\pi/2\) without any loss of generality due to spherical symmetry) \[-A\dot{t}^{2}+B\dot{x}^{2}+r^{2}(x)\dot{\phi}^{2}=0. \tag{16}\] The spherical symmetry of the system also allows one to introduce two conserved quantities, \(E=A\dot{t}\) and \(L=r^{2}(x)\dot{\phi}\), corresponding to the energy and angular momentum per unit mass, respectively. With these definitions the previous equation can be rewritten as (a factor \(L^{2}\) is re-absorbed by a redefinition of the affine parameter) \[AB\dot{x}^{2}=\frac{1}{b^{2}}-V(r), \tag{17}\] where we have introduced the impact parameter \(b\equiv L/E\), while the effective potential \(V\left(r\right)\) is given by the expression \[V(r(x))=\frac{A(x)}{r^{2}(x)}. \tag{18}\] In the process of generating images, it is more convenient to make use of the previously defined conserved quantities to rewrite the geodesic equation as the variation of the azimuthal angle with respect to the coordinate \(x\) as \[\frac{d\phi}{dx}=\mp\frac{b}{r^{2}(x)}\sqrt{\frac{AB}{1-\frac{Ab^{2}}{r^{2}(x)}}}. \tag{19}\] A light ray issued from the observer's screen with a given impact parameter \(b\) approaching the object experiences a variation of its azimuthal angle given by the expression above with a negative sign. At the turning point, the factor \(1-Ab^{2}/r^{2}(x)\) inside the square root vanishes, and the integration of the equation above continues with the positive sign. For the geometry of interest in this work, this equation reads \[\frac{d\phi}{dx}=\mp\frac{b}{\Omega_{+}r^{2}(x)}\,\frac{1}{\sqrt{1-\frac{Ab^{2}}{r^{2}(x)}}}. \tag{20}\] Under this form of the geodesic equation the factor \(\Omega_{+}\) varies between one (at asymptotic infinity) and two (at the wormhole throat), so its influence is minor at large distances, and will only manifest itself for the light rays flowing near the wormhole throat. The most important modifications with respect to GR solutions appear both via \(A(r(x))\), which is quantitatively and qualitatively different from the RN solution of GR, and via \(r^{2}(x)\), given its non-trivial behavior according to Eq.(7). Alternatively, one can also use Eq.(8) to rewrite the previous equation in terms of the radial coordinate \(r\) as \[\frac{d\phi}{dr}=\mp\frac{b}{\Omega_{-}^{1/2}r^{2}}\,\frac{1}{\sqrt{1-\frac{Ab^{2}}{r^{2}}}} \tag{21}\] where in this case one must add the condition that the radial function is bounded by \(r\geq r_{c}\). Now, \(\Omega_{-}\) varies between one and zero, but the latter is only attained at the wormhole throat, while the photon sphere radius stands far from that region (recall that we are not considering radiation flow across the wormhole throat in the cases in which the wormhole is traversable). Thus this feature will be mostly irrelevant for black hole configurations, but it will acquire relevance in the horizonless ones.

### Photon sphere and photon rings

A turning point \(r=r_{0}\) in a light ray trajectory happens when the right-hand side of Eq.(17) vanishes, which corresponds to \[b_{0}^{2}=\frac{r_{0}^{2}}{A_{0}} \tag{22}\] where \(A_{0}\equiv A(r_{0})\).
The smallest impact parameter, \(b_{c}\), for which such a turning point exists corresponds to the maximum of the effective potential, that is \[V(r_{ps})=\frac{1}{b_{c}^{2}},\quad\frac{dV}{dr}(r_{ps})=0,\quad\frac{d^{2}V}{dr^{2}}(r_{ps})<0. \tag{23}\]

Figure 1: The space of configurations for \(l_{\epsilon}^{2}=1/4\) as given by the critical ratios \(R_{1}^{c}\) and \(R_{2}^{c}\) defined in Eqs.(14) and (15), respectively, and depicted in blue and red. For \(R_{1}>R_{1}^{c}\) (\(R_{1}<R_{1}^{c}\)) we find RN-like (Schwarzschild-like) configurations. Among those with \(R_{1}=R_{1}^{c}\) (i.e. on the blue curve), those with \(R_{2}>R_{2}^{c}\) (i.e. above the red line) are finite-curvature, naked objects, while those with \(R_{2}<R_{2}^{c}\) have finite curvature everywhere but a single horizon. The point at which the \(R_{1}^{c}\) and \(R_{2}^{c}\) curves meet behaves as pure Minkowski.

The value \(b_{c}\) is dubbed as the _critical impact parameter_, while the location of the maximum, \(x=x_{ps}\) (or \(r=r_{ps}\) in the \(r\)-representation), is sometimes referred to as the critical curve, though it is (perhaps) more popularly known as the _photon sphere_ given the spherical symmetry of the system (in the rotating case, it forms instead an oblate finite-size region known as the photon shell). It corresponds to the locus of null unstable geodesics, i.e., light ray trajectories that asymptotically approach a critical point of the effective potential. Using the conditions above, from Eq.(18), the critical points are given by the solutions to the implicit equation \[r^{\prime}|_{x_{ps}}A(x_{ps})-\frac{1}{2}r(x_{ps})A^{\prime}(x_{ps})=0 \tag{24}\] where primes denote derivatives with respect to the variable \(x\). Alternatively, one can rewrite this equation in terms of derivatives with respect to the radial function itself using the relation (8) as \[A(r_{ps})-\frac{1}{2}r_{ps}\frac{dA}{dr}\Big{|}_{r_{ps}}=0 \tag{25}\] as long as \(dr/dx\neq 0\), i.e., everywhere except at the wormhole throat. We consider a setting in which the compact object is oriented face-on with respect to the asymptotic observer. A light ray issued from asymptotic infinity and approaching the compact object in a trajectory with an impact parameter \(b>b_{c}\) sees its trajectory gravitationally deflected with a given angle \(\delta\phi\) before reaching the turning point given by a zero of the denominator of Eq.(20), and afterwards is released back to asymptotic infinity. The closer the impact parameter is to its critical value \(b_{c}\), the larger the deflection angle becomes. Eventually, the bending is so large that the photon executes one or more half-turns around the compact object. These half-turns can be measured via the definition introduced in [67] \[n=\frac{\delta\phi+\pi}{2\pi}. \tag{26}\] Such a light ray crosses the equatorial plane just once if \(1/2\leq n<3/4\), where the lower bound corresponds to a completely non-deflected trajectory. These trajectories yield the _direct emission_ of the disk, which is the major contribution to the image of the compact object. Alternatively, and for the sake of this work, we shall use instead the number of crossings a light ray makes with the equatorial plane, labelled by an integer number \(m\geq 1\) such that \[\frac{m}{2}+\frac{1}{4}\leq n<\frac{m}{2}+\frac{3}{4} \tag{27}\] In this language the direct emission corresponds to \(m=0\) (i.e.
it departs from the disk itself instead of crossing it, and in such a case the lower bound in this expression is \(1/2\) instead), while highly-bent trajectories (\(m=1,2,\ldots\)) produce a thin bright ring of radiation in the optical image of the object, dubbed as the _photon ring_, which tracks the location of the critical curve in the sky. For values of the impact parameter \(b<b_{c}\), the light ray spirals down and reaches the event horizon. Due to this fact, a brightness depression is generically expected to be present in the central part of the image, at least in black hole solutions, usually known as the _shadow_. However, its outer edge does not necessarily coincide with the critical curve, since we have not yet taken into account the features of the accretion disk providing the main source of illumination, something we shall tackle in Sec. IV. In the usual RN solution of GR the critical impact parameter is given by the analytical expression \[b_{c}=\frac{3M\left(\sqrt{9M^{2}-8Q^{2}}+3M\right)-4Q^{2}}{\sqrt{2M\left(\sqrt{9M^{2}-8Q^{2}}+3M\right)-4Q^{2}}}, \tag{28}\] while the radius of the photon sphere is given by \[r_{ps}=\frac{3M+\sqrt{9M^{2}-8Q^{2}}}{2}. \tag{29}\] This implies that for any \(Q^{2}\leq(9/8)M^{2}\) the solution features a photon sphere which always lies outside the event horizon, if the latter exists, and thus it is accessible to photons. In particular, if the condition \(M^{2}<Q^{2}\leq(9/8)M^{2}\) is met, the solution features a photon sphere but not an event horizon: naked singularities in RN geometry may thus have a photon sphere. The modifications to the RN geometry introduced by the EiBI solutions modify these two relevant quantities but do not allow one to obtain an explicit expression. Nonetheless, a numerical analysis on a case-by-case basis allows one to find these quantities for any configuration of interest.

### On the photon rings

Both the critical curve and the shadow are relevant features in the theoretical characterization of black hole images. However, it is important to make a distinction between the critical curve/photon sphere and the photon rings: while the latter are observable, the former is _not_. Indeed, while the critical curve is a unique theoretical feature of the background geometry alone, the actual location, separation, and luminosity of the photon ring(s) depend also on the features of the accretion disk, mainly for two reasons. The first is the fact that the photon ring is actually an (infinite) composition of sub-rings corresponding to light rays that have executed \(m\) half-turns around the black hole, provided that a mild number of astrophysical assumptions on the disk hold: optical thinness (i.e. transparency to its own radiation) in the emission region, and departure from complete spherical shape (i.e. existence of gaps in the emission region) [26]. The second one is determined by the inner edge of the emission region: when the disk is spherically symmetric, the shadow completely fills the critical curve, whereas in thin or thick disk scenarios the shadow is strongly reduced (and further darkened) to an inner shadow [27]. In the latter case, if the emission is extended all the way down to the event horizon, the shadow is essentially a lensed image of the event horizon, though apparently augmented due to gravitational lensing effects.
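For the RN limit, Eqs. (28)-(29) can be cross-checked directly against a numerical maximization of the effective potential \(V(r)=A(r)/r^{2}\) of Eq. (18). The Python sketch below is ours; the sample values and helper names are illustrative.

```python
# A sketch (ours) cross-checking the RN closed forms of Eqs. (28)-(29)
# against a direct numerical maximization of V(r) = A(r)/r^2, Eq. (18).
import numpy as np
from scipy.optimize import minimize_scalar

def rn_analytic(M, Q):
    s = np.sqrt(9 * M**2 - 8 * Q**2)
    r_ps = (3 * M + s) / 2                                   # Eq. (29)
    b_c = (3 * M * (s + 3 * M) - 4 * Q**2) / np.sqrt(
        2 * M * (s + 3 * M) - 4 * Q**2)                      # Eq. (28)
    return r_ps, b_c

def rn_numeric(M, Q):
    A = lambda r: 1 - 2 * M / r + Q**2 / r**2
    r_plus = M + np.sqrt(M**2 - Q**2)                        # outer horizon
    res = minimize_scalar(lambda r: -A(r) / r**2,
                          bounds=(1.001 * r_plus, 20 * M), method="bounded")
    r_ps = res.x
    return r_ps, r_ps / np.sqrt(A(r_ps))   # b_c = 1/sqrt(V_max) = r_ps/sqrt(A)

print(rn_analytic(1.0, 0.5))   # ~(2.823, 4.968)
print(rn_numeric(1.0, 0.5))    # should agree to solver precision
```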
Under these two assumptions (optical thinness and non-spherical geometry of the disk), an infinite number of photon rings (\(m=1,2,\ldots\)) appear on the image, several possibly superimposed on top of the direct emission (\(m=0\)) and appearing inside the photon sphere radius.

### Size of the shadow

The results of the EHT Collaboration regarding simulated images of the supermassive object in the center of our Milky Way galaxy - Sgr A\({}^{*}\) - report the possibility of using the radius of the bright ring enclosing the object as a proxy of the shadow radius itself [68], defined as the apparent boundary of the gravitationally lensed image of the photon sphere, given by the simple relation (for a more detailed account of analytical studies of the shadow's size see e.g. [69]): \[r_{sh}=\frac{r_{ps}}{\sqrt{A(r_{ps})}}. \tag{30}\] This possibility is subject to two main requirements: i) the presence of a sufficiently numerous source of photons strongly lensed near the black hole event horizon; and ii) a (sufficiently) geometrically thick emission region which is optically thin at the EHT's wavelength operation window. While these bounds are only marginally affected by the details of the accretion flow beyond the two above, a calibration factor must nonetheless be introduced between the radius of the ring and that of the shadow, addressing the extent to which the former can be used as a proxy of the latter. This factor includes both theoretical and observational uncertainties on the modelling of the accretion flow and on the measurement of this bright radius, respectively. For a Schwarzschild black hole, the shadow radius in Eq.(30) yields (we restore units here) \(r_{sh}=3\sqrt{3}GM/c^{2}\), therefore subtending (in the small-angle approximation) an angular diameter \(\theta_{sh}=6\sqrt{3}GM/(c^{2}D)\), where \(D\) is the distance between the observer and the source. Since for Sgr A\({}^{*}\) the ratio \(M/D\) is known, the EHT Collaboration states that one can use these ingredients to find the following bounds on the shadow size \[4.54\lesssim r_{sh}/M\lesssim 5.22 \tag{31}\] at \(1\sigma\) and \[4.20\lesssim r_{sh}/M\lesssim 5.56 \tag{32}\] at \(2\sigma\). Note that the generalization of the previous bounds to extremal Kerr black holes modifies the result by a factor \(\lesssim 7\%\) down [22]. To approach these results in an agnostic way (i.e. without judging their validity with regard to their assumptions), we shall consider their compatibility with our modified charged geometries (also using as a reference point the usual RN configurations in GR). For this purpose, we shall analyze the dependence of the radius of the shadow on the charge (for a fixed gravity parameter \(l_{\epsilon}^{2}\) in the present case). This strategy was implemented in Ref. [70] for an overwhelming number of modified gravitational configurations inspired by modifications to either the matter fields or the gravitational sector, though the one considered in this work is not included there. The evolution of the critical curve radius and the shadow radius for both RN and EiBI solutions is depicted in Fig. 2 alongside the bounds above, where we have taken a reference value of \(l_{\epsilon}^{2}=1/4\) for convenience. In the usual RN case, for values of (taking \(M=1\)) \(Q\gtrsim 0.8\) the shadow size is incompatible with the \(1\sigma\) bound, and for \(Q\gtrsim 0.95\) with the \(2\sigma\) bound. This seems to discard naked RN singularities as viable models to describe the shadow of Sgr A\({}^{*}\).
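These RN thresholds are easy to reproduce from Eqs. (29)-(32) alone; the short sketch below is our illustration (with \(M=1\) as in Fig. 2), evaluating \(r_{sh}\) of Eq. (30) and testing it against the quoted Sgr A\({}^{*}\) bounds.

```python
# A sketch (ours) of the proxy test: r_sh = r_ps/sqrt(A(r_ps)) of Eq. (30)
# for the RN metric, compared with the Sgr A* bounds of Eqs. (31)-(32).
import numpy as np

def rn_shadow_radius(M, Q):
    r_ps = (3 * M + np.sqrt(9 * M**2 - 8 * Q**2)) / 2   # Eq. (29)
    A = 1 - 2 * M / r_ps + Q**2 / r_ps**2
    return r_ps / np.sqrt(A)                            # Eq. (30)

# Q = 1.05 is a naked RN case which still has a photon sphere
# (it satisfies Q^2 <= (9/8) M^2 for M = 1).
for Q in (0.0, 0.5, 0.8, 0.95, 1.05):
    r_sh = rn_shadow_radius(1.0, Q)
    in_1s = 4.54 <= r_sh <= 5.22
    in_2s = 4.20 <= r_sh <= 5.56
    print(f"Q={Q:.2f}: r_sh={r_sh:.3f}  1-sigma:{in_1s}  2-sigma:{in_2s}")
```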
As for the EiBI solutions, both the critical curve radius and the shadow radius are enhanced as compared to the RN solution, the size of the enhancement increasing with the charge, i.e., as the new gravitational dynamics encoded in the EiBI field equations are powered up by stronger contributions of the matter fields. In particular, this implies that constraints on the size of the shadow allow for slightly larger values of the electric charge in the EiBI version, and that this effect would become more pronounced for larger values of the EiBI parameter \(l_{\epsilon}^{2}\).

### Selected configurations for generation of images

In this section we shall fix the parameters of the configurations we are interested in working with. In this sense, the choice of \(l_{\epsilon}^{2}=1/4\) used above seems fair enough, since it allows for mild deviations from the GR solutions; nonetheless, it will also entail the presence of configurations which strongly deviate, from a qualitative and quantitative point of view, in the shape of their metrics and effective potentials, following our discussion above on the global structure of the configurations for every \(l_{\epsilon}^{2}\neq 0\). For this choice, the seven classes of configurations can be classified according to the ratios \(R_{1}\) and \(R_{2}\), as depicted in Fig. 1. The first ratio splits the Schwarzschild-like from the RN-like cases, with the finite-curvature case (including the limiting Minkowski-like one) corresponding to the curve \(R_{1}=R_{1}^{c}\) itself. For the latter configurations, with a fixed charge-to-(quadratic) mass ratio, values of the mass below the critical \(M_{c}\approx 0.6180\) yield horizonless solutions, while those with \(M>M_{c}\) feature a single horizon. The transition between both cases yields a critical charge \(Q_{c}\approx 0.7071\). Based on this scheme, we select the parameters \(\{M,Q\}\) characterizing the seven qualitatively different classes of configurations according to the number and type of their causal structure (i.e. horizons). These are reported in Table 1 alongside their corresponding values for the four quantities of interest in the generation of shadow images: the radius of the throat, the radius of the event horizon, the radius of the critical curve, and the value of the critical impact parameter. A typical shadow image is the one of a black hole (e.g. Schwarzschild) encircled by a critical curve driving light trajectories with more than one half-turn around the central object, and contributing to create photon rings on top of the direct emission. In our case this corresponds to the Sch, RN2, RN1e, and FC1 configurations of this Table, as depicted in Fig. 3 for the effective potential (blue, red, brown, and dashed orange curves, respectively). In addition we have a configuration, RN0, without event horizons but featuring a critical curve, a (stable) minimum of the potential (an anti-photon sphere), and an infinite slope of the potential at the center, all of them accessible to photons with impact parameters above the critical one (this is somewhat similar to the usual naked Reissner-Nordstrom configurations of GR [71] but, in contrast, RN0 harbours no singularities). This mimics previous (ad hoc) toy-models studied in the field, such as generalized black hole bounces [72], where their toy-ness comes from the fact that they are postulated a priori, as opposed to those found here, which arise naturally from the resolution of the field equations of the EiBI theory.
It should be mentioned that the presence of the pair of unstable/stable critical curves is a generic property of horizonless compact objects as long as they are compact enough [73], and carries the burden of being capable of triggering the development of non-linear instabilities [74], which would be pathological if the associated time scale is small enough (something to be analyzed on a case-by-case basis for every solution). We also consider a configuration, FC0, in which there is a critical curve, no horizon, but the potential is everywhere finite (extending down to the wormhole throat), and no stable anti-photon sphere is present. Finally, because the configuration FCt features a horizon located at the throat itself, we shall take it as a limiting case of the previous configurations, being of black hole type. Thus, our selection of configurations offers several qualitatively different types of potentials, whose net effect in terms of their shadow images we want to study here. Before going into that, it should be stressed that, in comparing these configurations with their GR-RN counterparts, the nature of the horizons for each class may change. For instance, the RN2 configuration (having two non-extreme horizons) corresponds to an extreme black hole in GR, while the RN1e configuration is a naked singularity in GR instead of an extreme black hole. As for the RN0, FCt, and FC0 configurations, their counterparts in GR do not have photon spheres, so the latter are not of interest to us in this work.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & Sch & RN2 & RN1e & RN0 & FC1 & FCt & FC0 \\ \hline \(M\) & 1 & 1 & 1 & 1 & 1 & 0.6180 & 0.55 \\ \hline \(Q\) & 0.5 & 1 & 1.0511 & 1.1 & 0.9745 & 0.7071 & 0.6542 \\ \hline \(r_{t}\) & 0.5946 & 0.8408 & 0.8621 & 0.8819 & 0.8301 & 0.7071 & 0.6801 \\ \hline \(r_{h}\) & 1.8751 & 1.3133 & 1.0511 & X & 1.3750 & 0.7071 & X \\ \hline \(r_{ps}\) & 2.8333 & 2.1433 & 1.9875 & 1.7683 & 2.2074 & 1.2170 & 1.0737 \\ \hline \(b_{c}\) & 4.9777 & 4.1166 & 3.9414 & 3.7244 & 4.1918 & 2.3754 & 2.0862 \\ \hline \end{tabular} \end{table} Table 1: The values of the features of interest for the generation of images for the chosen configurations, representative of all the possible sub-cases: Schwarzschild-like black holes (Sch), RN-like solutions with two (RN2), a single extreme (RN1e), or no (RN0) horizons, and finite-curvature (FC) solutions with a single horizon (FC1) or none (FC0), including the transition case (FCt). The parameters \(\{M,Q\}\) refer to the masses and charges of the configurations, while \(\{r_{t},r_{h},r_{ps},b_{c}\}\) correspond to the throat's radius, event horizon radius, critical curve radius, and critical impact parameter, respectively.

Figure 3: The effective potential for the seven configurations reported in Table 1: Sch (blue), RN2 (red), RN1e (brown), RN0 (green), FC1 (dashed orange), FC0 (dashed cyan) and FCt (dashed purple).

Figure 2: The critical curve location (left) as given by Eq.(25), and the shadow radius \(r_{sh}\) (right) as given by Eq.(30), for GR-RN solutions (blue and red curves, respectively), and for EiBI solutions with \(l_{\epsilon}^{2}=1/4\) (green curves), as a function of the charge \(Q\). The set of straight dashed (brown) and dotted (purple) lines corresponds to the bounds on the shadow's size at \(1\sigma\) and \(2\sigma\), respectively, as reported by the EHT Collaboration [68]. In these plots, \(M=1\).
## IV Shadow images

### Ray-tracing procedure

A ray-tracing procedure tracks all the light trajectories that reach the observer by integrating the gravitational deflection equation (21) backwards for a set of impact parameters, in order to find the location in the sky from which each such light ray came. In doing so, an important variable to store is the location in the equatorial plane (as a function of the impact parameter) at which each light ray crossed it. This is so because for those light rays whose impact parameter nears that of the photon sphere there may be more than one such crossing, labelled by a function \(r_{m}(b)\), where \(m=1,2,\dots\) belongs to a lensed image of order \(m\) of the source (for the sake of this work we favor this language over that of _lensing/photon ring emissions_ introduced in [67]), which in the main case of physical interest corresponds to an accretion disk (\(m=0\) being identified with the direct emission of the disk, as already mentioned). In order to produce shadow images of sufficient visual quality but within manageable computational times, we ray-trace a bunch of light rays from the observer's screen, taken to be located at a far-away distance of \(r=1000M\). We perform this integration in two steps. First, we cover a range of impact parameters between zero and three times that of the critical curve using \(\sim 500\) iterations. Secondly, we track the critical curve much more closely, performing an additional set of \(\sim 1500\) iterations within the range \(b_{c}(1-0.5\times 10^{-2})<b<b_{c}(1+0.5\times 10^{-2})\), in order to finely capture the quick variations of luminosity over the restricted range of impact parameter values of the photon rings \(m=1,2\). For the case of horizonless compact objects [see Sec. IV.5], in which the contributions of higher-order \(m>2\) trajectories can become non-negligible (RN0 and, to a lesser extent, FC0), we adapt this tracking of the critical curve in order to detect additional crossings of the light trajectories with the equatorial plane. In all cases, the transfer function \(r=r_{m}(b)\) storing the information on every crossing of the light ray with the equatorial plane is required to be sufficiently smooth after all iterations have been carried out. For turning points, where the denominator of the deflection angle diverges, we set a precision of \(1-b^{2}A(r)/r^{2}<10^{-15}\) to stop the integration running from asymptotic infinity towards this point, and select the opposite sign to continue the integration from there back towards asymptotic infinity. For trajectories below the critical impact parameter value, this part of the integration never takes place, and the integration continues instead until hitting the event horizon in the black hole cases, or the throat in the wormhole cases. We produce a list of the stored data on \(r_{m}\) for all trajectories \(m=0,1,2,\dots\) and build separately their corresponding contributions to the luminosity of the object, bearing in mind the gravitational redshift accumulated in the trip of the photon from source to observer [see Eq.(33) below]: an interpolation function guarantees the smoothness of the corresponding profiles. One then obtains a collection of three functions which are subsequently merged to produce the total intensity function \(I(b)\), after a number of additional effects are taken into account [discussed below in Sec. IV.2].
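A schematic transcription of this backward ray-tracing loop is sketched below. It is a minimal sketch under several assumptions: the metric function \(A(r)\) is supplied externally, the disk is face-on so that equatorial crossings occur each time the swept angle passes \(\pi/2+m\pi\), and a fixed radial step replaces the adaptive sampling and turning-point tolerance quoted above.

```python
import numpy as np

def trace_ray(b, A, r_min, r_obs=1000.0, dr=1e-3, m_max=2):
    """Backward-integrate the deflection equation
        dphi/dr = b / (r^2 * sqrt(1 - b^2 A(r) / r^2))
    from the observer towards the configuration, recording the radius at which
    the cumulative swept angle passes pi/2 + m*pi (the equatorial-plane
    crossings seen by a face-on disk). `r_min` is the event-horizon or throat
    radius at which absorbed rays are discarded."""
    crossings, r, phi, sign = [], r_obs, 0.0, -1.0
    while len(crossings) <= m_max:
        g = 1.0 - b**2 * A(r) / r**2
        if g <= 0.0 and sign < 0.0:        # turning point: reverse direction
            sign = +1.0
            r += sign * dr
            continue
        phi += dr * b / (r**2 * np.sqrt(max(g, 1e-15)))
        if phi >= np.pi / 2.0 + len(crossings) * np.pi:
            crossings.append(r)            # one more intersection with the disk
        r += sign * dr
        if r <= r_min or r >= r_obs:       # absorbed, or escaped back out
            break
    return crossings                       # samples of the transfer functions r_m(b)
```

Sweeping `b` over the two ranges described above and interpolating the recorded radii yields smooth transfer functions \(r_{m}(b)\), to be fed to the intensity construction of the next subsection.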
For the moment, let us mention that in the GLM3 model below these three contributions will typically appear in our body of simulations as isolated functions: quite a wide one for the direct emission \(m=0\), plus two additional peaked functions corresponding to the photon rings \(m=1,2\). In the GLM1/GLM2 models, these contributions will appear overlapped on top of the direct emission. These preliminary results conform to our initial conceptual expectations, and thus we can confidently proceed to elaborate the simulations at full resolution.

### Optical, geometrical and emission properties of the accretion disk

For the sake of our images we consider an optically and geometrically thin accretion disk. The former assumption means that the disk is transparent to its own radiation, and it is obviously needed for the purposes of this work, since it is the one allowing photons to circle the central object several times, raising the possibility of having photon ring(s) on top of the direct emission. The second assumption implies that the effective source of emission comes only from an infinitely thin surface extending down to a certain minimum radius (not necessarily the event horizon itself). Moreover, this must be supplemented with choices for the emissivity and intensity making up the disk (absorptivity is effectively taken to zero via the optically thin assumption above), which are linked to each other via a radiative transfer equation. Solving such an equation is, in general, only accessible via GRMHD simulations for a pool of assumptions on such quantities [75]. Here we cut the complexity of this process significantly by assuming a monochromatic emission in the disk's frame, i.e., \(I_{\nu}\equiv I(r)\). Due to gravitational redshift, in the frame of the asymptotic observer this transforms into \(I_{\nu_{a}}=A(r)^{3/2}I(r)\) via the line element (1). Integrating over the full spectrum of observed frequencies yields \(I^{ob}=\int d\nu_{a}\,I_{\nu_{a}}=A^{2}(r)I(r)\). In addition, one needs to bear in mind the subsequent intersections with the accretion disk, so one finally writes \[I^{ob}(b)=\sum_{m=0}^{i}\left.A^{2}(r)\,I(r)\right|_{r=r_{m}(b)} \tag{33}\] where \(i\) denotes the order of the photon ring beyond which the contributions to the luminosity become negligible (\(i\leq 2\) for black hole configurations). This leaves a single free function, \(I(r)\), to characterize the emission of the disk. This function should satisfy a minimal set of properties, such as smoothness and being mostly localized within a few Schwarzschild radii of the event horizon. Typically one would expect the disk's inner edge to extend all the way down to the event horizon, to match current observations by the EHT Collaboration. Nonetheless, this scenario makes it hard to observe the photon ring sub-structure since, due to its relative dimness, it is completely outshone by the contribution of the direct emission (we shall further comment on this later on).
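Eq. (33) can then be assembled directly from the transfer functions; a minimal sketch, reusing the hypothetical `trace_ray` above and leaving the emission profile \(I(r)\) (the free function just discussed) as an input:

```python
def observed_intensity(b, A, I, r_min, m_cut=2):
    """Observed intensity at impact parameter b, following Eq. (33): each
    equatorial-plane crossing r_m(b) adds the locally emitted intensity I(r),
    weighted by the redshift factor A(r)^2 obtained after integrating the
    monochromatic profile over observed frequencies."""
    radii = trace_ray(b, A, r_min, m_max=m_cut)
    return sum(A(r)**2 * I(r) for r in radii)
```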
To account for both scenarios, we consider a single emission profile given by the functional form proposed in [28] (hereafter referred to as the GLM models), namely \[I(r;\gamma,\mu,\sigma)=\frac{e^{-\frac{1}{2}\left[\gamma+\arcsinh\left(\frac{r-\mu}{\sigma}\right)\right]^{2}}}{\sqrt{(r-\mu)^{2}+\sigma^{2}}} \tag{34}\] where \(\{\gamma,\mu,\sigma\}\) are free parameters of the modelling, acting as follows: \(\gamma\) drives the rate of growth of the intensity profile from infinity down to its peak, \(\mu\) translates the intensity profile to a desired position, and \(\sigma\) controls the dilation (width) of the profile. This profile was originally devised to match the results of GRMHD simulations for rotating (Kerr) black holes in order to explore the structure of their photon rings, see e.g. [76], so we simplify the problem somewhat given the spherical symmetry of our setting. In particular, the parameter \(\mu\) is related in such models either to the innermost stable circular time-like orbit (ISCO) radius or to the inner horizon of a Kerr black hole, and we shall restrict these prescriptions here to the non-rotating and uncharged (Schwarzschild) case. This way, we end up with the following three models: \[GLM3: \gamma=-2,\mu=\frac{17M}{3},\sigma=\frac{M}{4} \tag{35}\] \[GLM1: \gamma=-\frac{3}{2},\mu=0,\sigma=\frac{M}{2} \tag{36}\] \[GLM2: \gamma=0,\mu=0,\sigma=\frac{M}{2} \tag{37}\] The intensity profiles of the GLM models are plotted in Fig. 4. The GLM3 model is aimed at probing the photon ring structure with a maximum of emission located near the ISCO radius of a Schwarzschild black hole: while such a radius obviously depends on the charge (in the RN case), and also on the constant \(l_{\epsilon}^{2}\) (in the present EiBI case), for the sake of comparing the photon ring images of a given geometry under the same disk conditions we shall take the same profile for all configurations (GR and EiBI alike), regardless of where exactly the ISCO is located for each of them. As for the GLM1 and GLM2 models, they probe the geometry all the way down to the event horizon or, in the naked case, to the wormhole throat. They are distinguished by the strength of their respective decay: GLM2 is the softer of the pair. This will have an impact on the distribution of their total luminosity. Finally, all models are normalized so that the maximum of their intensity equals one, to allow for a comparison on an equal footing. For future reference, we depict the application of these three models, under the conditions stated above, to the Schwarzschild geometry with \(M=1\) in Fig. 5. As expected, the optical appearance of the same geometry illuminated by different emission profiles may be very different, particularly at the level of the distribution of the luminosity over the direct and photon ring contributions, and of the visibility of the latter (one ring clearly seen in the GLM3 model, but not so in the GLM1/GLM2 models).

Figure 4: Intensity profiles \(I(r)\) as a function of the normalized radius \(r/M\) for the three GLM models considered in Eqs.(35), (36) and (37).

### Generation of images

Having our setting ready, we now proceed with the generation of images of our modified gravitational configurations. To this end, we systematically construct the optical appearance of the seven classes of configurations appearing in Table 1 based on the implementation of the intensity profile (33).
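The profile (34) and the parameter choices (35)-(37) translate directly into code (a sketch; the radial grid used for the normalization is an illustrative choice):

```python
import numpy as np

def I_GLM(r, gamma, mu, sigma):
    """Emission profile of Eq. (34)."""
    x = gamma + np.arcsinh((r - mu) / sigma)
    return np.exp(-0.5 * x**2) / np.sqrt((r - mu)**2 + sigma**2)

M = 1.0
GLM_PARAMS = {                       # Eqs. (35), (36) and (37)
    "GLM3": (-2.0, 17.0 * M / 3.0, M / 4.0),
    "GLM1": (-1.5, 0.0, M / 2.0),
    "GLM2": (0.0, 0.0, M / 2.0),
}

# normalize each profile so that its maximum intensity equals one, as in the text
r = np.linspace(1.0, 10.0, 2000)
profiles = {name: I_GLM(r, *pars) / I_GLM(r, *pars).max()
            for name, pars in GLM_PARAMS.items()}
```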
We proceed first with the black hole configurations, Sch, RN2, RN1e, FC1 and FCt, which are depicted in Figures 6, 7, 8, 9 and 10, respectively. On the top panel we depict the (normalized) image for the (from left to right) GLM3 (35), GLM1 (36) and GLM2 (37) emission models, while on the bottom panel we include the observed intensity profiles for the same models. There is nothing surprising in these plots as compared to our initial expectations and to what we have in the Schwarzschild case of Fig. 5: the intensity profiles of the GLM3 model display the typical lump of the direct emission, \(m=0\), with its maximum located near the (gravitationally redshifted) inner edge of the emission and, thanks to being extended over a wide range of impact parameter values, dominating the optical appearance of the object. Isolated (GLM3 model), or superimposed on it (GLM1/GLM2 models), there are the two narrow spikes associated to the \(m=1,2\) photon rings (the higher the \(m\), the more internal the location in the image and the sharper the spike). The degree of overlapping between the \(m=0\) and \(m=1,2\) emissions depends both on the gravitational configuration and on the emission model, though the latter has a much larger influence on it. This can be seen by going left-to-right along the sequence of images for a given gravitational configuration, and by comparing between the different figures at fixed GLM model. The degree of non-overlapping is maximized in the GLM3 model (left plots): this is clear in Fig. 6 for the Sch configuration, which is the most conservative extension of the usual GR-Schwarzschild/RN solution among these EiBI configurations, and which splits the three emissions into their individual constituents. This manifests in the image of the object as a wide ring of radiation enclosing two photon rings (the innermost of them, corresponding to \(m=2\), barely visible to the naked eye). The central brightness depression in this case thus fills all the central part of the image up to the \(m=2\) photon ring (since we are dismissing as negligible the contributions of higher-order photon rings to the total luminosity; see however Section IV.5). For other EiBI configurations, the two photon rings are still mostly visible, but the direct emission overlaps with the \(m=1\) ring, or with both of them, to some extent. Now, as we move towards the GLM1/GLM2 models, the direct emission completely dominates the image over the whole range of impact parameter values, with the two photon rings inserted on it; these manifest as the formation of a single spike in the GLM2 model, accompanied by a little bump at a slightly larger impact parameter value in the GLM1 model.

Figure 5: The optical appearance of a Schwarzschild black hole (top panel) in the GLM3 (35), GLM1 (36) and GLM2 (37) emission models (taking \(M=1\)) and the associated intensity for each of them (bottom panel), displaying the direct (\(m=0\)) and photon ring (\(m=1,2\)) emissions, respectively.
This provides a way of distinguishing the GLM1/GLM2 models: in the latter the \(m=1,2\) emissions blend to yield a single photon ring, while in the former there are two rings superimposed on top of the direct emission. On the other hand, in these GLM1/GLM2 models the size of the shadow is strongly reduced to its inner counterpart, in such a way that the non-zero brightness region lies well inside the critical curve. This size is essentially a gravitationally lensed image of the event horizon and, therefore, it has little to do with the GLM1/GLM2 models; it is instead heavily associated to the gravitational configuration setting the location of the event horizon. This effect is also present in those EiBI configurations which qualitatively deviate heavily from the Schwarzschild one, as in the FCt case of Fig. 10, where we see some radiation leaking off the region interior to the photon ring. In all cases, the photon ring(s) appear in the optical appearance of the black holes as a boost of luminosity between the outer edge of the direct emission and the inner shadow. We next head in this very direction to track this question in more detail.

Figure 6: The optical appearance of EiBI Sch configurations (top panel) in the GLM3 (35), GLM1 (36) and GLM2 (37) emission models, and the associated intensity for each of them (bottom panel), displaying the direct (\(m=0\)) and photon ring (\(m=1,2\)) emissions, respectively.

Figure 7: The optical appearance of EiBI RN2 configurations (top panel) in the GLM3 (35), GLM1 (36) and GLM2 (37) emission models, and the associated intensity for each of them (bottom panel), displaying the direct (\(m=0\)) and photon ring (\(m=1,2\)) emissions, respectively.

Figure 8: The optical appearance of EiBI RN1e configurations (top panel) in the GLM3 (35), GLM1 (36) and GLM2 (37) emission models, and the associated intensity for each of them (bottom panel), displaying the direct (\(m=0\)) and photon ring (\(m=1,2\)) emissions, respectively.

Figure 9: The optical appearance of EiBI FC1 configurations (top panel) in the GLM3 (35), GLM1 (36) and GLM2 (37) emission models, and the associated intensity for each of them (bottom panel), displaying the direct (\(m=0\)) and photon ring (\(m=1,2\)) emissions, respectively.

### Lyapunov exponents and photon ring relative luminosities

In terms of luminosities, for the GLM2 model the fact that the \(m=1,2\) emissions blend into a single photon ring provides a non-negligible boost in the maximum luminosity of this hybrid photon ring. Nonetheless, our code allows us to track individually the properties of each photon ring: locations, impact parameters and, more importantly for our purposes here, their corresponding luminosities as compared to their RN counterparts and also to that of the direct emission itself. A marker of this relevant feature is provided by the so-called Lyapunov exponents, which measure the instability scale for trajectories slightly displaced from the bound orbit, that is, \(r=r_{ps}+\delta r_{0}\) with \(\delta r_{0}\ll r_{ps}\).
This means that, in a spherically symmetric space-time of the type considered here, for a photon starting its trip at a certain \(\delta r_{0}\) away from the critical curve, after a certain coordinate time \(t\) has passed it will be located at a distance \[\delta r\approx e^{\lambda t}\delta r_{0} \tag{38}\] where \(\lambda\), measuring the growth rate of these radial perturbations in coordinate time, is dubbed in [74] as the _principal Lyapunov exponent_, and can be expressed as \[\lambda=\sqrt{\frac{\mathcal{V}^{\prime\prime}(r)}{2\dot{t}^{2}}}\Bigg{|}_{r=r_{ps}} \tag{39}\] which depends on the second derivative with respect to \(r\) (indicated by a prime) of a modified potential defined in our case as (recovering the one of GR for \(\Omega_{+}\to 1\)) \[\mathcal{V}(r(x))=\Omega_{+}^{2}\left(E^{2}-\frac{AL^{2}}{r^{2}}\right) \tag{40}\] for a static spherically symmetric geometry of the form (1). Using the conserved quantities of the system and the definitions introduced so far for the EiBI geometries considered in this work, one can write the second derivative of the potential defined in (40), in the \(x\)-representation, as (here \(A_{ps}\equiv A(r(x_{ps}))\)) \[\frac{d^{2}\mathcal{V}}{dx^{2}}\Big{|}_{x=x_{ps}}=\frac{L^{2}}{r^{4}(x_{ps})}\Omega_{+}^{2}(x_{ps})\left[2A_{ps}-r^{2}(x_{ps})A_{ps}^{\prime\prime}\right] \tag{41}\] or, alternatively, in the \(r\)-representation, as \[\frac{d^{2}\mathcal{V}}{dr^{2}}\Big{|}_{r=r_{ps}}=\frac{L^{2}}{r_{ps}}\Omega_{-}(r_{ps})\left[2A_{ps}-r_{ps}^{2}A_{ps}^{\prime\prime}\right] \tag{42}\] where \(x\)-derivatives can be traded for \(r\)-derivatives by using both the photon sphere condition and the frame transformation (8). Eq.(42) is in agreement with the formulae found in the literature [77] for general spherically symmetric space-times of the form (1), and in our case it entails modifications to the usual GR structure both via the explicit \(\Omega_{+}\) factor and via the new shape of the metric function \(A(r(x))\). The instability of the orbits is better understood (and better tuned to the potential observables that characterize the photon rings of the background geometry) when, instead of the coordinate time, it is expressed in terms of the increase in the number of half-orbits. Indeed, after \(m\) half-orbits, a given photon departing at a distance \(\delta r_{0}\) from the critical curve will be located at a radius \(\delta r\) away from it given by [78] \[\delta r=e^{\gamma m}\delta r_{0} \tag{43}\] Since the Lyapunov exponent \(\gamma\) controls the relative flux between successive rings, in the limit \(m\to\infty\) it provides a universal quantifier of the contribution of a given spherically symmetric geometry to the properties of the image [29], which is independent of the properties of the accretion disk. In particular, for the Schwarzschild case this exponent equals \(\gamma=\pi\), so that the relative flux decays with successive rings as \(e^{-\pi}\simeq 0.0432\) [24]: each such ring would thus be \(\sim 23.1\) times fainter than the previous one, a factor which we call its extinction rate.

Figure 10: The optical appearance of EiBI FCt configurations (top panel) in the GLM3 (35), GLM1 (36) and GLM2 (37) emission models, and the associated intensity for each of them (bottom panel), displaying the direct (\(m=0\)) and photon ring (\(m=1,2\)) emissions, respectively.
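As a quick numerical check of the bracketed combination entering Eqs. (41)-(42), one can evaluate it by finite differences for any metric function \(A(r)\); a minimal sketch (in the conventions above, a positive bracket renders \(\lambda\) real, i.e. the circular orbit is unstable):

```python
def potential_bracket(A, r_ps, h=1e-5):
    """The combination 2 A(r_ps) - r_ps^2 A''(r_ps) appearing in Eqs. (41)-(42),
    with A'' estimated by a central finite difference."""
    App = (A(r_ps + h) - 2.0 * A(r_ps) + A(r_ps - h)) / h**2
    return 2.0 * A(r_ps) - r_ps**2 * App

# Schwarzschild check (M=1, r_ps=3): A = 1 - 2/r gives exactly 2 > 0
print(potential_bracket(lambda r: 1.0 - 2.0 / r, 3.0))
```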
For the sake of this work we shall concentrate on the Lyapunov exponent between the photon rings \(m=1,2\) instead of the infinite-\(m\) limit, since these are the rings we hope to see in future interferometric projects [29]. Thus, we shall evaluate it according to \[\gamma_{2/1}=\log(l_{1}/l_{2}) \tag{44}\] where \(l_{1,2}=r_{1,2}-r_{ps}\) are the locations of the \(m=1,2\) photon rings with respect to the critical curve (since \(l_{2}<l_{1}\), the exponent is positive). In the Schwarzschild case, this number equals \(\gamma_{2/1}\approx 3.15075\), which amounts to a \(\sim 0.3\%\) difference from the exact number of the \(m\to\infty\) limit. In Table 2 we display this number (first row) for the reference GR black hole configurations, the Schwarzschild black hole, the two-horizon RN black hole, and the extreme RN black hole, together with its associated extinction rate (second row) between successive rings (i.e. its exponential version, as given by Eq. (43)). Note that, due to the presence of the exponential, this rate decays quickly with \(\gamma_{2/1}\): for instance, a value \(\gamma_{2/1}\approx 2.45\) yields an extinction half as severe, which will typically be the case for most configurations when we push the charge large enough. Similarly, in Table 3 we display the same numbers for the five black hole configurations of the EiBI case. In this case the Sch and RN2 configurations can be put into direct correspondence with their GR-RN (\(Q=0.5\) and \(Q=1\), respectively) counterparts. We see that the Lyapunov exponent is larger in those EiBI cases which can be put in direct correspondence with their GR counterparts, so the photon rings of the former are expected to be less luminous than those of the latter, an effect enhanced with an increase of the charge. Nonetheless, the Lyapunov exponent is a theoretical number whose value must be compared with the real luminosity of each photon ring (relative either to the direct emission or to each other) when specific emission models are switched on to illuminate the object. We thus find it convenient to compute the extinction rate of the \(m=1\) photon ring as compared to the direct emission (\(m=0\)), and the extinction rate of the \(m=2\) photon ring as compared to the \(m=1\) one, since the latter is a target of future interferometric projects [29].

\begin{table} \begin{tabular}{|c|c|c|c|} \hline & Sch & \(Q=0.5\) & \(Q=1\) \\ \hline \(\gamma_{2/1}\) & 3.15075 & 3.05215 & 2.2593 \\ \hline \(E_{2/1}\) & 23.35 & 21.16 & 9.57 \\ \hline \(E^{1}_{III}\) & 21.00 & 19.29 & 12.14 \\ \hline \(E^{2}_{III}\) & 28.63 & 26.19 & 13.09 \\ \hline \(E^{1}_{I}\) & 12.56 & 11.64 & 7.34 \\ \hline \(E^{2}_{I}\) & 24.73 & 22.52 & 10.78 \\ \hline \(E^{1}_{II}\) & 9.79 & 8.99 & 5.15 \\ \hline \(E^{2}_{II}\) & 23.42 & 21.25 & 9.50 \\ \hline \end{tabular} \end{table} Table 2: The Lyapunov exponent \(\gamma_{2/1}\) (taking \(M=1\)) corresponding to the theoretical luminosity ratio of the \((m=2)/(m=1)\) photon rings, the extinction factor \(E_{2/1}=e^{\gamma_{2/1}}\) associated to it, and the ratios of real luminosities \(E^{1}=I_{m=0}/I_{m=1}\) and \(E^{2}=I_{m=1}/I_{m=2}\) in the three emission models (subindices \(III\), \(I\), \(II\)), for the Schwarzschild black hole and two examples of charged (\(Q=0.5\)) and extreme (\(Q=1\)) RN black holes.
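In practice, Eq. (44) and its exponential extinction factor are evaluated directly from the ring locations returned by the ray-tracing step; a minimal sketch (the ring radii `r1` and `r2` are assumed to be extracted beforehand from the transfer functions):

```python
import numpy as np

def lyapunov_from_rings(r1, r2, r_ps):
    """gamma_{2/1} of Eq. (44) from the m=1,2 photon ring locations relative to
    the critical curve, together with the associated extinction factor e^gamma."""
    l1, l2 = r1 - r_ps, r2 - r_ps   # l2 < l1: the m=2 ring hugs the critical curve
    gamma = np.log(l1 / l2)
    return gamma, np.exp(gamma)

# Schwarzschild expectation: gamma_{2/1} ~ 3.15, i.e. an extinction of ~23
# between successive rings, close to the m -> infinity value e^pi.
```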
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & Sch & RN2 & RN1e & FC1 & FCt \\ \hline \(\gamma_{2/1}\) & 3.06288 & 2.56472 & 1.99167 & 2.62805 & 2.32797 \\ \hline \(E_{2/1}\) & 21.38 & 12.99 & 7.32 & 13.19 & 10.25 \\ \hline \(E^{1}_{III}\) & 19.37 & 12.80 & 11.44 & 13.38 & 9.85 \\ \hline \(E^{2}_{III}\) & 26.44 & 15.46 & 12.39 & 18.48 & 16.45 \\ \hline \(E^{1}_{I}\) & 11.73 & 8.33 & 7.41 & 8.53 & 8.10 \\ \hline \(E^{2}_{I}\) & 22.76 & 14.36 & 11.56 & 14.35 & 14.95 \\ \hline \(E^{1}_{II}\) & 9.06 & 6.13 & 5.24 & 6.27 & 6.00 \\ \hline \(E^{2}_{II}\) & 21.49 & 12.49 & 10.03 & 13.08 & 12.47 \\ \hline \end{tabular} \end{table} Table 3: The Lyapunov exponent \(\gamma_{2/1}\) (taking \(M=1\)) corresponding to the theoretical luminosity ratio of the \((m=2)/(m=1)\) photon rings, the extinction factor \(E_{2/1}=e^{\gamma_{2/1}}\) associated to it, and the ratios of real luminosities \(E^{1}=I_{m=0}/I_{m=1}\) and \(E^{2}=I_{m=1}/I_{m=2}\) in the three emission models (subindices \(III\), \(I\), \(II\)), for the five classes of EiBI black holes: Schwarzschild-like, the two-horizon and extreme RN-like, and the two single-horizon finite-curvature cases (FC1 and its limiting version FCt).

That is, we define \[E^{1}=\frac{I_{m=0}}{I_{m=1}}\quad;\quad E^{2}=\frac{I_{m=1}}{I_{m=2}} \tag{45}\] assuming that all three emission models (labelled by a subindex \(\{III,I,II\}\)) are normalized in their total intensity to unity. The first rate gives a measure of the degree of visibility of the first photon ring when superimposed with the direct emission, while the second rate provides a measure of how the Lyapunov exponent transforms into actual relative luminosity between successive photon rings. The combined analysis of both rates can actually probe regular geometries alternative to the GR ones in a way less dependent on the accretion disk uncertainties; see e.g. [9] for a discussion on this point. These observationally relevant numbers are displayed in Table 2 for the GR configurations and in Table 3 for the EiBI ones. In both tables we can compare the real extinction rates, as modelled by the emission profile of the disk, with the purely geometrical (Lyapunov) calculations. Several observations are relevant from these rates:
* The extinction rates are reduced with the increase of charge in both GR and EiBI configurations, an effect that is greatly enhanced the closer we are to the critical charge limiting the existence of black hole configurations (recall that, taking \(M=1\), this corresponds to \(Q=1\) in the GR case, but \(Q\approx 1.0511\) in the EiBI one). This was already expected on the grounds of the computation of the Lyapunov exponent.
* The extinction rates, at equal mass and charge, are slightly higher in the EiBI case than in the GR one for every emission model, meaning that the corresponding rings are comparatively fainter. This could also have been foreseen from both the computation of the Lyapunov exponent and the reduction of the shadow's size in the EiBI case.
* The accretion disk models GLM3/GLM1/GLM2 infuse comparatively larger luminosities into both the \(m=1\) and \(m=2\) photon rings, at every value of the charge and in both the GR and EiBI configurations, as we move along the sequence; indeed, the photon rings of the GLM2 model can be up to twice as luminous as those of the GLM3 model.
* The relevant extinction ratio \(E^{2}\) between the \((m=2)/(m=1)\) photon rings is also consistent with the trend above: in the GR case this rate is lower in the GLM2 model than in the GLM1 one, and the latter is lower than in the GLM3 one. In all cases the EiBI values are higher than in the GR case, thus entailing a lower luminosity contrast between the photon rings than in the GR configurations.

The above numbers quantify the fine details of the set of Figures 6, 7, 8, 9 and 10, extracted from the optical appearance (top panels) of each configuration (normalized intensity in each figure) for the emission models GLM3 (left), GLM1 (middle) and GLM2 (right), alongside the observed intensity profile (bottom panels). For low values of the charge, the GLM3 model allows one to see the peaks associated to each photon ring clearly isolated from the direct emission, though this neat separation is blurred as the charge is increased, with the result that the photon rings end up overlapping with the direct emission in some cases. In the GLM1 and GLM2 models the direct and photon ring emissions are always overlapped, but this was already expected and assumed as an inherent consequence of the extension of the accretion disk down to the event horizon of the black hole configurations. Because of this fact, the corresponding optical appearance of each geometry in combination with every emission model depends not only on the latter (which is the main actor in distinguishing between images) but also on the former, since there are obvious differences as we go deeper into the sequence \(\mathrm{Sch}\rightarrow\mathrm{RN2}\rightarrow\mathrm{RN1e}\) for a fixed emission model. In particular, photon rings are clearly more visible within this sequence, though this effect is already present in the original RN solutions. The FC1/FCt geometries merit similar comments, though here we do not compare to their counterpart images in GR. The bottom line of the discussion above is that the features of black hole images are highly entangled between the modifications to the background geometry by the EiBI corrections (at equal mass and charge) and the properties of the accretion disk itself, whose separation is a delicate and persistent issue for every black hole configuration [79]. The single clean modification is that photon rings in the EiBI case are slightly less luminous, at equal parameters, than their GR counterparts, while at the same time the shadow's size is enhanced. We conjecture this effect to be the result of a relative weakening of the gravitational interaction in the EiBI case, which is also behind the ability of the model to remove the troubles with geodesic incompleteness in these solutions, while at the same time affecting the trademark of the curved space-time around them via the features of its photon rings. Since the presence of an event horizon prevents one from extracting more physical consequences of the regularity of these space-times, to tackle this challenge we next consider the two scenarios in which these configurations get rid of their event horizons, corresponding to the RN0/FC0 configurations.

### Higher-order photon rings of horizonless compact objects

Horizonless compact objects have long been studied in the literature due to the conceptual and operational differences they pose as compared to black hole space-times, and which may have repercussions at the observational level via several avenues (a detailed review can be found in [2]).
In order to stand as firm contenders against the black hole hypothesis, such objects are required to satisfy a number of physical admissibility conditions, including suitable gravitational collapse mechanisms, stability against linear perturbations, and being supported by a physically sound theory of the gravitational and matter fields. While not all such horizonless objects need to be compact enough to have a critical curve, the ones we are going to consider in this section, corresponding to the RN0 configurations, do have one. For the sake of shadow images, the fact that in this case light trajectories can access the full internal part of the effective potential (given the absence of an event horizon) allows for further contributions to the photon ring emission on top of those originating near the critical curve. Indeed, such new contributions are associated to those rays circulating between the location of the maximum of the effective potential, its minimum (dubbed the _anti-photon sphere_) and the infinite potential slope [recall Fig. 3, green curve]. The net effect is that the corresponding object will show a _multi-ring structure_, namely, a cascade of additional non-negligible higher-order photon ring contributions (for a fully general analytic analysis of this problem we refer the reader to Refs. [80, 81]). These appear thanks to the enhancing effect of the region between the photon sphere and the internal part of the potential: every photon with low enough impact parameter hits the infinite potential slope, allowing for further transits of the light trajectory around the photon sphere on its way back, this way collecting additional luminosity1. As a consequence, the expectation based on the theoretical Lyapunov exponent of an exponential suppression of the luminosity of successive photon rings with higher \(m\) does not necessarily hold here, and one must study this problem on a case-by-case basis for each combination of background geometry and accretion disk emission (for the canonical case of Kerr-Newman black holes see the analysis of their multi-ring structure in [85]).

Footnote 1: A different, but somewhat related, scenario corresponds to those compact objects having two accessible photon spheres, which can be realized in both black holes [82] and symmetric [83] and asymmetric [84] wormholes. In such a case, the potential well between both photon spheres acts as a source of additional higher-order trajectories.

In Fig. 11 we depict the result of our simulations for the images of the RN0 configuration appearing in Table 1, where we consider light trajectories going up to the \(m=5\) intersection with the disk, since we verified this is the last trajectory that provides visible enough associated rings. As for the number of iterations in tracking the critical curve, however, we limit them to 1000, in order to shorten computation times and because here we are not so much interested in precisely determining the corresponding luminosities as in the general visual appearance. Note that in this case, despite the fact that no horizon is present, these trajectories cannot reach the wormhole throat radius, since the potential grows without bound at a radius larger than the throat's.
Figure 11: The optical appearance of EiBI RN0 configurations (top panel) in the GLM3 (35), GLM1 (36) and GLM2 (37) emission models, and the associated intensity for each of them (bottom panel), displaying the direct (\(m=0\)) and photon ring (\(m=1,2,3,4,5\)) emissions, respectively, via a number of peaks \(M>m\).

The multi-ring structure is clearly visible (top figures), while the observed intensity (bottom figures) reveals that many new narrow peaks have appeared in the internal region of the impact parameter space, particularly visible in the GLM3 model. Indeed, the one-to-one correspondence between trajectories crossing the equatorial plane \(m\) times and the number of photon rings \(M\) is broken, with \(M>m\). This was expected on the grounds of the lessons learned from some toy models having a potential "well" in which these new rings can originate [82; 86; 87], qualitatively similar to the one found in these configurations. These new rings appear as sharp images in the optical appearance (top figures) of the object, with a strong dependence on the emission model. Indeed, only in the GLM3 model do most of these rings appear as images separated from the direct emission; in the GLM1/GLM2 models they instead end up overlapped with the direct emission (given that the latter extends all the way down to the potential slope) in quite a complex way, and the luminosity infused into each of them depends strongly on how the intensity of each model decays with distance. Despite their relative faintness, we recall that the sharpness of these rings grants them dominance in the Fourier spectrum of interferometric detections and, as such, they are potentially observable (should they happen to be there) in future probes [29]. On the other hand, the actual central brightness depression of the image is sharply reduced as compared to the black hole images, being bounded by the innermost of the higher-order photon rings. Therefore, the uncloaking of the wormhole throat unmistakably distinguishes these configurations from their black hole counterparts, both in the multi-ring structure and in the drastic reduction of the actual shadow's size. It should be stressed that features similar to those discussed here can be found in the overcharged RN configurations of GR. However, in such a case one is faced with the insurmountable difficulty of the presence of a space-time singularity at the center, as given by its incompleteness character, since purely radial (\(b=0\)) null geodesics can reach it in finite affine time without any possibility of further extension. In the present case the presence of the wormhole throat removes this theoretical problem, but it changes the optical appearance of the object little, since such GR-incomplete geodesics contribute nothing to the image: the differences are again tiny in terms of the relative luminosities between successive photon rings. Note also that in the present case the fact that the wormhole throat lies beyond the infinite potential slope means that the throat cannot actually be reached by any null geodesic (save for purely radial ones), since each of them finds a turning point; from that point of view, this wormhole configuration could be regarded as non-traversable.
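For completeness, a schematic way of counting the number of rings \(M\) in such images is to raise the crossing cut of the hypothetical sketches above to \(m=5\) and count the peaks of the resulting intensity curve (the grid, the height threshold and the inputs `b_c`, `A`, `I` and `r_min` are illustrative assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

# observed intensity over the image plane, keeping up to m=5 disk crossings
b_grid = np.linspace(1e-3, 3.0 * b_c, 4000)
I_b = np.array([observed_intensity(b, A, I, r_min, m_cut=5) for b in b_grid])

# the number of detected peaks is the ring count M; M > m signals the
# multi-ring structure discussed in the text (the threshold is a choice)
peaks, _ = find_peaks(I_b, height=0.01 * I_b.max())
print(f"rings detected: M = {len(peaks)}")
```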
This kind of object, having both a photon and an anti-photon sphere, has been strongly criticised in the literature on the grounds that the stable light ring (the minimum of the effective potential) may trigger the development of an instability by trapping massless modes that eventually grow large enough so as to back-react upon the background geometry and destabilize the whole configuration [88; 74]. This is the case, for instance, of a family of compact objects satisfying the above conditions that has been widely studied in the literature, namely, ultra-compact bosonic stars. The recent results of [89], using full non-linear numerical simulations of two families of such boson stars with both photon and anti-photon spheres, show that the final fate of this instability is that the objects either collapse into a black hole or migrate to configurations without critical curves, i.e., without photon rings in their images. In our case, the analysis of this problem would entail considering both pieces of the wormhole (i.e. the regions \(x>0\) and \(x<0\)) and taking its non-trivial topology into account, something beyond the scope of this work.

### A finite-curvature traversable wormhole

In the FC0 configurations the effective potential is everywhere finite while also displaying a critical curve [recall Fig. 3, cyan curve]. But now the wormhole throat is not only uncloaked but also reachable by any light ray with small enough impact parameter. However, the process of generation of images is very similar to that of the black holes, since every trajectory passing over the maximum of the effective potential will be swallowed up by the wormhole throat instead of the event horizon, so the optical appearance and observed intensity are also very similar; see the top and bottom panels of Fig. 12. Things would be very different should we consider radiation flowing from the other side of the wormhole throat into our local space-time patch, since in such a case there could be trajectories with impact parameters lower than the critical one coming to color the otherwise black central region, which would constitute strongly different shadow images (we shall address this upgrade in forthcoming works on the subject). This way, this kind of object, where the stable critical curve is absent, could circumvent the potentially pathological long-lived modes associated to it [74]. Furthermore, it evades the theorem of Ref. [73] by which critical curves (photon spheres) in horizonless ultra-compact objects always come in pairs (i.e. one unstable and another stable), via a simple argument by which every such regular object must have the same topological charge associated to critical curves (with opposite signs for stable/unstable curves) as Minkowski space-time, namely zero, in order to be created by a dynamical formation process. The two features that conspire to produce this effect for these configurations are i) the topologically non-trivial character of the space-time via the presence of a wormhole throat (see the discussion at the end of [89]) and ii) the fact that the metric at the throat is finite and larger than one (which has the side-effect of removing the presence of curvature divergences). Admittedly, these configurations are somewhat "artificial", since any small modification to the cubic charge-to-quadratic mass ratio (14) would make them suddenly "leap" into any of the other families of configurations, in which the second effect is absent.
Nonetheless, these configurations are also very suggestive, since they naturally implement Wheeler's _geons_ idea [90], namely, self-sustained sourceless gravito-electromagnetic entities with both their charge and mass realized from a topological origin [63]. This topic deserves a much deeper analysis, which we leave for future work as it also goes beyond the scope of this paper.

## V Conclusion and Discussion

In this work we have studied the optical appearance of several families of modified black hole and horizonless configurations when illuminated by a thin accretion disk. These geometries are found within a theoretically well-motivated and observationally viable extension of General Relativity dubbed Eddington-inspired Born-Infeld gravity, formulated in a metric-affine context and coupled to a standard electromagnetic (Maxwell) field. As such, its counterpart in GR would be the Kerr-Newman solution, downgraded to the Reissner-Nordstrom solution in the spherically symmetric limit, the case of interest for the sake of this work. These families of configurations can be split into seven: five of black hole type and two horizonless ones, depending on two ratios between their masses and charges. This way, setting a mild value for the gravitational theory's parameter in the branch for which all configurations develop a wormhole throat in their innermost region (hidden behind the event horizon in the black hole cases), we have selected representative samples of each family of configurations. With this in hand, we have proceeded to the analysis of their shadow and photon ring images. First we have developed the null geodesic formalism, suitably adapted to the peculiarities of these geometries, with particular attention to light trajectories that can turn several (half-)times around each configuration. When the disk is transparent to its own radiation and furthermore geometrically thin, this creates a series of self-similar photon rings (characterized by an integer number \(m\geq 1\) counting the number of intersections with the equatorial plane of the disk) on top of the direct emission of the disk itself (\(m=0\)), with exponentially decreasing contributions to the total luminosity of the object. We have characterized such rings according to the theoretical luminosity provided by the Lyapunov exponent associated to the instability scale of nearly bound trajectories, and to the real luminosity after switching on three models of monochromatic emission characterized by a single distance-dependent function, previously introduced in the literature. One of them is designed to probe the structure of such photon rings (limited to \(m=1,2\), which are the only ones providing visible contributions to the optical appearance of all black hole configurations) when they are (mostly) separated from the direct emission, while the other two extend all the way down to the horizon, with a different dependence on the radial distance, and mix the two photon rings with the direct emission of the disk.

Figure 12: The optical appearance of EiBI FC0 configurations (top panel) in the GLM3 (35), GLM1 (36) and GLM2 (37) emission models, and the associated intensity for each of them (bottom panel), displaying the direct (\(m=0\)) and photon ring (\(m=1,2\)) emissions, respectively.
The main conclusion of our analysis is that the photon ring and shadow images of these modified black hole configurations closely resemble their GR counterparts, with the properties of the accretion disk emission being the major actor casting these images. Two main qualitative differences arise, though. First, the relative luminosity of the photon rings, particularly the ratio between the \(m=2\) and \(m=1\) ones, is slightly fainter than in their GR counterparts, an effect enhanced as larger values of the charge are considered. Secondly, the shadow's radius of Sgr A\({}^{*}\) inferred by the EHT Collaboration, via some calibration factors, suggests that the shadow radii of the present configurations are also slightly larger than those of their GR counterparts. As for the two horizonless (traversable wormhole) configurations, one of them develops a stable photon sphere in its effective potential in addition to the unstable one, allowing for a multi-ring structure built from the \(m=3,4,5\) trajectories, which yields an optical appearance of the corresponding object that strongly departs from the GR images. Criticisms of these configurations can be levelled at the potential instability that the stable photon sphere may trigger, which requires a deeper analysis. The second configuration resembles the GR images, but it has some theoretical niceties regarding the absence of a stable photon sphere (and its apparent evasion of the hypotheses of [73]) and its interpretation as a geonic object. This work must be seen as a first step towards a systematic analysis of shadow and photon ring images of black holes and ultra-compact configurations within a metric-affine framework of gravitational extensions of GR, where the singularity removal of the configurations of the latter can be achieved by two different mechanisms (see [18] for a discussion on this point). Since the shape of the modified gravitational configurations introduces several additional difficulties, both in the analytic analysis of their properties and in the numerical integration of their light trajectories (in particular via the corresponding computational times), the disk's modeling is quite simplified. In order to obtain more realistic images, though, our analysis must be upgraded in various directions:
* The geometry of real accretion disks is neither infinitely thin nor completely spherical (see however [91; 92]), so an upgrade to disks of some non-negligible thickness would be appropriate.
* The optical thinness of the disk need not hold at all frequencies, something disregarded in our analysis, which assumes a monochromatic, everywhere thin emission. Furthermore, our assumption of a single frequency in the disk's frame contrasts with the assumptions of the EHT observations, which are made at a constant wavelength of \(1.3\) mm in the observer's frame [68]. As for the choice of the \(I(r)\) function, we relied on a family of profiles (the GLM ones) extracted from the matching between semi-analytic models and the outcomes of GRMHD simulations, but this is a fast-evolving field in which there is plenty of room for improvement.
* For a maximally rotating black hole the shadow's size is up to \(\sim 7\%\) smaller than that of its spherically symmetric counterpart [22].
While this small difference could be ignored by arguing that there are other aspects of the modelling that influence the images more, the fact is that the photon sphere degenerates into a photon shell of unstable geodesics, such that light rays can now perform more involved trajectories in the radial and angular directions, which will also leave their imprints in the corresponding images (via the shapes and luminosities of their photon rings). Now, finding rotating black hole configurations (at least in analytical form) is a major problem in every modified theory of gravity; in the present case it can be shortcut via the application of a _mapping method_ allowing one to find infinite classes of rotating black holes with reasonable ease [93]: the rotating counterpart of the spherically symmetric configurations discussed in this paper was actually found in [94]. Whether the integrability of the geodesic equation still holds in such a case is a major question of interest for carrying out the analysis of their images.
* Our current images are displayed at face-on orientation, while the EHT observations of M87 display an inclination of the rotation axis with respect to the observer's line of sight of about 17 degrees [8]. Nonetheless, one needs to go to larger inclinations in order to find strong deviations in the shape of the photon rings as compared to the face-on case, basically via a bending of this shape across a section of the image plane. The implementation of such a feature in the current configurations is problematic due to computational times. As mentioned, the transition from GR configurations to EiBI ones may increase this time by a factor \(\sim 20-50\) due to the presence of special functions in the definition of the background geometry metric components (something that may worsen as we move towards other metric-affine configurations found in the literature), upon which the incorporation of inclination would further enlarge such times.

Individually, every such upgrade requires hard work beyond what we have at the moment. We can roughly split them into two kinds: modifications to the background geometry and modifications to the ray-tracing code. For the former, the community has accumulated a good deal of knowledge on metric-affine geometries, particularly in the spherically symmetric case, and we also recently hinted at how to generalize these geometries to the axially symmetric case [93] via the development of new powerful methods [62]. For the sake of this work we chose perhaps the most workable model of metric-affine gravity found so far, but there are others available in the literature, such as \(f(R)\) [18] or quadratic gravity [95], coupled to both non-linear electromagnetic fields and to different classes of (anisotropic) fluids, many of which are also successful at resolving space-time singularities. For the latter, the implementation has varying degrees of difficulty: the extension of the geometry of the disk to be completely spherical and the implementation of the inclination of the disk are reasonable enough, and more refined intensity profiles are certainly possible; thick (but not completely spherical) disks and light trajectories in axially symmetric background geometries remain a daunting challenge. To conclude, this work can be seen as a first step towards understanding what modifications can be expected in the shadow and photon ring images of singularity-free geometries of metric-affine type.
In implementing this programme we have already faced the widespread and well-known problem that the roles played by the background geometry and by the geometrical, optical, and emission properties of the accretion disk are highly entangled in determining the properties of such images. Most geometrical modifications to the GR predictions tend to be too mild to be easily recognizable via a characterization of their images (as in the black hole cases considered here), while those introducing qualitatively new features (such as the RN0 horizonless configurations) may be in outright contradiction with the observed images. A thorough characterization of what metric-affine gravities have to say about such images, in different settings for the background geometry and the accretion disk modelling, seeking observational discriminators with respect to the GR predictions, is thus the most relevant long-term goal of our collaboration on this subject, on which we hope to report soon enough.

## Acknowledgements

JLR acknowledges the European Regional Development Fund and the programme Mobilitas Pluss for financial support through Project No. MOBJD647, and project No. 2021/43/P/ST2/02141 co-funded by the Polish National Science Centre and the European Union Framework Programme for Research and Innovation Horizon 2020 under the Marie Sklodowska-Curie grant agreement No. 94533. DRG is funded by the _Atraccion de Talento Investigador_ programme of the Comunidad de Madrid (Spain) No. 2018-T1/TIC-10431. This work is supported by the Spanish National Grants FIS2017-8440-C2-1-P, PID2019-108485GB-I00, PID2020-116567GB-C21 and PID2020-117301GA-I00 funded by MCIN/AEI/10.13039/501100011033 ("ERDF A way of making Europe" and "PGC Generacion de Conocimiento"), and the project PROMETEO/2020/079 (Generalitat Valenciana). This article is based upon work from COST Actions CA18108 and CA21136, supported by COST (European Cooperation in Science and Technology). We run our simulations with our own ray-tracing and accretion disk-illumination code, fully workable for any spherically symmetric geometry.
2308.07354
Addressing the $r_{d}$ Tension using Late-Time Observational Measurements in a Novel deceleration Parametrization
This paper introduces a novel cosmological model aimed at probing the accelerated expansion of the late Universe through a unique parametrization of the deceleration parameter. We aim to constrain key cosmic parameters by integrating recent measurements of the Hubble parameter obtained from various observational methods, including cosmic chronometers, Type Ia Supernovae, Gamma-Ray Bursts (GRB), Quasars, and Baryon Acoustic Oscillations (BAO) from recent galaxy surveys. With a redshift range spanning $0.106 < z < 2.33$ and incorporating the latest Hubble constant measurement from Riess in 2022, our analysis yields optimal fit values for the Hubble parameter $H_{0}$ and sound horizon $r_{d}$. Notably, we uncover an inconsistency in $H_{0}$ values derived from late-time observational measurements, reflecting the well-known $H_{0}$ tension. In terms of $r_{d}$, while there is close agreement between Joint analysis and Joint analysis with R22, discrepancies arise upon gradual inclusion of BAO and BAO with R22 datasets. Our model demonstrates excellent fit to observed data and aligns well with the standard $\Lambda$CDM paradigm at higher redshifts. However, its most intriguing aspect lies in predicting a super-accelerated expansion in the distant future, in contrast to the de Sitter phase predicted by $\Lambda$CDM. Additionally, unique behaviors in the jerk parameter hint at novel dynamics beyond traditional cosmological models. Statefinder and $O_{m}$ Diagnostics tests were conducted, and comparison using the Akaike information criterion indicates neither model can be ruled out based on the latest observational measurements. These findings propose our cosmological model as a compelling alternative to $\Lambda$CDM, offering fresh insights into dark energy's nature and the cosmos' future.
Himanshu Chaudhary, Ujjal Debnath, G. Mustafa, S. K. Maurya, Farruh Atamurotov
2023-08-14T11:08:04Z
http://arxiv.org/abs/2308.07354v3
A New Cosmological Model: Exploring the Evolution of the Universe and Unveiling Super-Accelerated Expansion

###### Abstract

In this paper, we present a cosmological model designed to study the evolution of the universe based on a new parametrization of the deceleration parameter. The model considers a spatially flat, homogeneous, and isotropic Friedmann-Lemaitre-Robertson-Walker (FLRW) universe filled with radiation, dark matter (DM), and dark energy (DE). We derive the Friedmann equations and the energy conservation equation for the universe, accounting for separate conservation equations for radiation, DM, and DE. Our proposed deceleration parameter is given by a formula involving the constants \(H_{0}\), \(\Omega_{r0}\), \(\Omega_{m0}\), \(q_{2}\), \(q_{1}\), \(q_{0}\), \(\alpha\) and \(\beta\), which we subsequently fit to observational data. To assess the model's viability, we compare it with a diverse range of observational data, including cosmic chronometers, type Ia supernovae, baryon acoustic oscillations, and cosmic microwave background measurements. Employing the chi-square statistic and a Markov Chain Monte Carlo (MCMC) method, we estimate the best-fit values for the free parameters and investigate the constraints imposed by observational data on the model. Our results indicate that our cosmological model provides an excellent fit to the observed data and exhibits a remarkable agreement with the standard \(\Lambda\)CDM paradigm at higher redshifts. However, the most intriguing discovery lies in the model's prediction of a super-accelerated expansion in the distant future, in contrast to the de Sitter phase predicted by \(\Lambda\)CDM. This implies the presence of dark energy driving the universe's accelerated expansion. Additionally, the model displays unique behaviors in the jerk and snap parameters, indicating novel dynamics beyond the traditional cosmological model. These findings suggest that our proposed cosmological model offers a compelling alternative to the \(\Lambda\)CDM paradigm, shedding new light on the nature of dark energy and the future fate of the cosmos.

###### Contents

* I Introduction
* II Fundamental Equations of the FLRW Model
* III Unveiling the Parameterized Deceleration: A Cosmic Odyssey
* IV Data Analysis
* IV.1 Methodology: Markov Chain Monte Carlo Analysis
* IV.2 Data Description
* IV.2.1 Cosmic Chronometers (CC)
* IV.2.2 Type Ia Supernovae (SNIa)
* IV.2.3 Baryon Acoustic Oscillations (BAO)
* IV.2.4 Cosmic Microwave Background
* V Observational and theoretical comparisons with CC and SNIa Measurements
* V.1 Contrasting with CC Measurements
* V.2 Contrasting with type Ia supernovae (SNIa)
* V.3 Relative differences between Model and \(\Lambda\)CDM
* VI Cosmographic Parameters
* VI.1 The deceleration parameter
* VI.2 The jerk parameter
* VI.3 The snap parameter
* VII Statefinder Diagnostic
* VIII Om diagnostic
* IX Information Criteria
* X Results
* XI Discussions and Conclusions

## I Introduction

The acceleration of cosmic expansion is seen as one of the most important findings in modern cosmology [1]. The cosmos appears to be expanding under a repulsive influence rather than slowing down under ordinary gravity. The unidentified factor responsible for the current cosmic acceleration is known as dark energy (DE). In physical cosmology and astronomy, DE is a mysterious form of energy that is thought to pervade all of space and to accelerate the expansion of the cosmos [2]. However, the nature of DE is still uncertain, necessitating additional investigation.
In the classic cosmological cold dark matter (CDM) model, the total mass-energy of the universe consists of 4.9% ordinary matter, 26.8% DM, and 68.3% dark energy. In astronomy, dark matter (DM) is an undiscovered kind of substance that reveals itself only through its gravitational interaction and neither emits nor absorbs light [3]. The nature of DM is still unclear, although its presence has been demonstrated by astronomical measurements [4; 5]. Various approaches have been suggested to explain DE, including quintessence [6], quintom [7], Chaplygin gas with its modified model [8], K-essence [9], new agegraphic DE [10], the holographic DE model [11], the pilgrim DE model [12], and Tsallis holographic DE (THDE) [13]. The simplest of them is the cosmological constant hypothesis, which is consistent with data [14]. Despite its success, it suffers from some problems such as the coincidence problem, the fine-tuning problem, and the age problem [15; 16; 17].

The equation of state (EoS) parameter \(\omega\) defines the relationship between energy density and pressure in the cosmic framework. It is a dimensionless parameter that describes the phases of the cosmos [18]. A parametrized form of \(\omega\) is assumed to capture the evolving behavior of DE; a Taylor series expansion in the redshift, the scale factor, or any other parametrization of \(\omega\) may be used [19]. The main idea in this approach is to consider a specific evolution scenario instead of assuming any DE model a priori, and then to determine the nature of the exotic component that is triggering cosmic acceleration. It is known as the model-independent approach, which depends on estimating model parameters from existing observational datasets [20; 21; 22; 23; 24; 25].

A number of studies have tried to reconstruct the history of the universe without relying on any particular cosmological model. Such techniques are frequently referred to as cosmography or cosmokinetic models, but we shall simply refer to them as kinematic models [26; 27]. This name stems from the fact that the entire study of the Universe's expansion (or its kinematics) is characterised solely by the Hubble expansion rate \(H=\frac{\dot{a}}{a}\), the deceleration parameter \(q=-\frac{\ddot{a}a}{\dot{a}^{2}}\), and the jerk parameter \(j=\frac{\dddot{a}a^{2}}{\dot{a}^{3}}\), where \(a\) is the scale factor in the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric. The deceleration parameter allows us to investigate the transition from a decelerated to an accelerated phase, whereas the jerk parameter allows us to investigate deviations from the cosmic concordance model without being limited to a certain model. In terms of the deceleration parameter, various studies have attempted to determine the redshift at which the universe transitions to an accelerated phase [28; 29; 30]. In modern cosmology, a model-independent determination of the current deceleration parameter \(q_{0}\) and of the deceleration-acceleration transition redshift are two of the most important goals. Investigating the deceleration parameter in a cosmological-model-independent context requires adopting some parametrization for it. This approach has both benefits and drawbacks. One benefit is that it does not depend on the matter-energy content of the universe. One drawback is that it cannot clarify the physical cause of the accelerating expansion. Furthermore, the inferred value of the current deceleration parameter may be affected by the assumed form of \(q(z)\). Recently, Mamon et al.
[31] studied a special form of the deceleration parameter and obtained the best-fit values using the \(\chi^{2}\) minimization technique with available observational data. They also analyzed the evolution of the jerk parameter for the considered parametrized model. Gadbali, Mandal, and Sahoo [32] have explored a specific parametrization of the deceleration parameter in the context of \(f(Q)\) gravity theory and constrained the model parameters by using Bayesian analysis with observational data. More recently, several theoretical models have been developed to analyze the entire evolutionary history of the Universe through parametrization of \(q(z)\) as a function of the scale factor (\(a(t)\)), time (\(t\)), or redshift in [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72].

Motivated by the above developments, in this paper we present a parametrization of the deceleration parameter with five unknown parameters. This work mainly focuses on constraining the model parameters using various observational datasets. In order to find the constraints on the cosmic parameters, we make use of 31 points of the cosmic chronometer dataset, 1071 points of the type Ia supernova dataset, 17 points of the BAO dataset, 162 points of the GRB dataset, and 24 observations of compact radio quasars obtained from a VLBI all-sky survey of 613 milliarcsecond ultra-compact radio sources at GHz frequencies. We have also made use of the CMB dataset. We adopt an MCMC analysis to fit the model parameters to the data and perform the data analysis by the \(\chi^{2}\) minimization technique. Finally, we perform a graphical analysis of cosmographic parameters such as the deceleration, jerk, and snap. In addition, we discuss and analyse the statefinder and Om diagnostics for our parametrization.

The paper is structured as follows: Section II introduces the basic equations of the FLRW universe. In Section III, we discuss the Taylor series expansion of the deceleration parameter in terms of cosmic redshift and introduce a new five-parameter parametrization of the DP; thereafter, we find the Hubble function corresponding to the parametrization. In Section IV, the data description and the analysis methodologies using MCMC techniques are presented to constrain the model parameters and find the best-fit values. Section V compares the model's predictions with observational data. Section VI gives a detailed description of the kinematic cosmographic parameters, namely the deceleration, jerk, and snap parameters. Sections VII and VIII discuss the statefinder and Om diagnostics and present the evolution history of dark energy in the \(s-r\) and \(q-r\) planes. Section IX discusses the information criteria. Sections X and XI present the results and conclusions, respectively.

## II Fundamental equations of the FLRW model

In this section, we delve into the foundational equations of the spatially flat, homogeneous, and isotropic FLRW (Friedmann-Lemaitre-Robertson-Walker) universe. This model serves as a cornerstone in our exploration of the cosmos. We embark on our journey by embracing the line element that characterizes the very fabric of space and time:

\[ds^{2}=-dt^{2}+a^{2}(t)\left[dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\right], \tag{1}\]

Here, \(a(t)\) unfurls itself as the scale factor, a pivotal entity shaping the evolution of our cosmic theater.
Within this cosmic arena, the energy-momentum tensor of the fluid orchestrates the intricate dance of matter and energy. It assumes the form:

\[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{2}\]

At its core, \(\rho\) and \(p\) personify the energy density and pressure density of this cosmic fluid. The 4-velocity \(u^{\mu}=\frac{dx^{\mu}}{ds}\) weaves the narrative of motion through the cosmos, steadfastly adhering to the profound relationship \(u^{\mu}u_{\mu}=-1\). The FLRW Universe beckons us to uncover its secrets encoded in the Friedmann equations, sacred inscriptions within Einstein's gravitational framework.

Friedmann Equation 1:

\[H^{2}=\frac{8\pi G}{3}\ \rho, \tag{3}\]

Friedmann Equation 2:

\[\dot{H}=-4\pi G(\rho+p), \tag{4}\]

Here, \(H=\frac{\dot{a}}{a}\) emerges as the herald of cosmic expansion, and the temporal derivative denoted by the overhead dot gracefully embodies the cosmic passage of time, \(t\). In the realm of these equations, we discern the harmonious interplay between energy densities, pressures, and the cosmic fabric itself, unveiling the essence of our FLRW cosmos. Delving into the intricate tapestry of the cosmos, we find its very essence woven with fluid matter, manifesting as a symphony of energy density \(\rho\) and pressure \(p\). This cosmic ensemble adheres to a sacred edict, the energy conservation equation, resonating as:

\[\dot{\rho}+3H(\rho+p)=0, \tag{5}\]

Here, \(H\) is the celestial conductor, the Hubble parameter orchestrating the cosmic rhythm. Our odyssey into the cosmic composition unveils a triad of cosmic constituents: radiation, dark matter (DM), and enigmatic dark energy (DE). Their presence shapes the grand narrative of our universe, with densities and pressures woven into the cosmic fabric: \(\rho=\rho_{r}+\rho_{m}+\rho_{d}\) and \(p=p_{r}+p_{m}+p_{d}\). Peering into their individual realms, we invoke distinct conservation equations:

Radiation:

\[\dot{\rho}_{r}+3H(\rho_{r}+p_{r})=0, \tag{6}\]

Dark Matter (DM):

\[\dot{\rho}_{m}+3H(\rho_{m}+p_{m})=0, \tag{7}\]

Dark Energy (DE):

\[\dot{\rho}_{d}+3H(\rho_{d}+p_{d})=0, \tag{8}\]

The radiant dance of radiation is characterized by \(p_{r}=\frac{1}{3}\rho_{r}\), yielding the mesmerizing revelation \(\rho_{r}=\rho_{r0}a^{-4}\), where \(a\) gracefully denotes the cosmic scale factor. Meanwhile, the enigmatic dark matter, a realm of negligible pressure (\(p_{m}=0\)), weaves its destiny as \(\rho_{m}=\rho_{m0}a^{-3}\). In this symphony of cosmic evolution, these equations breathe life into the dynamic interplay of cosmic components, echoing through the vast cosmic expansion. In our cosmic odyssey, the deceleration parameter, a compass guiding the cosmic voyage, unfurls before us:

\[q=-1-\frac{\dot{H}}{H^{2}} \tag{9}\]

Venturing further, we unravel the narrative of the enigmatic dark energy. This enigma unfolds as:

\[q_{d}=-1-\frac{\dot{H}_{d}}{H_{d}^{2}} \tag{10}\]

Here, \(H_{d}\) takes on the mantle of the cosmic maestro, conducting the expansion of the dark energy domain.
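As a quick cross-check of the component scalings quoted above, the solutions \(\rho_{r}=\rho_{r0}a^{-4}\) (with \(p_{r}=\rho_{r}/3\)) and \(\rho_{m}=\rho_{m0}a^{-3}\) (with \(p_{m}=0\)) can be verified symbolically against Eqs. (6)-(7). A minimal sympy sketch (the code and variable names are ours, added for illustration):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a = sp.Function('a')(t)                 # scale factor a(t)
H = sp.diff(a, t) / a                   # Hubble parameter H = a'/a
rho_r0, rho_m0 = sp.symbols('rho_r0 rho_m0', positive=True)

# Candidate solutions: rho_r ~ a^-4 with p_r = rho_r/3, rho_m ~ a^-3 with p_m = 0
rho_r = rho_r0 * a**-4
rho_m = rho_m0 * a**-3

# Left-hand sides of the conservation equations (6) and (7)
cons_r = sp.diff(rho_r, t) + 3*H*(rho_r + rho_r/3)
cons_m = sp.diff(rho_m, t) + 3*H*(rho_m + 0)

print(sp.simplify(cons_r), sp.simplify(cons_m))   # both reduce to 0
```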
A cosmic duet of equations, (3) and (4), resonates harmoniously, revealing cosmic truths:

\[H_{d}^{2}=\frac{8\pi G}{3}\ \rho_{d} \tag{11}\]

\[\dot{H}_{d}=-4\pi G(\rho_{d}+p_{d}) \tag{12}\]

Stepping onto the cosmic stage with these equations, the ethereal dance of fluid energy density takes center stage:

\[\rho_{d}=\rho_{d0}\ e^{\int\frac{2(1+q_{d})}{1+z}dz} \tag{13}\]

A symphony of symbols and cosmic energies converge, where \(\rho_{d0}\) embodies the present density parameter, and \(z\) portrays the redshift parameter, narrated as \(1+z=\frac{1}{a}\) (with \(a_{0}=1\) at the present moment). In the cosmic chronicle, celestial constants emerge as cosmic constellations. Casting our gaze upon the cosmic canvas, we define the cosmic tapestry: \(\Omega_{r0}=\frac{8\pi G\rho_{r0}}{3H_{0}^{2}}\), \(\Omega_{m0}=\frac{8\pi G\rho_{m0}}{3H_{0}^{2}}\), \(\Omega_{d0}=\frac{8\pi G\rho_{d0}}{3H_{0}^{2}}\). From this cosmic calculus, the Hubble parameter script is penned:

\[H^{2}(z)=H_{0}^{2}\left[\Omega_{r0}(1+z)^{4}+\Omega_{m0}(1+z)^{3}+\Omega_{d0}\ e^{\int\frac{2(1+q_{d})}{1+z}dz}\right] \tag{14}\]

In this cosmic sonnet, \(\Omega_{d0}\) emerges as the sentinel of cosmic balance, whispering the cosmic tale of unity: \(\Omega_{d0}=1-\Omega_{r0}-\Omega_{m0}\).

## III Unveiling the parameterized deceleration: a cosmic odyssey

In our pursuit of deciphering the cosmos, we turn our gaze toward the elusive parameterized deceleration. Its simplest embodiment takes form as follows:

\[q(z)=q_{0}+q_{1}\mathcal{X}(z) \tag{15}\]

Within this cosmic equation, \(q_{0}\) and \(q_{1}\) emerge as steadfast cosmic companions, while \(\mathcal{X}(z)\), a function of redshift \(z\), guides us through the cosmic terrain. Various cosmic riddles beckon, each demanding a distinct form for \(\mathcal{X}(z)\). Yet, previous attempts often left cosmic enigmas unsolved, hinting at the need for a parametrization that truly captures the essence of cosmic evolution; once such a form is chosen, one can derive the corresponding Hubble parameter in terms of the redshift \(z\).

### A New Deceleration Odyssey

In our cosmic odyssey, we unveil a new paradigm, a symphony of cosmic harmony:

\[q_{d}(z)=q_{0}+\frac{\alpha+(1+z)^{\beta}}{q_{1}+q_{2}(1+z)^{\beta}} \tag{16}\]

Within this cosmic symphony, \(q_{0}\), \(q_{1}\), \(q_{2}\), \(\alpha\) and \(\beta\) take on the role of cosmic constants, conducting the cosmic dance. This newfound paradigm ripples through the cosmic tapestry, influencing the very fabric of cosmic energy density:

\[\rho_{d}=\rho_{d0}\ (1+z)^{3(1+q_{0}+\frac{\alpha}{q_{1}})}[q_{1}+q_{2}(1+z)^{\beta}]^{\frac{3(q_{1}-\alpha q_{2})}{\beta q_{1}q_{2}}} \tag{17}\]

This cosmic crescendo echoes through the cosmic chronicle, reshaping the Hubble parameter:

\[H^{2}(z)=H_{0}^{2}\left[(1+z)^{4}\,\Omega_{r0}+(1+z)^{3}\,\Omega_{m0}+(1-\Omega_{r0}-\Omega_{m0})\ (1+z)^{3(1+q_{0}+\frac{\alpha}{q_{1}})}[q_{1}+q_{2}(1+z)^{\beta}]^{\frac{3(q_{1}-\alpha q_{2})}{\beta q_{1}q_{2}}}\right] \tag{18}\]

In this cosmic symphony, \(\Omega_{r0}\) and \(\Omega_{m0}\) compose the cosmic orchestra, weaving the cosmic melody that resonates across the cosmos. As we embark on this cosmic voyage, our new model of the deceleration parameter charts a course toward understanding the enigmatic cosmic fabric.
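For numerical work, the parametrization of Eq. (16) and the Hubble function of Eq. (18) translate directly into code. A minimal Python sketch (the function and variable names are ours, and the parameter values quoted for illustration are the best fits reported later in Table 1):

```python
import numpy as np

def q_d(z, q0, q1, q2, alpha, beta):
    # Eq. (16): parametrized deceleration of the dark-energy sector
    zb = (1.0 + z) ** beta
    return q0 + (alpha + zb) / (q1 + q2 * zb)

def hubble(z, H0, Or0, Om0, q0, q1, q2, alpha, beta):
    # Eq. (18), as printed, with the dark-energy exponents of Eq. (17)
    z1 = 1.0 + z
    Od0 = 1.0 - Or0 - Om0   # flatness closure: Omega_d0 = 1 - Omega_r0 - Omega_m0
    de = Od0 * z1 ** (3.0 * (1.0 + q0 + alpha / q1)) \
             * (q1 + q2 * z1 ** beta) ** (3.0 * (q1 - alpha * q2) / (beta * q1 * q2))
    return H0 * np.sqrt(Or0 * z1 ** 4 + Om0 * z1 ** 3 + de)

# Illustrative evaluation at a few redshifts with the Table 1 best-fit values
pars = dict(H0=67.58, Or0=0.01045, Om0=0.21204, q0=-2.40901,
            q1=0.61298, q2=0.55564, alpha=0.60881, beta=3.59510)
print(hubble(np.array([0.0, 0.5, 1.0]), **pars))
```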
## IV Data Analysis

In this section, we will conduct a thorough comparison between our proposed model's predictions and observational data. Our goal is to explore the constraints imposed on the model by utilizing several distinct observational datasets: the Cosmic Chronometers (CC) dataset, the type Ia supernovae (SNIa) dataset, Gamma Ray Bursts (GRBs), quasars (Q), Baryon Acoustic Oscillation (BAO), and Cosmic Microwave Background (CMB) measurements. By comparing the model's predictions with these cosmological data, we can determine the best-fit and mean values for the free parameters of our model, denoted as \(\Omega_{r0}\), \(\Omega_{m0}\), \(q_{2}\), \(q_{1}\), \(q_{0}\), \(\alpha\), and \(\beta\). It is crucial to incorporate the present-day value of the Hubble function \(H_{0}\) into our analysis. To constrain the parameters of our cosmological model, we adopt a rigorous and widely used approach based on Bayesian statistics. This technique involves the use of likelihood functions and the application of the MCMC method. The Bayesian framework provides a robust basis for parameter estimation by quantifying the probability of obtaining a certain set of model parameters given the observational data.

### Methodology: Markov Chain Monte Carlo Analysis

In this section, we present our methodology to determine the optimal values of the seven free parameters \(\Omega_{r0}\), \(\Omega_{m0}\), \(q_{2}\), \(q_{1}\), \(q_{0}\), \(\alpha\), and \(\beta\), together with the present-day value of the Hubble function \(H_{0}\), using MCMC. This approach explores parameter space, inferring parameter values from observed data while quantifying uncertainties. Gaussian errors are assumed for each dataset. The likelihood, encapsulating data agreement, is expressed as:

\[\mathcal{L}(\theta)\propto\exp\left(-\frac{1}{2}\sum_{i}\frac{(x_{\mathrm{obs},i}-x_{\mathrm{th},i}(\theta))^{2}}{\sigma_{i}^{2}}\right), \tag{19}\]

Here, \(\theta=(\Omega_{r0},\Omega_{m0},q_{2},q_{1},q_{0},\alpha,\beta,H_{0})\) denotes the model parameters, \(x_{\mathrm{obs},i}\) is the observed data, \(x_{\mathrm{th},i}(\theta)\) is the model prediction, and \(\sigma_{i}\) is the measurement standard deviation. The posterior distribution of the parameters is governed by Bayes' theorem:

\[P(\theta|\mathrm{data})\propto\mathcal{L}(\theta)\times\mathrm{Prior}(\theta). \tag{20}\]

We initiate by defining the parameter space and employing suitable prior distributions, reflecting initial parameter beliefs. The likelihood, gauging model-data agreement, is computed considering data deviations. Bayesian inference combines the priors with the likelihood, yielding the posterior parameter distribution. MCMC algorithms, such as Metropolis-Hastings [73] or Hamiltonian Monte Carlo [74], generate parameter samples from the posterior. These samples collectively traverse parameter space, converging towards higher-likelihood regions. Convergence is ensured by monitoring diagnostics like the Gelman-Rubin statistic [75] and trace plots. A burn-in phase is discarded, stabilizing the chains and mitigating the influence of the starting points. Best-fit parameter values are calculated from the converged MCMC samples, with associated uncertainties given by the sample spreads. The New Model's performance is evaluated against diverse datasets, including Cosmic Chronometers (CC), Type Ia Supernovae (SNIa), Gamma Ray Bursts (GRBs), Quasars (Q), Baryon Acoustic Oscillations (BAO), and the Cosmic Microwave Background (CMB). Model selection criteria, like the Bayesian Information Criterion (BIC) [76] or the Akaike Information Criterion (AIC) [77], aid in choosing the optimal model considering data fit and complexity.
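A compact sketch of this pipeline is given below, assuming the publicly available `emcee` ensemble sampler and reusing the `hubble` function defined above; the short data arrays are placeholders standing in for an \(H(z)\)-type dataset, not the actual compilation used in this work:

```python
import numpy as np
import emcee  # affine-invariant ensemble sampler (assumed installed)

# Placeholder data (z_i, x_obs_i, sigma_i); real analyses use the full compilations
z_obs = np.array([0.1, 0.5, 1.0, 1.5])
x_obs = np.array([69.0, 88.0, 120.0, 160.0])
sigma = np.array([5.0, 6.0, 10.0, 15.0])

def log_likelihood(theta):
    # Gaussian likelihood of Eq. (19), with H(z) of Eq. (18) as the theory x_th
    H0, Or0, Om0, q0, q1, q2, alpha, beta = theta
    x_th = hubble(z_obs, H0, Or0, Om0, q0, q1, q2, alpha, beta)
    return -0.5 * np.sum(((x_obs - x_th) / sigma) ** 2)

def log_prior(theta):
    # Flat priors mirroring the ranges listed in Table 1
    H0, Or0, Om0, q0, q1, q2, alpha, beta = theta
    ok = (50 < H0 < 100 and 0 < Or0 < 1 and 0 < Om0 < 1 and -3 < q0 < -2
          and 0 < q1 < 1 and 0 < q2 < 1 and 0 < alpha < 1 and 3 < beta < 4)
    return 0.0 if ok else -np.inf

def log_posterior(theta):
    # Bayes' theorem, Eq. (20), up to the evidence normalization
    lp = log_prior(theta)
    return lp + log_likelihood(theta) if np.isfinite(lp) else -np.inf

ndim, nwalkers = 8, 32
start = np.array([68.0, 0.01, 0.21, -2.4, 0.61, 0.56, 0.61, 3.6])
p0 = start + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 2000)
chain = sampler.get_chain(discard=500, flat=True)  # burn-in removed
```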
This methodology robustly explores parameter space, extracting the New Model's optimal parameters while accounting for uncertainties and observational data.

### Data Description

#### iv.2.1 Cosmic Chronometers (CC)

The Hubble parameter, \(H(z)\), can be determined by measuring the differential age \(dt\) of passively evolving galaxies separated by a small redshift interval \(dz\) in a spectroscopic survey, via \(H(z)=-\frac{1}{1+z}\frac{dz}{dt}\); this is known as the differential age technique. In this study, we utilize a collection of 31 data points, known as cosmic chronometers, to serve as constraints for our cosmological model. These data points provide valuable information about the expansion rate of the Universe at different redshifts [78; 79]. To assess the agreement between the theoretical predictions, denoted as \(H_{\mathrm{th}}(z_{i})\), and the observed values, denoted as \(H_{\mathrm{obs}}(z_{i})\), we employ the chi-square function:

\[\chi^{2}_{\mathrm{CC}}=\sum_{i=1}^{31}\frac{\left[H_{\mathrm{th}}(z_{i})-H_{\mathrm{obs}}(z_{i})\right]^{2}}{\sigma_{H(z_{i})}^{2}},\]

where \(\sigma_{H(z_{i})}\) represents the standard error associated with the observed value of \(H\). By minimizing the discrepancy between the theoretical predictions and the observed data using the chi-square function, we can determine the mean values of the model parameters. This provides a quantitative estimate of the agreement between our cosmological model and the cosmic chronometer data.

#### iv.2.2 Type Ia Supernovae (SNIa)

Over the years, numerous supernova datasets have been compiled [80; 81; 82; 83; 84]. Recently, a fresh Pantheon sample, known as the Pantheon+ sample, was released [85]. It consists of 1701 SNIa data points covering the redshift range \(0.001<z<2.3\). SNIa observations have played a key role in uncovering the accelerating expansion of the universe and are widely used to study the nature of the component driving this expansion. SNIa are incredibly luminous astrophysical objects and are considered standard candles, which means their intrinsic brightness can be used to measure relative distances. The Pantheon+ dataset provides valuable information for studying the nature of the accelerating universe. The chi-square statistic is commonly used to compare observational data with theoretical models. In the case of the Pantheon+ dataset, the chi-square values are calculated using the following equation:

\[\chi^{2}_{\text{Pantheon+}}=\vec{D}^{T}\cdot\mathbf{C}^{-1}_{\text{Pantheon+}}\cdot\vec{D} \tag{21}\]

Here, \(\vec{D}\) represents the difference between the observed apparent magnitudes \(m_{Bi}\) of SNIa and the expected magnitudes given by the cosmological model, \(D_{i}=m_{Bi}-M-\mu_{\text{model}}(z_{i})\). \(M\) represents the absolute magnitude of SNIa, and \(\mu_{\text{model}}\) is the corresponding distance modulus predicted by the assumed cosmological model. The term \(\mathbf{C}_{\text{Pantheon+}}\) denotes the covariance matrix provided with the Pantheon+ data, which includes both statistical and systematic uncertainties. The distance modulus is a measure of the distance to an object, defined as:

\[\mu_{\text{model}}(z_{i})=5\log_{10}\left(\frac{D_{L}(z_{i})}{(H_{0}/c)\text{Mpc}}\right)+25 \tag{22}\]

Here, \(D_{L}(z)\) represents the (dimensionless) luminosity distance, which is calculated for a flat, homogeneous, and isotropic FLRW universe as:

\[D_{L}(z)=(1+z)H_{0}\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})} \tag{23}\]

The Pantheon+ dataset differs from the previous Pantheon sample as it breaks the degeneracy between the absolute magnitude \(M\) and the Hubble constant \(H_{0}\).
This is achieved by rewriting the vector \(\vec{D}\) in terms of the distance moduli of SNIa in the Cepheid hosts. The distance moduli in the Cepheid hosts, denoted as \(\mu_{i}^{\text{Ceph}}\), are measured independently using Cepheid calibrators. This allows for the independent constraint of the absolute magnitude \(M\). The modified vector \(\vec{D^{\prime}}\) is defined as:

\[\vec{D^{\prime}}_{i}=\begin{cases}m_{Bi}-M-\mu_{i}^{\text{Ceph}}&\text{if $i$ is in Cepheid hosts}\\ m_{Bi}-M-\mu_{\text{model}}(z_{i})&\text{otherwise}\end{cases} \tag{24}\]

With this modification, the chi-square equation for the Pantheon+ dataset can be rewritten as:

\[\chi^{2}_{\text{SN}}=\vec{D^{\prime}}^{T}\cdot\mathbf{C}^{-1}_{\text{Pantheon+}}\cdot\vec{D^{\prime}} \tag{25}\]

This revised formulation allows for improved constraints on the absolute magnitude \(M\) and the cosmological parameters. We have also expanded our investigation to include a subset of 162 Gamma Ray Bursts (GRBs) [86], spanning a redshift range of \(1.44<z<8.1\). In this context, we define the \(\chi^{2}\) function as:

\[\chi^{2}_{\text{GRB}}(\phi^{\nu}_{\text{g}})=\mu_{\text{g}}\mathbf{C}^{-1}_{\text{g,cov}}\mu_{\text{g}}^{T}, \tag{26}\]

Here, \(\mu_{\text{g}}\) denotes the vector encapsulating the differences between the observed and theoretical distance moduli for each individual GRB. Similarly, for our examination of 24 compact radio quasar observations [87], spanning redshifts in the range of \(0.46\leq z\leq 2.76\), we establish the \(\chi^{2}\) function as:

\[\chi^{2}_{\text{Q}}(\phi^{\nu}_{\text{q}})=\mu_{\text{q}}\mathbf{C}^{-1}_{\text{q,cov}}\mu_{\text{q}}^{T}, \tag{27}\]

In this context, \(\mu_{\text{q}}\) represents the vector capturing the disparities between the observed and theoretical distance moduli for each quasar.

#### iv.2.3 Baryon Acoustic Oscillations (BAO)

To investigate the intricate phenomena of Baryon Acoustic Oscillations (BAO) in our study, we engage a comprehensive dataset encompassing 333 distinct measurements [88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99]. However, in light of potential uncertainties attributed to data correlations, we judiciously curate a more focused dataset comprising 17 meticulously selected BAO measurements for our analytical endeavors. This deliberate curation aims to mitigate errors and enhance the precision of our findings. A pivotal metric derived from BAO investigations along the transverse direction is the parameter \(D_{M}(z)/r_{d}\), where \(D_{M}(z)\) signifies the comoving angular diameter distance. This quantity is intimately linked to the following expressive relation [100; 101]:

\[D_{M}=\frac{c}{H_{0}}S_{k}\left(\int_{0}^{z}\frac{dz^{\prime}}{E\left(z^{\prime}\right)}\right), \tag{28}\]

where \(S_{k}(x)\) is defined as:

\[S_{k}(x)=\begin{cases}\frac{1}{\sqrt{\Omega_{k}}}\sinh\left(\sqrt{\Omega_{k}}x\right)&\text{if }\quad\Omega_{k}>0\\ x&\text{if }\quad\Omega_{k}=0\\ \frac{1}{\sqrt{-\Omega_{k}}}\sin\left(\sqrt{-\Omega_{k}}x\right)&\text{if }\quad\Omega_{k}<0.\end{cases} \tag{29}\]

Furthermore, our exploration encompasses the incorporation of the angular diameter distance \(D_{A}\), given by \(D_{A}=D_{M}/(1+z)\), and the parameter \(D_{V}(z)/r_{d}\). The latter amalgamates the BAO peak coordinates with \(r_{d}\), which characterizes the sound horizon during the drag epoch.
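The distance measures of Eqs. (28)-(29) can be coded in a few lines. A sketch (the helper names are ours; it reuses the `hubble` function introduced in Sec. III):

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light in km/s

def S_k(x, Omega_k):
    # Eq. (29): curvature kernel (the analysis itself assumes flatness, Omega_k = 0)
    if Omega_k > 0:
        return np.sinh(np.sqrt(Omega_k) * x) / np.sqrt(Omega_k)
    if Omega_k < 0:
        return np.sin(np.sqrt(-Omega_k) * x) / np.sqrt(-Omega_k)
    return x

def D_M(z, H0, Omega_k=0.0, **pars):
    # Eq. (28): transverse comoving distance in Mpc, with E(z) = H(z)/H0
    x, _ = quad(lambda zp: H0 / hubble(zp, H0, **pars), 0.0, z)
    return (C_KMS / H0) * S_k(x, Omega_k)

# The angular diameter distance then follows as D_A = D_M / (1 + z)
```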
Moreover, we gain valuable insights from "line-of-sight" or "radial" observations derived directly from the Hubble parameter, as expressed by:

\[D_{V}(z)\equiv\left[zD_{H}(z)D_{M}^{2}(z)\right]^{1/3}, \tag{30}\]

where \(D_{H}(z)=c/H(z)\) is the Hubble distance. By studying these BAO measurements, we gain insights into the cosmological properties and evolution of the universe, while minimizing potential errors and considering relevant distance measures and observational parameters. The total \(\chi^{2}\) function is given by the sum of the individual contributions:

\[\chi^{2}_{\rm tot}=\chi^{2}_{CC}+\chi^{2}_{\rm SNIa}+\chi^{2}_{\rm GRB}+\chi^{2}_{\rm Q}+\chi^{2}_{BAO}+\chi^{2}_{CMB} \tag{33}\]

#### iv.2.4 Cosmic Microwave Background

In this study, distance measurements of the Cosmic Microwave Background (CMB) were analyzed [102]. These measurements provide valuable information about the CMB power spectrum in two aspects: the acoustic scale \(l_{A}\) characterizes the variation in peak spacing of the CMB temperature power spectrum in the transverse direction, and the "shift parameter" \(R\) influences the peak heights along the line-of-sight direction. The acoustic scale is defined as:

\[l_{A}=(1+z_{d})\frac{\pi D_{A}(z)}{r_{s}}, \tag{31}\]

where \(z_{d}\) is the redshift at the drag epoch, \(D_{A}(z)\) is the comoving angular diameter distance, and \(r_{s}\) is an independent parameter. The shift parameter is given by:

\[R(z)=\frac{\sqrt{\Omega_{m}}H_{0}}{c}(1+z_{d})D_{A}(z), \tag{32}\]

Here, the matter density parameter is denoted by \(\Omega_{m}\), \(H_{0}\) is the Hubble constant, and \(c\) is the speed of light. The study [102] reports the following observables: \(R_{z}=1.7502\pm 0.0046\), \(l_{A}=301.471\pm 0.09\), and \(n_{s}=0.9649\pm 0.0043\). The parameter \(r_{s}\) has an associated covariance matrix, which is detailed in Table 1 of [102]. These observables represent inflationary features and the expansion rate of the CMB epoch. In addition to the CMB data, other data from the late Universe were also considered in the analysis. The combined results, including Cosmic Chronometers (CC), Type Ia supernovae (SNIa), Baryon Acoustic Oscillations (BAO), and the CMB, were used to test the model. The results of the analysis showed good agreement between the model and the data. The contour plots illustrating the combined results of CC, SNIa, BAO, and CMB are presented in Figure 1. Furthermore, the best-fit values along with their corresponding error bars are tabulated in Table 1.
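A sketch of the two CMB observables of Eqs. (31)-(32), built on the `D_M` helper above (the function names are ours, and the drag-epoch redshift \(z_{d}\) and sound horizon \(r_{s}\) must be supplied externally):

```python
import numpy as np

def shift_parameter(z_d, H0, Om0, **pars):
    # Eq. (32): R = sqrt(Omega_m) * (H0/c) * (1 + z_d) * D_A(z_d)
    D_A = D_M(z_d, H0, Om0=Om0, **pars) / (1.0 + z_d)
    return np.sqrt(Om0) * (H0 / C_KMS) * (1.0 + z_d) * D_A

def acoustic_scale(z_d, r_s, H0, **pars):
    # Eq. (31): l_A = (1 + z_d) * pi * D_A(z_d) / r_s
    D_A = D_M(z_d, H0, **pars) / (1.0 + z_d)
    return (1.0 + z_d) * np.pi * D_A / r_s
```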
## V Observational and theoretical comparisons with CC and SNIa measurements

After determining the values of the free parameters in our cosmological model, it is crucial to compare the model's predictions with observational data and the well-established \(\Lambda\)CDM paradigm. This step allows us to assess the model's viability and its ability to explain observed phenomena in the universe. By contrasting the model predictions with observational data, we can evaluate the agreement and identify any discrepancies or deviations from the standard \(\Lambda\)CDM framework. This analysis plays a crucial role in validating or refining the proposed cosmological model and advancing our understanding of the underlying physical processes driving the evolution of the universe.

### Contrasting with CC Measurements

We assess the compatibility of our model with observational data by comparing it to the 31 measurements of the Cosmic Chronometers dataset, represented by orange dots with error bars, with our model shown as the purple line in Figure 2. For reference, we also include the well-established \(\Lambda\)CDM model as a black line, with \(\Omega_{\rm m0}=0.3\) and \(\Omega_{\Lambda}=0.7\). The comparative analysis provides essential insights into how well our model performs in describing the observed Hubble data. The figure shows a close alignment between our model's predictions and the orange data points, indicating that our model successfully captures the features and trends present in the dataset. This alignment demonstrates that our model can reproduce the expansion history of the universe as inferred from the Hubble data. The agreement between our model and the observed cosmic evolution supports its validity and credibility.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{MCMC Results} \\ \hline Model & Priors & Parameters & Best-fit Value \\ \hline \(\Lambda\)CDM Model & \([50,100]\) & \(H_{0}\) & \(69.854848^{+1.299100}_{-1.259100}\) \\ & \([0,1]\) & \(\Omega_{\rm m0}\) & \(0.268654^{+0.21822}_{-0.012822}\) \\ \hline Model 1 & \([50,100]\) & \(H_{0}\) & \(67.58035^{+1.143426}_{-1.143426}\) \\ & \([0,1]\) & \(\Omega_{m0}\) & \(0.21204^{+0.01725}_{-0.0050934}\) \\ & \([0,1]\) & \(\Omega_{\rm r0}\) & \(0.01045^{+0.004427}_{-0.004427}\) \\ & \([0,1]\) & \(q_{2}\) & \(0.55564^{+0.00985}_{-0.00985}\) \\ & \([0,1]\) & \(q_{1}\) & \(0.61298^{+0.057102}_{-0.031386}\) \\ & \([-3,-2]\) & \(q_{0}\) & \(-2.40901^{+0.07460}_{-0.040061}\) \\ & \([0,1]\) & \(\alpha\) & \(0.608808^{+0.221284}_{-0.221284}\) \\ & \([3,4]\) & \(\beta\) & \(3.595096^{+0.35455}_{-0.320216}\) \\ \hline \end{tabular} \end{table} Table 1: Summary of the MCMC results using the CC + SNIa + Q + GRB + BAO + CMB dataset.

Figure 1: \(1\sigma\) and \(2\sigma\) confidence contours obtained from the \(H(z)\) + SNIa + Q + GRB + BAO + CMB dataset for the model.

### Contrasting with Type Ia Supernovae (SNIa)

We compared the distance modulus function, denoted as \(\mu(z)\), of our model with 1048 observational points from type Ia supernovae (SNIa) observations, shown as orange dots. The accompanying error bars represent measurement uncertainties. For reference, we included the \(\Lambda\)CDM model as a black line. The distance modulus function \(\mu(z)\) is a crucial quantity in cosmology, describing the apparent brightness of distant objects, such as supernovae, and enabling us to infer their distances. By comparing our model's predictions with the observed SNIa data, we can assess the model's ability to reproduce the observed behavior of the universe. Figure 3 visually demonstrates the close agreement between our model and the SNIa measurements. The fact that our model closely matches the dataset suggests that it provides an excellent fit for the observed supernova data. This agreement supports the viability of our model in describing the cosmological expansion and provides compelling evidence for its ability to explain the behavior of the universe as captured by the SNIa observations.

### Relative differences between the Model and \(\Lambda\)CDM

To illustrate the relative distinctions between our model and the \(\Lambda\)CDM model, we present Figure 4. The plot illustrates the behaviors of both models for redshifts \(z<1\), where they exhibit comparable dynamics. However, as the redshift surpasses unity, slight discrepancies emerge between the two models. These differences are particularly noticeable in the higher redshift range.
The deviations between our model and the \(\Lambda\)CDM model gradually diminish below a critical redshift of approximately \(z\approx 0.5\). This convergence at lower redshifts indicates that both models tend to align in their predictions as we approach more recent cosmic epochs. The close resemblance between our model and the \(\Lambda\)CDM model for \(z<1\) suggests that both models adequately capture the observed cosmic dynamics within this range. However, the observed discrepancies for higher redshifts indicate the presence of unique features or alternative mechanisms in our model, potentially influencing the universe's evolution at those epochs.

## VI Cosmographic parameters

Cosmographic parameters are mathematical quantities used to describe the behavior of the universe and its expansion. These parameters, such as the deceleration parameter, jerk parameter, and snap parameter, are derived from the Taylor series expansion of the scale factor and its derivatives. They give important insights into the dynamics and history of the cosmos, allowing us to investigate the acceleration or deceleration of its expansion as well as higher-order properties [19]. Researchers can acquire a better grasp of the underlying physics and test alternative hypotheses of dark energy by analysing and comparing these cosmographic features [103].

Figure 4: Relative difference between our model and the \(\Lambda\)CDM model.

Figure 3: Comparative analysis of our model (purple line) with the 1701 SNIa measurements (orange dots) and the \(\Lambda\)CDM model (black line).

Figure 2: Comparative analysis of our model (purple line) with the 31 Cosmic Chronometer data points (orange dots) and the \(\Lambda\)CDM model (black line).

### The deceleration parameter

In cosmology, the deceleration parameter is a dimensionless quantity used to characterize the pace of expansion of the universe. It informs us about the accelerated or decelerated stages of cosmic expansion. It is defined mathematically as [104]:

\[q=-\frac{a\ddot{a}}{\dot{a}^{2}}. \tag{34}\]

The DP influences the behavior of the Hubble parameter, which characterizes the rate at which the universe is expanding [105]. Depending on the sign of the deceleration parameter, the Hubble parameter can either increase or decrease. Negative values of \(q\) indicate an accelerated expansion of the universe, while positive values correspond to a decelerated expansion. Understanding and estimating the deceleration parameter is crucial, and observational analysis allows us to explore the range of possible present-day values denoted by \(q_{0}\). One way to probe the deceleration parameter is by studying the apparent brightness and redshift of supernovae in distant galaxies [106]. By examining these observational data, researchers can gain insights into the behavior of the deceleration parameter and its impact on the universe's expansion. Although estimating the deceleration parameter may seem challenging, recent findings strongly support models that suggest an accelerating universe [107]. These new outcomes have greatly favored the notion of an expanding universe with an accelerated rate of expansion. In cosmology, there is often a desire to determine a precise value for \(q_{0}\), representing the deceleration parameter at the present epoch. However, achieving such precision requires rigorous observational analysis and careful consideration of various astrophysical phenomena [108].
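Since the total \(q(z)\) follows from Eq. (9) once \(H(z)\) is fixed, the deceleration-acceleration transition can be located numerically. A sketch using a central-difference derivative of \(\ln H\) (the grid and step size are arbitrary choices of ours; `pars` holds the Table 1 best fits defined earlier):

```python
import numpy as np

def q_total(z, eps=1e-4, **pars):
    # Eq. (9): q = -1 + (1 + z) dlnH/dz, with the derivative taken numerically
    dlnH = (np.log(hubble(z + eps, **pars))
            - np.log(hubble(z - eps, **pars))) / (2.0 * eps)
    return -1.0 + (1.0 + z) * dlnH

zs = np.linspace(0.01, 3.0, 300)
qs = q_total(zs, **pars)
crossing = np.where(np.diff(np.sign(qs)))[0]  # indices where q changes sign
print(zs[crossing])                           # approximate transition redshift(s)
```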
Overall, the deceleration parameter provides valuable insights into the dynamics of cosmic expansion. By examining its behavior and exploring its implications through observational studies, scientists can deepen their understanding of the universe's evolution.

### The jerk parameter

The dimensionless jerk parameter provides a generalization of the conventional cosmological parameters, such as the scale factor \(a(t)\) and the deceleration parameter \(q\). It arises from the fourth term in a Taylor series expansion of the scale factor around a reference time \(t_{0}\). The expansion is given by:

\[\frac{a(t)}{a_{0}}=1+\left(t-t_{0}\right)H_{0}-\frac{(t-t_{0})^{2}}{2}\,H_{0}^{2}\,q_{0}+\frac{(t-t_{0})^{3}}{6}\,H_{0}^{3}\,j_{0}+O\left[(t-t_{0})^{4}\right], \tag{35}\]

where the subscript \(0\) denotes the present values of the parameters. This expansion captures the behavior of the scale factor around \(t_{0}\) up to the third derivative. Mathematically, the jerk parameter \(j\) is defined as the third derivative of the scale factor with respect to cosmic time, normalized by the cube of the Hubble rate:

\[j=\frac{1}{a}\frac{d^{3}a}{d\tau^{3}}\left(\frac{1}{a}\frac{da}{d\tau}\right)^{-3}=q(2q+1)+(1+z)\frac{dq}{dz}. \tag{36}\]

The jerk parameter plays a significant role in distinguishing various proposals for dark energy (DE) and their implications for the dynamics of the universe. It helps identify favorable candidates for the physical interpretation of cosmic dynamics amidst the wide range of DE possibilities. By examining specific values of the jerk parameter, one can establish correspondences between DE proposals and standard models of the universe. Models involving a cosmic jerk provide a more pronounced demonstration of transitions between different eras of accelerated expansion. Notably, the flat \(\Lambda\)CDM model corresponds to a jerk parameter value of \(j=1\).

Figure 5: Comparison of the deceleration parameter for our model and the \(\Lambda\)CDM model.

Figure 6: Comparison of the jerk parameter for our model and the \(\Lambda\)CDM model.

### The snap parameter

The snap parameter, also known as jounce, is a dimensionless quantity that provides further insights into the behavior of the expansion factor. It is derived from the fifth term in the Taylor series expansion of the scale factor around the reference value \(a_{0}\). The expansion is given by:

\[\frac{a(t)}{a_{0}}=1+H_{0}(t-t_{0})-\frac{1}{2}q_{0}H_{0}^{2}(t-t_{0})^{2}+\frac{1}{6}j_{0}H_{0}^{3}(t-t_{0})^{3}+\frac{1}{24}s_{0}H_{0}^{4}(t-t_{0})^{4}+O\left[(t-t_{0})^{5}\right]. \tag{37}\]

Mathematically, the snap parameter \(s\) is defined as the fourth derivative of the scale factor with respect to cosmic time, normalized by the fourth power of the Hubble rate:

\[s=\frac{1}{a}\frac{d^{4}a}{d\tau^{4}}\left(\frac{1}{a}\frac{da}{d\tau}\right)^{-4}=\frac{j-1}{3\left(q-\frac{1}{2}\right)}. \tag{38}\]

For the flat \(\Lambda\)CDM model, where \(j=1\), the snap parameter can be expressed as \(s=-(2+3q)\). This relationship provides a clear connection between the snap parameter and the dynamics of the \(\Lambda\)CDM model. Furthermore, the divergence of \(\frac{ds}{dq}\) from \(-3\) indicates deviations in the evolutionary mechanism from the dynamics of the \(\Lambda\)CDM model. By examining the behavior of this derivative, we can gain insights into how the system deviates from the standard \(\Lambda\)CDM dynamics.
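Given \(q(z)\), Eqs. (36) and (38) yield the jerk and snap with one more numerical derivative. A short sketch, reusing the `q_total` helper defined above:

```python
def jerk(z, eps=1e-4, **pars):
    # Eq. (36): j = q(2q + 1) + (1 + z) dq/dz
    q = q_total(z, **pars)
    dqdz = (q_total(z + eps, **pars) - q_total(z - eps, **pars)) / (2.0 * eps)
    return q * (2.0 * q + 1.0) + (1.0 + z) * dqdz

def snap(z, **pars):
    # Eq. (38): s = (j - 1) / (3 (q - 1/2))
    return (jerk(z, **pars) - 1.0) / (3.0 * (q_total(z, **pars) - 0.5))
```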
## VII Statefinder diagnostic

The statefinder diagnostic pair \(\{r,s\}\) is a widely used tool in the study of dark energy (DE) models, offering insights into their nature based on higher-order derivatives of the scale factor [109]. These dimensionless parameters provide a model-independent way to analyze the cosmic properties of DE. The statefinder pair is computed using the following expressions:

\[r=\frac{\dddot{a}}{aH^{3}},\quad s=\frac{r-1}{3\left(q-\frac{1}{2}\right)}, \tag{39}\]

Here, \(r\) represents the ratio of the third derivative of the scale factor to the cube of the Hubble parameter, and \(s\) is a combination of \(r\) and the deceleration parameter \(q\). Specific values of \(r\) and \(s\) have well-established interpretations within standard DE models. For example, \(\{r,s\}=\{1,0\}\) corresponds to the \(\Lambda\)CDM model, while \(\{r,s\}=\{1,1\}\) corresponds to the standard cold dark matter (SCDM) model in a Friedmann-Lemaitre-Robertson-Walker (FLRW) universe. The value range of \((-\infty,\infty)\) indicates the possibility of an Einstein static universe. In the \(r-s\) plane, different regions correspond to distinct types of DE models. For instance, \(s>0\) signifies quintessence-like models, whereas \(s<0\) indicates phantom-like models. Deviations from the standard value of \(\{r,s\}=\{1,0\}\) can indicate an evolutionary process from phantom-like to quintessence-like behavior. Moreover, specific combinations of the deceleration parameter \(q\) and the statefinder parameter \(r\) are associated with well-known models. The pair \(\{q,r\}=\{-1,1\}\) is linked to the \(\Lambda\)CDM model, while \(\{q,r\}=\{0.5,1\}\) corresponds to the SCDM model.

Figure 7: Comparison of the snap parameter for our model and the \(\Lambda\)CDM model.

## VIII Om Diagnostic

The \(Om\) diagnostic is a geometrical formalism used to test the validity of the \(\Lambda\)CDM model in cosmology [110]. It provides a useful tool for differentiating between various dark energy (DE) models and the cosmological constant. By analyzing the slope variation of the \(Om\) parameter with redshift, one can distinguish quintessence and phantom models based on positive and negative slopes, respectively. A constant slope corresponds to the cosmological constant. In a flat universe, the \(Om\) diagnostic is defined as the ratio of the squared Hubble parameter at a given redshift to its present value, normalized by the expansion factor:

\[Om(z)=\frac{\left(\frac{H(z)}{H_{0}}\right)^{2}-1}{(1+z)^{3}-1}. \tag{40}\]

Unlike the statefinder diagnostic, the \(Om\) diagnostic only requires the first-order temporal derivative. It has also been applied in the study of Galileon models. Overall, the \(Om\) diagnostic provides a valuable tool for exploring and understanding different DE scenarios in cosmology.
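Both diagnostics reduce to a few lines once \(H(z)\), \(q(z)\), and \(j(z)\) are available; note that the statefinder \(r\) of Eq. (39) coincides with the jerk of Eq. (36). A sketch with our own helper names:

```python
def statefinder(z, **pars):
    # Eq. (39): r equals the jerk j; s = (r - 1) / (3 (q - 1/2))
    r = jerk(z, **pars)
    s = (r - 1.0) / (3.0 * (q_total(z, **pars) - 0.5))
    return r, s

def Om_diag(z, H0, **pars):
    # Eq. (40): Om(z) = (E^2(z) - 1) / ((1 + z)^3 - 1); singular at z = 0
    E2 = (hubble(z, H0, **pars) / H0) ** 2
    return (E2 - 1.0) / ((1.0 + z) ** 3 - 1.0)
```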
## IX Information Criteria

Model selection criteria such as the corrected Akaike information criterion (\(AIC_{c}\)) play a central role in evaluating the viability of models [111; 112; 113; 114]. The \(AIC_{c}\) takes into account the maximum likelihood function, the number of data points, and the number of free parameters of the models. For larger datasets, the \(AIC_{c}\) reduces to the \(AIC\). The AIC is used to assess the goodness-of-fit of different models; the model having the lowest AIC is the one most preferred by the observations. The difference, \(\Delta AIC=AIC_{model}-AIC_{preferred}\), is more meaningful than the \(AIC\) itself. A range of \((0,2)\) for \(\Delta AIC\) refers to models strongly supported, values in the range of \((4,7)\) indicate models that are less supported, while values beyond \(10\) refer to models without any support. Therefore, the \(AIC\) is a very useful tool for classifying a set of models based on their ability to fit data. In addition to the AIC, the Bayesian information criterion (\(BIC\)) [115; 116; 76] also contributes to model selection. Like the AIC, the \(BIC\) considers the trade-off between goodness-of-fit and model complexity. However, the \(BIC\) incorporates a stronger penalty for models with more parameters. This promotes the selection of simpler models, helping to guard against overfitting. In the context of \(\Delta BIC\), similar principles apply as with \(\Delta AIC\): a lower \(\Delta BIC\) indicates stronger support for a particular model, and the magnitude of the difference is informative.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Model & \(\chi^{2}_{min}\) & \(\chi^{2}_{red}\) & \(AIC\) & \(\Delta AIC\) & \(BIC\) & \(\Delta BIC\) \\ \hline \(\Lambda\)CDM & 1742.41 & 0.9876 & 1748.41 & 0 & 1763.94 & 0 \\ \hline Model & 1733.01 & 0.9864 & 1749.01 & 0.69084 & 1790.41 & 26.4774 \\ \hline \end{tabular} \end{table} Table 2: Summary of \(\chi^{2}_{\rm min}\), \(\chi^{2}_{\rm red}\), \(AIC\), \(\Delta AIC\), \(BIC\), \(\Delta BIC\).

Figure 10: Behavior of the \(Om(z)\) profile.

Figure 9: Behavior of the \(\{r,q\}\) plane.
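For a Gaussian likelihood, \(AIC=\chi^{2}_{\rm min}+2k\) and \(BIC=\chi^{2}_{\rm min}+k\ln n\) up to additive constants. The sketch below reproduces the Table 2 entries; the parameter counts (\(k=3\) and \(k=8\)) and the sample size (\(n\approx 1307\)) are our own back-solved inferences, not values quoted in the text:

```python
import numpy as np

def aic_bic(chi2_min, k, n):
    # AIC = chi2_min + 2k ; BIC = chi2_min + k ln(n)
    return chi2_min + 2 * k, chi2_min + k * np.log(n)

n = 1307  # inferred total number of data points (assumption)
for name, chi2, k in [("LCDM", 1742.41, 3), ("New Model", 1733.01, 8)]:
    aic, bic = aic_bic(chi2, k, n)
    print(f"{name:9s}  AIC = {aic:.2f}  BIC = {bic:.2f}")
```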
## X Results

_a. Deceleration parameter:_ Figure 5 illustrates the changes in the deceleration parameter (DP) with respect to the cosmic redshift (\(z\)) in our new cosmological model. Remarkably, our model demonstrates an excellent agreement with the standard \(\Lambda\)CDM model at higher redshifts. However, the real excitement lies in the future phase of the universe's evolution, where our new model showcases a fascinating phenomenon: a super-accelerated expansion (\(q(-1)\approx-0.3\)). This stands in stark contrast to the \(\Lambda\)CDM model, which predicts a de Sitter phase with \(q(-1)=-1\). The discovery of this super-accelerated expansion has significant implications, as it suggests that the universe might experience unprecedented and rapid expansion in the distant future. Indeed, the observed super-accelerated expansion in the new cosmological model could potentially be attributed to dark energy.

_b. Jerk parameter:_ Figure 6 shows the evolution of the jerk parameter with cosmic redshift (\(z\)). Our new model exhibits a remarkable departure from the well-established \(\Lambda\)CDM model. The jerk parameter initially experiences a steady decline during the early stages of the universe. However, this decline is eventually halted as the model intersects with the \(\Lambda\)CDM curve around \(z\sim 4.25\). Subsequently, an intriguing reversal in the evolution of the jerk parameter occurs, reaching its minimum value at \(z\sim 1\). After this critical point, our model aligns closely with the \(\Lambda\)CDM predictions, particularly near the present time, i.e., \(z\sim 0\), where the universe continues to display accelerating expansion.

_c. Snap parameter:_ We have examined the behavior of the snap parameter in comparison to the widely accepted \(\Lambda\)CDM model. Remarkably, our new model exhibits a significant and consistent departure from the \(\Lambda\)CDM predictions across all ranges of cosmic redshifts (\(z\)), as shown in Figure 7. The snap parameter experiences a steady and continuous decline, showcasing a unique pattern not seen in the standard model. Particularly noteworthy is the behavior of the snap parameter near the present time, at approximately \(z\sim 0\). Our model predicts that the slope of the curve flattens out, leading to an intriguing prediction of \(s(0)\sim 1.6\) for the snap parameter. This deviation from the \(\Lambda\)CDM model provides valuable insights into the dynamics of the universe and suggests alternative mechanisms that might govern cosmic expansion.

_d. \(\{r,s\}\) profile:_ In Figure 8, we present the intriguing evolution of the \(\{r,s\}\) parameters. Our model depicts an interesting journey through several dark energy domains, offering information on the universe's dynamics. Our model begins in the Chaplygin gas-type dark energy region, which is defined by \(r>1\) and \(s<0\). After this, the model crosses the fixed \(\Lambda\)CDM point \(\{1,0\}\) as the cosmic evolution advances, signalling a momentous change. Following that, it reaches the quintessence-type dark energy zone, with \(\{r,s\}\) parameter values of \(r<1\) and \(s>0\). This phase suggests a transition away from the classic cosmological model and towards a different type of dark energy behaviour. Our model subsequently goes through another transition, returning to the Chaplygin gas-type dark energy region. This back-and-forth movement across the fixed \(\Lambda\)CDM point highlights the complexity and richness of the dynamics inherent in our proposed model.

_e. \(\{r,q\}\) profile:_ Figure 9 illustrates the intriguing evolution of the \(\{r,q\}\) parameters in our cosmological model. At the beginning of the cosmic evolution, our model resides in the Chaplygin gas-dominated DE region, characterized by \(r>1\) and \(q>0\). This phase represents a unique state of the universe, influenced by specific properties of the Chaplygin gas. As cosmic time progresses, the model enters the quintessence-dominated DE region, characterized by \(r<1\) and \(q>0\), after crossing the line associated with the constant jerk value \(j=1\). Following this, the model shows a phase transition from deceleration to acceleration and gradually enters the Chaplygin gas DE region again.

_f. Om diagnostic:_ In Figure 10, we present the interesting trend of the \(Om(z)\) diagnostic with cosmic redshift \(z\). The plot displays a consistent positive slope for all values of redshift, suggesting a quintessence-type nature of dark energy. This finding indicates that the universe's expansion is driven by a dynamic and evolving form of dark energy. However, what makes the results even more fascinating is the brief interval of cosmic redshift, specifically \(4.5\leq z\leq 6.5\), where the slope of \(Om(z)\) becomes nearly zero. This intriguing behavior hints at the presence of cosmological constant-type dark energy during this specific range of redshifts.

_g. Information criteria:_ Based on the values presented in Table 2, we can provide a comprehensive comparison between the New Model and the \(\Lambda\)CDM Model using various statistical measures, including \(\chi^{2}_{\rm min}\), \(\chi^{2}_{\rm red}\), \(AIC\), \(\Delta AIC\), \(BIC\), and \(\Delta BIC\). The \(\chi^{2}_{\rm min}\) for the New Model is lower than that of the \(\Lambda\)CDM Model, indicating a better fit of the New Model to the data in terms of minimizing the total \(\chi^{2}\).
The \(\chi^{2}_{\rm red}\) values for both models are quite similar, suggesting that both models provide reasonable fits to the data when considering the number of degrees of freedom. The \(AIC\) value for the New Model is higher than that of the \(\Lambda\)CDM Model, indicating that the \(\Lambda\)CDM Model has a better balance between goodness of fit and model complexity according to the AIC. Nevertheless, the calculated \(\Delta AIC\) of 0.69084 falls within the range of \((0,2)\), indicating that the New Model retains some level of support according to \(\Delta AIC\). The \(BIC\) value for the New Model is higher than that of the \(\Lambda\)CDM Model, reinforcing the idea that the \(\Lambda\)CDM Model is preferred in terms of model complexity and goodness of fit according to the BIC. The \(\Delta BIC\) value of 26.4774 lies well beyond \(10\), indicating that the New Model is strongly disfavored relative to the \(\Lambda\)CDM Model according to the BIC. The evidence therefore suggests that the \(\Lambda\)CDM Model achieves the better overall balance and is preferred over the New Model: although the New Model attains a lower \(\chi^{2}_{\rm min}\), the lower \(AIC\) and \(BIC\) values of the \(\Lambda\)CDM Model indicate its improved standing in model selection, and the calculated \(\Delta BIC\) value of 26.4774 places the New Model in the range of models that are essentially unsupported according to the BIC.

## XI Discussions and Conclusions

In this paper, we presented a cosmological model to study the evolution of the Universe based on a new parametrization of the deceleration parameter. The model considered a spatially flat, homogeneous, and isotropic Friedmann-Lemaitre-Robertson-Walker (FLRW) universe filled with radiation, dark matter (DM), and dark energy (DE). We derived the Friedmann equations and the energy conservation equation for the universe and considered separate conservation equations for radiation, DM, and DE. The new model proposed for the deceleration parameter was given as

\[q_{d}(z)=q_{0}+\frac{\alpha+(1+z)^{\beta}}{q_{1}+q_{2}(1+z)^{\beta}},\]

where \(q_{0}\), \(q_{1}\), \(q_{2}\), \(\alpha\), and \(\beta\) are constants. The model was then tested against observational data from various sources, including cosmic chronometers, type Ia supernovae, baryon acoustic oscillations, and cosmic microwave background measurements. The comparison was done using the chi-square statistic, and an MCMC method was employed for parameter estimation and uncertainty estimation. The analysis showed that the model's predictions were in good agreement with the observational data. The best-fit values of the free parameters \(\Omega_{r0}\), \(\Omega_{m0}\), \(q_{2}\), \(q_{1}\), \(q_{0}\), \(\alpha\), and \(\beta\) were obtained using the MCMC method. These values allowed us to understand the constraints imposed by the observational data on the model. We also compared our proposed cosmological model with observational data and the well-established \(\Lambda\)CDM paradigm. We found that our model's predictions aligned closely with the observed data from Cosmic Chronometers and type Ia supernovae (SNIa), indicating its ability to describe the universe's expansion history.
Additionally, we identified slight discrepancies between our model and the \(\Lambda\)CDM model at higher redshifts, suggesting the presence of unique features in our model at those epochs. The cosmographic results of our new cosmological model demonstrate its remarkable ability to describe the observed universe and its expansion history. The model exhibits excellent agreement with the standard \(\Lambda\)CDM paradigm at higher redshifts. However, the real excitement lies in its prediction of a super-accelerated expansion in the distant future, in contrast to the de Sitter phase predicted by \(\Lambda\)CDM. This finding implies the presence of dark energy driving the accelerated expansion. The evolution of the jerk parameter in our model shows intriguing behavior, with a reversal in its decline around \(z\sim 4.25\), leading to a minimum value around \(z\sim 1\). This departure from the \(\Lambda\)CDM model highlights unique dynamics in our proposed model. Similarly, the snap parameter displays a continuous decline, deviating consistently from the \(\Lambda\)CDM predictions, particularly near the present time. The \(\{r,s\}\) and \(\{r,q\}\) profiles demonstrate fascinating journeys through different dark energy regions, indicating transitions between Chaplygin gas-type and quintessence-type dark energy behaviors. These transitions further differentiate our model from the standard \(\Lambda\)CDM. The \(Om(z)\) diagnostic suggests that our model is driven by quintessence-type dark energy, with a brief interval of nearly constant behavior hinting at cosmological constant-type dark energy at specific redshifts. Regarding the information criteria, our model has less support than the \(\Lambda\)CDM model, but both models adequately fit the observed data. Our new cosmological model presents a compelling alternative to the \(\Lambda\)CDM paradigm, capturing unique aspects of cosmic expansion and offering valuable insights into the nature of dark energy.
2301.04356
Threshold Voltage Control in Dual-Gate Organic Electrochemical Transistors
Organic electrochemical transistors (OECTs) based on Poly(3,4-ethylenedioxythiophene):poly(styrene sulfonic acid) (PEDOT:PSS) are a benchmark system in organic bioelectronics. In particular, the superior mechanical properties and the ionic-electronic transduction yield excellent potential for the field of implantable or wearable sensing technology. However, the depletion-mode operation of PEDOT:PSS-based OECTs causes high static power dissipation in electronic circuits, limiting their application in electronic systems. Hence, having control over the threshold voltage is of utmost technological importance. Here we demonstrate PEDOT:PSS-based dual-gate OECTs with a solid-state electrolyte where the threshold voltage is seamlessly adjustable during operation. We show that the degree of threshold voltage tuning linearly depends on the gate capacitance, which is a straightforward approach for circuit designers to adjust the threshold voltage only by the device dimensions. The PEDOT:PSS-based dual-gate OECTs show excellent device performance and can be pushed to accumulation-mode operation, resulting in a simplified and relaxed design of complementary inverters.
Hsin Tseng, Anton Weissbach, Juzef Kucinski, Ali Solgi, Rakesh Nair, Lukas M Bongartz, Giuseppe Ciccone, Matteo Cucchi, Karl Leo, Hans Kleemann
2023-01-11T08:50:22Z
http://arxiv.org/abs/2301.04356v1
# Threshold Voltage Control in Dual-Gate Organic Electrochemical Transistors

###### Abstract

Organic electrochemical transistors (OECTs) based on Poly(3,4-ethylenedioxythiophene):poly(styrene sulfonic acid) (PEDOT:PSS) are a benchmark system in organic bioelectronics. In particular, the superior mechanical properties and the ionic-electronic transduction yield excellent potential for the field of implantable or wearable sensing technology. However, the depletion-mode operation of PEDOT:PSS-based OECTs causes high static power dissipation in electronic circuits, limiting their application in electronic systems. Hence, having control over the threshold voltage is of utmost technological importance. Here we demonstrate PEDOT:PSS-based dual-gate OECTs with a solid-state electrolyte where the threshold voltage is seamlessly adjustable during operation. We show that the degree of threshold voltage tuning linearly depends on the gate capacitance, which is a straightforward approach for circuit designers to adjust the threshold voltage only by the device dimensions. The PEDOT:PSS-based dual-gate OECTs show excellent device performance and can be pushed to accumulation-mode operation, resulting in a simplified and relaxed design of complementary inverters.

## 1 Introduction

Organic electrochemical transistors (OECTs) have recently been in the spotlight of research because of their use in bioelectronics [1, 2, 3], neuromorphic computing [4, 5, 6, 7], and biological or chemical sensors [8, 9, 10, 11]. All these application scenarios are based on the conduction mechanism in organic mixed ionic-electronic conductors (OMIECs) [12], enabling efficient translation of ionic or chemical signals into electronic signals and vice versa. In addition, OMIECs offer easy production by printing and excellent mechanical properties [13, 14]. With the development of solid-state electrolytes [15, 16, 17, 18], OECTs might be integrated into wearable or even implantable systems with intelligent sensor function. In a common OECT, ions from the electrolyte penetrate the OMIEC polymer matrix, and the distribution of ions in the OMIEC can be manipulated by applying a bias voltage to the gate electrode. As the concentration of mobile holes/electrons in the OMIEC is regulated by the ion concentration via an electrochemical redox reaction, the gate bias can be used to control the conductance of the transistor channel and hence switch the transistor on and off [19]. One important device parameter of OECTs for sensing, computing, and circuitry is the threshold voltage (V\({}_{\text{th}}\)). It is the point at which the gate bias switches the transistor between the high-current accumulation regime and the low-current depletion regime. It has great technological relevance, for example for logic gates, where it determines the trip point of digital inverters. Furthermore, the threshold voltage is also of great importance for sensing applications as, in the sub-threshold regime, the current through the transistor depends exponentially on the gate bias, offering the highest possible sensitivity of the system (at the cost of losing sensor linearity). Hence, having control over the threshold voltage is of utmost technological importance, and several strategies have been proposed to tune the threshold voltage by adjusting device parameters or materials.
For example, changing the gate electrode material affects the potential drop at the gate electrode and changes the OECT operation regime from capacitive to Faradaic [20], leading to different threshold voltages. Alternatively, modifying the gate electrode with redox-active species or dopants controls the gate's work function [21, 22], also serving the same purpose. In addition, the design of the channel material, such as its chemical structure, influences the interactions with electrolytes and can, therefore, regulate OECT operation and performance [23, 24]. However, even without manipulating the material system, the threshold voltage can be tuned solely by the device geometry. For example, the gating efficiency of the same gate electrode material can be improved as the gate capacitance is increased [25], thus reducing the threshold voltage. The issue of threshold control is particularly relevant for the most often used OECT material: Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a workhorse OECT material because it is easy to process, commercially available, and shows high transconductance. PEDOT:PSS is, however, highly doped, resulting in depletion-mode (normally-on) transistor behavior. A depletion-mode device is unfavorable for circuitry because of high power dissipation. In particular, for the design of inverter circuits, which are a basic element of digital electronics, the depletion-mode behavior is very disadvantageous, as it complicates the inverter design and reduces the noise margin and gain of the circuits. Unfortunately, chemically modifying the gate electrode does not give an accumulation-mode device, although it helps tune the threshold voltage [21, 22]. On top of that, using this approach, the threshold voltage cannot be adjusted anymore once the device has been manufactured. Although the chemical modification of either the gate electrode or the channel material offers control over the threshold voltage, controlling it via the device design is technologically preferable, as the use of different materials, e.g., for the gate and the channel, or additional chemical doping significantly increases the complexity of device and circuit fabrication. Using a dual-gate architecture is an alternative strategy to control the threshold voltage by the device design. This approach has been put forth for conventional organic thin-film transistors, and precise control over the threshold voltage as well as tunability during operation have been demonstrated [26, 27, 28]. Two gate insulators are used to isolate the semiconductor channel from the top and bottom gate electrodes in these architectures. Thereby, two channels are formed at opposing interfaces of the semiconductor layer, which are used to regulate the conductance of the transistor. Having separated electrolytes, this design principle could also be adopted for OECTs using gate electrodes at the bottom and top. However, OECTs are usually fabricated in a side-gate configuration due to the conductive nature of the electrolyte. This geometry is advantageous because of its easy processing and high production yield. In contrast to a typical dual-gate thin-film transistor [26], the two gates in a dual-gate OECT in side-gate configuration would share a common electrolyte, and it is not clear whether the two gates can be used to control the threshold voltage independently or whether they simultaneously influence the device performance. Recently, Ji et al.
demonstrated a dual-liquid-gated OECT using electropolymerization to modify the gate capacitance with PEDOT:PSS [29] and showed that the transconductance can be tuned to some extent. However, accurate control over the threshold voltage was not possible with their approach, and most importantly, they could not make PEDOT:PSS-based OECTs operate as accumulation-mode transistors. Here, we demonstrate PEDOT:PSS-based dual-gate OECTs with a solid-state electrolyte where the threshold voltage can be continuously tuned during operation. We show that the degree of threshold voltage tuning depends linearly on the gate capacitance, which is a straightforward approach for circuit designers to adjust the threshold voltage only by the device dimensions. The PEDOT:PSS-based dual-gate OECTs, which can be densely integrated using conventional photolithography or printing techniques, show excellent device performance and can be pushed to accumulation-mode operation, leading to simplified processing, relaxed design requirements, and improved performance of complementary inverters. ## 2 Results and Discussion Figure 1(a, b) shows the schematic layout of a dual-gate OECT. The transistor consists of two in-plane gate electrodes, a sweeping gate (Gate 1) and a controlling gate (Gate 2), the semiconductor channel with source and drain electrodes, and the solid-state electrolyte. PEDOT:PSS is used as the semiconductor channel material as well as for both gate electrodes in order to increase the capacitance of the gate. Using the same material for the gate and the channel reduces the fabrication complexity, and the volumetric capacitance of a PEDOT:PSS-based gate strongly reduces the voltage loss at the gate/electrolyte interface [25]. The capacitance of the PEDOT:PSS-based gate can be scaled by the film area [30, 31, 32], provided that the film is formed in the same spin-coating process as the channel material, giving the same thickness of 100 nm. To evaluate the effect of the capacitance of the control gate (Gate 2) on the threshold voltage, we only vary the area of Gate 2 (9200 \(\mu\)m\({}^{2}\), 14700 \(\mu\)m\({}^{2}\), 44100 \(\mu\)m\({}^{2}\), and 60900 \(\mu\)m\({}^{2}\)), leaving the areas of both the channel and Gate 1 and the 30 \(\mu\)m distance between the gate and the channel fixed (cf. Figure 1). The solid-state electrolyte is inkjet-printed on top of the channel and the gate electrodes, followed by UV-light induced cross-linking [16]. More details on the fabrication process of these integrated OECTs are given in the Experimental Section. Figure 2 presents the electrical characterization of a dual-gate OECT with \(\rm{A_{Gate2}=A_{Gate1}=44100}\ \mu\)m\({}^{2}\). The transfer characteristics are measured at a drain-source bias V\({}_{\rm DS}\) of -0.1 V for Gate 2 bias ranging from 0 V to +1 V in steps of 0.2 V. The transfer curves systematically shift with the applied V\({}_{\rm GS2}\). For a controlling bias V\({}_{\rm GS2}\) of 0 V (grounded), corresponding to the transfer curve at the far right in Figure 2(a, b), the effect of Gate 2 is negligible. The dual-gate OECT behaves like a single-gate OECT, and only Gate 1 redistributes ionic charges in the electrolyte. The effect of V\({}_{\rm GS2}>0\) V on the dual-gate OECT is shown in the curves on the left in Figure 2(a, b). Polarons in PEDOT:PSS are neutralized by cations driven by V\({}_{\rm GS2}\).
To compensate for V\({}_{\rm GS2}\) and achieve the original drain current at V\({}_{\rm GS2}\) of 0 V, the sweeping gate bias V\({}_{\rm GS1}\) has to be shifted to more negative values. Therefore, the transfer curve and hence the threshold voltage are shifted to the left. The interplay between the biases on the two gate electrodes determines the current in the dual-gate OECT at a given drain-source bias. The gate current in Figure 2(b) is significantly lower than the drain current because of the precise patterning technology of the solid-state electrolyte. As shown in Figure 2(c), output curves are measured at V\({}_{\rm GS2}\) = 0 V. The drain current is only affected by V\({}_{\rm GS1}\). Further, in Figure 2(d), the drain current is simultaneously influenced by V\({}_{\rm GS1}\) and V\({}_{\rm GS2}\); at the same x-axis and y-axis scales, the drain current is close to 0 A, proving that the PEDOT:PSS-based OECT in fact operates as an accumulation-mode transistor (in agreement with the transfer curve shown in Figure 2(a)). It is worth mentioning that the effect of inhomogeneous dedoping becomes relevant at high V\({}_{\rm DS}\), where the curve is supposed to saturate [33]. In this study, we report on the change of the threshold voltage for a small V\({}_{\rm DS}\) = -0.1 V, where inhomogeneous dedoping can be ignored and the threshold voltage is well defined. If inhomogeneous dedoping comes into play, the threshold voltage becomes a function of the drain-source voltage. Still, V\({}_{\rm GS2}\) can be used to manipulate the transconductance of the transistor. We postulate that the dual-gate OECT device behavior can be modelled as two parallel gate capacitors connected in series to the channel capacitor, as shown in Figure 1(c). This is because the electrolyte resistance (200 k\(\Omega\)) is negligible compared to the shunt resistance of the electrochemical double layers formed at the semiconductor-electrolyte interface (typical leakage current in the range of 10 nA at 1 V, as shown in Figure 2(b)). The function of the solid-state electrolyte does not differ from that of a liquid electrolyte such as NaCl\({}_{\rm(aq)}\). The solid polymer structure of the solid-state electrolyte forms a matrix for the movement of the ionic liquid. As water has been used as a solvent for the solid-state electrolyte, the PEDOT:PSS layer is always in a swollen state, which allows ions from the ionic liquid to move in and out of the PEDOT:PSS layer and thus dope and dedope the semiconductor [16]. Accordingly, the voltage only drops across the gate capacitance/channel capacitance, and the solid-state electrolyte can be treated as an equipotential surface. The total effective gate-source voltage influencing the channel is then determined by the total capacitance of C\({}_{\rm Gate1}\) and C\({}_{\rm Gate2}\) and the voltages applied. We extract the threshold voltage (V\({}_{\rm th}\)) from the transfer characteristics in Figure 2(a). V\({}_{\rm th}\) is defined by plotting the drain current against the gate-source voltage, linearly fitting this curve, and taking the intercept on the V\({}_{\rm GS1}\) axis. A V\({}_{\rm DS}\) of -0.1 V is chosen for extracting V\({}_{\rm th}\) in order to avoid the effect of non-uniform dedoping in our system [33]. Figure 3(a) presents the threshold voltage as a function of the controlling gate bias V\({}_{\rm GS2}\). Each data point is the mean value of 5 devices for each geometry and clearly shows the shift in threshold voltage.
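The threshold extraction just described is a simple linear fit plus an axis intercept; below is a minimal sketch in Python, where the transfer-curve arrays and the fit window are hypothetical toy values rather than measured data from this work:

```python
import numpy as np

def extract_vth(v_gs1, i_d, fit_window):
    """Estimate V_th as the V_GS1-axis intercept of a linear fit to the
    (roughly linear part of the) transfer curve, as described in the text."""
    mask = (v_gs1 >= fit_window[0]) & (v_gs1 <= fit_window[1])
    slope, intercept = np.polyfit(v_gs1[mask], i_d[mask], 1)
    return -intercept / slope  # x-axis intercept where I_D = 0

# Toy depletion-mode transfer curve at V_DS = -0.1 V: |I_D| falls roughly
# linearly with V_GS1 and vanishes near the (assumed) threshold of 0.4 V.
v = np.linspace(-1.5, 1.0, 126)
i_d = np.clip(-1e-4 * (v - 0.4), 0.0, None)
print(f"V_th = {extract_vth(v, i_d, (-0.5, 0.3)):.2f} V")  # -> 0.40 V
```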
It should be noted that these solid-state electrolyte OECTs show a significant hysteresis in the transfer curve [16], which, strictly speaking, makes it impossible to derive a single threshold voltage value. However, using the dual-gate configuration, we observe that the transfer curve, including hysteresis, is homogeneously shifted. Hence, for the sake of simplicity, we only plot the transfer curve for switching the device from on- to off-state, thereby ignoring the hysteresis. When the area of Gate 2 is enlarged, the slope in Figure 3(a) increases, which allows us to turn these PEDOT:PSS-based OECTs from depletion- to accumulation-mode operation. Figure 3(b) shows that the degree of controlling the threshold voltage in dual-gate OECTs linearly scales with the ratio of the gate areas (being equal to the ratio of capacitances). Using the equivalent circuit proposed in Figure 1(c), we can predict the scaling of the threshold voltage shift as a function of the gate area ratio by the following expression: \[\mathrm{V_{th}}=\frac{\mathrm{V_{th}^{0}}\,(\mathrm{A_{Gate1}}+\mathrm{A_{Gate2}}+\mathrm{A_{Channel}})}{\mathrm{A_{Gate1}}}-\frac{\mathrm{V_{GS2}}\,\mathrm{A_{Gate2}}}{\mathrm{A_{Gate1}}} \tag{1}\] where the constant \(\mathrm{V_{th}^{0}}\) represents the threshold voltage without the presence of Gate 2. As shown in Figure 3(b), the experimental data fit the model predictions very well, and even without significantly over-sizing Gate 2, a strong tunability of \(\mathrm{V_{th}}\) is achieved. In fact, accumulation-mode operation can already be achieved if the area of Gate 2 is only 1.38 times the area of Gate 1 (Figure 3(a)). We demonstrate the advantage of this dual-gate OECT technology for logic circuits. As an example, a complementary inverter, combining a p-type and an n-type OECT, is chosen here as the simplest logic circuit. It works as a digital amplifier and is often combined with OECT-based sensors to increase the biosignal sensitivity for bioelectronics [9]. The most commonly used n-type semiconductor material for OECTs is poly(benzimidazobenzophenanthroline) (BBL) [23, 34, 35, 36], with which an OECT operates in accumulation mode. Due to the depletion-mode operation of PEDOT:PSS-based OECTs and the large transconductance compared to BBL-based devices, good inverter characteristics can only be achieved if the BBL-based OECT is significantly larger than the PEDOT:PSS-based device. The channel width to length ratio of the BBL-based devices is typically chosen to be at least a thousand times larger than that of the PEDOT:PSS-based device (e.g., 16000 times larger in Ref. [9]). A complementary inverter layout with a PEDOT:PSS-based dual-gate OECT (p-type) and a BBL-based OECT (n-type) is shown in Figure 4(a). The input voltage (\(\mathrm{V_{in}}\)) is applied to the common gate, i.e., the gate of the BBL device and Gate 1 of the PEDOT:PSS-based dual-gate OECT, and is swept from 0 V to 0.8 V. The supply voltage \(\mathrm{V_{DD}}\) is set to 0.8 V. A constant voltage at Gate 2 (\(\mathrm{V_{GS2}}=\)0.25 V, 0.5 V, 1 V) is applied during the inverter measurement to control the threshold voltage of the p-channel device. The output voltage (\(\mathrm{V_{out}}\)) is measured to determine the transfer curve of the inverter, which is shown in Figure 4(b). As \(\mathrm{V_{GS2}}\) increases, the inverter transfer curve shifts to the left. In particular, the trip point of the inverter is seamlessly adjusted by \(\mathrm{V_{GS2}}\) as it controls the threshold voltage of the dual-gate devices.
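The scaling behind Equation (1) is easy to evaluate numerically. Below is a minimal sketch using the Gate 2 areas quoted above; the channel area and the single-gate threshold \(\mathrm{V_{th}^{0}}\) are hypothetical placeholders, since their values are not stated explicitly here, and capacitance is taken as proportional to area (all PEDOT:PSS films share the same 100 nm thickness):

```python
def v_th(v_th0, v_gs2, a_gate1, a_gate2, a_channel):
    """Threshold voltage of the dual-gate OECT from Equation (1)."""
    return (v_th0 * (a_gate1 + a_gate2 + a_channel) - v_gs2 * a_gate2) / a_gate1

a_gate1 = 44100.0    # um^2, fixed sweeping gate (Gate 1)
a_channel = 5000.0   # um^2, hypothetical placeholder
v_th0 = 0.5          # V, hypothetical threshold without Gate 2

for a_gate2 in (9200.0, 14700.0, 44100.0, 60900.0):
    slope = -a_gate2 / a_gate1  # dV_th/dV_GS2: linear in the gate area ratio
    print(f"A_Gate2/A_Gate1 = {a_gate2 / a_gate1:.2f}: "
          f"dV_th/dV_GS2 = {slope:+.2f}, "
          f"V_th(V_GS2 = 1 V) = {v_th(v_th0, 1.0, a_gate1, a_gate2, a_channel):+.2f} V")
```

With these placeholder values, the largest gate area ratio (1.38) already pulls \(\mathrm{V_{th}}\) below 0 V at \(\mathrm{V_{GS2}}=1\) V, i.e., into accumulation-mode operation, consistent with the trend reported above.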
The dual-gate design of OECTs offers a more robust design and operation of circuits, which can be used to improve the sensitivity of bioelectronic systems and thus contributes greatly to the field of bioelectronics. Figure 1: (a) Top view and cross-section configuration of a PEDOT:PSS-based dual-gate OECT, including source and drain electrodes, the PEDOT:PSS channel, and two PEDOT:PSS gates symmetric with respect to the channel. All the PEDOT:PSS films have the same thickness of 100 nm. (b) Optical microscope image of a dual-gate OECT. (c) Equivalent circuit model of the dual-gate OECT. Figure 2: Electrical characterization of the PEDOT:PSS-based dual-gate OECT with \(\rm A_{Gate2}=\rm A_{Gate1}=44100\ \mu m^{2}\). (a) Transfer characteristic in linear scale at \(\rm V_{DS}\) = -0.1 V. \(\rm V_{GS2}\) is fixed for the loop of \(\rm V_{GS1}\) sweeping from -1.5 V to +1 V. (b) Transfer characteristic in logarithmic scale, including the drain current (solid line) and the gate current (dotted line). The same color of the drain current and the gate current means they are under the same \(\rm V_{GS2}\). (c) Output characteristic of the dual-gate OECT at \(\rm V_{GS2}\) = 0 V. (d) Output characteristic at \(\rm V_{GS2}\) = 0.8 V. Figure 3: (a) Threshold voltage of a PEDOT:PSS-based dual-gate OECT as a function of \(\mathrm{V_{GS2}}\) for different gate area ratios. The larger area of Gate 2 (red) gives a steeper slope, namely a larger degree of tuning. Each data point is a mean value for 5 devices, and each device is measured 3 times. (b) The degree of threshold voltage tuning increases with the ratio of the gate areas. Figure 4: (a) A complementary inverter layout of a PEDOT:PSS-based dual-gate OECT with \((\frac{W}{L})_{p}=5\) and a BBL OECT with \((\frac{W}{L})_{n}=2000\). The input voltage V\({}_{\text{in}}\) is applied to Gate 1 of the PEDOT:PSS-based dual-gate OECT and to the gate of the BBL OECT. V\({}_{\text{GS2}}\) is constantly applied at Gate 2 of the PEDOT:PSS-based dual-gate OECT. Regarding the device configuration of the two types of OECTs, the gate electrode of the BBL OECT is Ag/AgCl immersed in the ionic liquid. Further device dimensions can be found in the Experimental section. (b) Inverter transfer characteristic: as the threshold voltage of the PEDOT:PSS-based dual-gate OECT is tuned by increasing V\({}_{\text{GS2}}\) (from 0.25 V, 0.5 V to 1 V), the transfer curve (solid line) shifts to the left, and the inverter gain (dashed line) increases. ## 3 Conclusion In conclusion, we demonstrate continuous tuning of the threshold voltage in PEDOT:PSS-based dual-gate OECTs. These dual-gate structures are easy to fabricate, employing the commonly used side-gate architecture. The threshold voltage scales linearly with the voltage at the control gate (Gate 2), and the degree of tuning increases linearly with the gate area ratio. Furthermore, we modeled the device behavior with an equivalent circuit, and the experimental data fit very well with the model predictions. The PEDOT:PSS-based dual-gate OECTs, which can be densely integrated using conventional photolithography or printing techniques, show excellent device performance, and they can be pushed to accumulation-mode operation, leading to improved performance and relaxed design requirements of complementary OECT inverters. ## 4 Experimental _Device fabrication_: The process of structuring the electrode and channel pattern of a dual-gate OECT follows ref.[8].
Source, drain, and gate electrodes were patterned on a glass substrate with 50 nm Au and 3 nm Cr by photolithography and wet-etching using Standard Gold Etchant and Standard Chromium Etchant. A PEDOT:PSS-based solution was prepared with 95 wt.% of PEDOT:PSS (Heraeus Clevios PH 1000, 1.1 wt.% solids in water, 1:2.5) and 5 wt.% of ethylene glycol. This solution was spin-coated at 3000 rpm on the electrodes, and the 100 nm PEDOT:PSS thin film was patterned by fluorine-based photolithography [37] and dry etching [38] using O\({}_{2}\) and Ar. The solid-state electrolyte precursor solution, containing 1 mL deionized water, 750 mg N-isopropylacrylamide, 20 mg N,N'-methylenebisacrylamide, 200 mg 2-hydroxy-4'-(2-hydroxyethyl)-2-methylpropiophenone, and 1.5 mL 1-ethyl-3-methylimidazolium ethyl sulfate [16], was inkjet printed on top of the active area, followed by 2 minutes of UV crosslinking. The PEDOT:PSS-based dual-gate OECT was stored in the glovebox overnight and then encapsulated with glass for further measurements. BBL solution was prepared by dissolving 5 mg BBL (Sigma Aldrich) in 1 mL methanesulfonic acid and stirring at 70 \({}^{\circ}\)C overnight. The BBL solution was then spin-coated at 1000 rpm on a gold substrate with W = 10 mm, L = 5 \(\mu\)m. Afterwards, the BBL film was soaked in ethanol for 1 minute and then dried on a hot plate at 150 \({}^{\circ}\)C for 5 minutes; the resulting BBL film is 70 nm thick. _Electrical characteristics_: Transfer and output characterizations were done with Keithley SMUs controlled by the software SweepMe! BBL OECTs were measured with a Ag/AgCl gate and the ionic liquid 1-ethyl-3-methylimidazolium ethyl sulfate under ambient conditions, giving V\({}_{\rm th}\) = 0.11 V and g\({}_{\rm m}\) = 0.73 mS.
2308.10628
Strangeness production in neutron star matter
Based on a dynamical model of particle production, the production and fraction of exotic components in neutron star matter are analyzed. It is found that there exists a small fraction of strangeness in twice saturation density matter. For five times saturation density matter, the fraction of strange baryons can be as high as 25-50\%, depending on the equation of state used. The neutron-proton asymmetry of dense matter does not significantly impact the strangeness fraction in neutron star matter. This research provides new insights into the strange components in neutron stars.
Gao-Chan Yong
2023-08-21T10:57:17Z
http://arxiv.org/abs/2308.10628v2
# Strangeness production in neutron star matter ###### Abstract Based on a dynamical model of particle production, the production and fraction of exotic components in neutron star matter are analyzed. It is found that there exists a small fraction of strangeness in twice saturation density matter. For five times saturation density matter, the fraction of strange baryons can be as high as 25-50%, depending on the equation of state used. The neutron-proton asymmetry of dense matter does not significantly impact the strangeness fraction in neutron star matter. This research provides new insights into the strange components in neutron stars. Neutron stars are among the densest objects in our universe, with densities exceeding that of atomic nuclei and immense gravitational fields. They are formed from the remnants of supernova explosions and were originally thought to be made up primarily of neutrons. However, a series of theoretical investigations have shown that neutron stars may also contain strange matter, a type of matter composed of strange quarks [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. This "strangeness" in neutron star matter has sparked interest in the physics community, as it could provide valuable insights into the strong nuclear force and the behavior of matter under extreme conditions [11; 12; 13; 4; 7]. The presence of strangeness in neutron stars could have significant implications for our understanding of these objects [14]. It could affect the internal structure and cooling of neutron stars [15; 16; 17; 18; 19]. Strangeness could also affect the outcome of neutron star mergers [20], potentially producing unique gravitational wave signals [21]. Strangeness generally softens the equation of state of dense matter and thus might play a role in the shock-wave/neutrino-delayed-shock process, presumably changing the explosion in the formation of core-collapse supernovae [22; 23; 24; 25]. Studying the properties and abundance of strangeness in neutron stars is therefore an important area of research for astrophysicists and particle and nuclear physicists. By observing the properties and behavior of neutron stars, and comparing these with theoretical models and simulations, scientists hope to gain a better understanding of the nature of matter under extreme conditions, as well as the processes that govern the evolution of these objects over time [16]. There are several theoretical methods to study strange particles in neutron stars. One method involves using various nuclear many-body theories to calculate the properties of strange particles in dense matter [6; 13]. Another method is to use effective field theories to describe the interaction between strange particles and nucleons [26]. Additionally, some researchers use perturbative quantum chromodynamics to study strange quark matter [27; 28]. One can also use astrophysical observations of neutron stars to infer the properties of strange particles within them [29; 30]. In all the above methods, the interactions between strange and non-strange particles in dense matter are, in fact, rarely constrained by comparison with high-energy nuclear experiments in terrestrial laboratories. Relativistic heavy-ion collisions can reveal the interactions among particles at short distances or high baryonic densities. Determination of the strong nuclear force including strangeness in the context of high densities is the core question of the "hyperon puzzle" [12] and affects the fraction of strangeness.
The fraction of strangeness, conversely, affects the bulk stiffness of neutron star matter and hence the maximum mass of neutron stars. The interplay between the fraction of strangeness and the equation of state of neutron star matter complicates the "hyperon puzzle". To first determine the fraction of strangeness in neutron star matter, in the present study, I use a relativistic heavy-ion collision transport model, which is frequently used to simulate particle production in heavy-ion collisions, to study strangeness production in neutron star matter through box simulations. The collision dynamical model used is a mode of the A Multi-Phase Transport (AMPT) model [31] that deals only with the pure hadron cascade with hadronic mean-field potentials (dubbed AMPT-HC) [32]. In the AMPT-HC model, the Woods-Saxon nucleon density distribution and the local Thomas-Fermi approximation are used to initialize the position and momentum of each nucleon in the colliding projectile and target. In addition to the usual elastic and inelastic collisions, hadron potentials with the test-particle method are applied to nucleons, baryon resonances and strange particles, as well as their antiparticles [32; 33]. In the model, \(\pi\), \(\rho\), \(\omega\), \(\eta\), \(K\), \(K^{*}\), \(\phi\), \(N\), \(\Delta\), \(N^{*}(1440)\), \(N^{*}(1535)\), \(\Lambda\), \(\Sigma\), \(\Xi\) and \(\Omega\) are included. Since the form of the single-nucleon potential at high momenta and high densities is still poorly known, to make minimal assumptions, here we use the density-dependent single-nucleon mean-field potential \(U(\rho)=\alpha\frac{\rho}{\rho_{0}}+\beta(\frac{\rho}{\rho_{0}})^{\sigma}\) with \(\alpha=(-29.81-46.9\frac{k+44.73}{k-166.32})\) MeV, \(\beta=23.45\frac{k+255.73}{k-166.32}\) MeV, \(\sigma=\frac{k+44.73}{k-11.05}\) (where \(\rho_{0}\) and \(k\) stand for the nuclear saturation density and the incompressibility of nuclear matter, respectively) to model the stiffness of nuclear matter, i.e., the equation of state (EoS) of nuclear matter [34]. As for the asymmetric part of the EoS, which is crucial for neutron star matter, a form \(E_{sym}(\rho/\rho_{0})=34.5(\rho/\rho_{0})^{\gamma}\) is employed [35], where \(\gamma=0.5\) or \(1.5\) for the default soft symmetry energy and a stiff symmetry energy counterpart, respectively. In the AMPT-HC model, strangeness production from baryon-baryon, meson-baryon as well as meson-meson scatterings was specified in detail in Refs. [31; 32; 33] and references therein. The form of the kaon potential is taken from Ref. [36], while no mean-field potential is used for pions. For the strange baryons \(\Lambda\), \(\Sigma\), \(\Xi\) and \(\Omega\), we adopt the quark counting rule asserting that these strange baryons interact with other baryons only through their non-strange (2/3, 2/3, 1/3, 0) constituents [37; 38]. Therefore, the \(YN\) potential is \(2/3U_{N}\) (where \(Y\) stands for \(\Lambda\) or \(\Sigma\), and \(U_{N}\) is the single-nucleon potential); the \(YY\) potential is \(4/9U_{N}\); the \(Y\Xi\) potential is \(2/9U_{N}\) and the \(\Xi\Xi\) potential is \(1/9U_{N}\). There is no mean-field potential for the three-strange-quark \(\Omega\). Neutron star matter is a long-lived and stable form of matter in the universe, but one cannot simulate the particle-particle scatterings related to strangeness production in a box over an infinite number of time steps. In practice, I employ a finite number of time steps, which can saturate strangeness production quickly without introducing Pauli blocking.
Introducing Pauli blocking would prolong the saturation time in box simulations but does not evidently affect the results of the present study. This is because Pauli blocking merely suppresses a scattering with a certain probability; the scattering would still happen if given more chances to occur. If the reaction time were infinite, a Pauli-blocked scattering would have more chances to occur. In the simulations, for a specific scattering channel, besides net charge conservation, the net strangeness is conserved. However, during the collisions, some strange particles are short-lived and decay quickly; the net strangeness is thus not conserved over the whole simulation process. Nuclear matter computations given by nuclear many-body models are frequently seen in studies of the properties of neutron star matter as well as the bulk properties of nuclear matter in nuclei or heavy-ion collisions. Around nuclear saturation density, since there are many empirical values from nuclear experiments to use, most nuclear many-body models give reasonable predictions. Beyond saturation density, however, model extrapolations usually diverge, simply because the strong interactions among different particles used at high baryon densities in nuclear many-body models are less well confirmed. A hadronic transport model dealing with particle-particle scatterings at high energies naturally tackles the interactions among different particles at high densities. Since the results of particle production given by transport models are frequently compared with nuclear experimental data from facilities worldwide, the scattering matrix elements (determined by the strong interaction) among different particles at high densities in heavy-ion collisions are frequently adjusted. Therefore, a hadronic transport model for relativistic heavy-ion collisions can capture certain properties of strongly interacting matter. Figure 1 shows nuclear matter in a box with periodic boundary conditions, simulated by the pure hadronic transport model for relativistic heavy-ion collisions, AMPT-HC. In the simulations, besides particle-particle collisions, mean-field potentials are added for most baryons and some mesons. The mean-field potential plays an important role in the long evolution process of dense matter. Coordinates of the initial nucleons are randomly distributed in the box, while the initial momentum of each nucleon is decided by the local Thomas-Fermi approximation with the Fermi momentum calculated from its local density, i.e., \(p_{F}=[3\pi^{2}\hbar^{3}\rho(r)_{n,p}]^{1/3}\). In this study, as zero-temperature neutron stars are the most frequently discussed, the initial temperature of the simulation is set to zero. To see how the non-nucleon particles are produced in dense matter, Figure 2, showing the time evolution of non-nucleon particle production in dense matter, is plotted. It is seen that the non-nucleon particles are gradually produced as time increases. The non-strange \(\Delta\) resonance and \(\pi\) meson are produced first due to their low production thresholds. Their yields soon saturate as time increases, suggesting that a balance of production and absorption is reached. As time increases, the non-strange particles, experiencing multiple scatterings, have enough energy to surpass the threshold energies of strangeness production. One thus sees that the singly strange particles \(K^{+}\), \(\Lambda\), \(\Sigma^{+}\) are produced at later times and gradually reach saturation.
The associated production of single strangeness is clearly demonstrated in Figure 2. When there are enough singly strange particles in dense matter, the doubly strange \(\Xi^{-}\) is ready to be produced. It is clearly shown that the doubly strange \(\Xi^{-}\) begins to be produced at the latest time and gradually reaches saturation after 1000 fm/c. Overall, there is a rapid saturation of the production of non-nucleon particles in even denser matter. Figure 1: Sketch of the initial nuclear matter (using 1 run and 1 test particle per nucleon as an example exhibition) at five times saturation density, i.e., \(5\rho_{0}\), in a box with periodic boundary conditions. The dimensions of the cubic box are a = b = c = 5 fm. The position of the center of the box is (0, 0, 0). In practice, a particle that leaves the box on one side should enter it from the opposite side with the same momentum. In the real simulations, 10 test particles per nucleon and 1000 runs were employed. The time step is 0.2 fm/c and the code runs and stops at 1000 fm/c. From Figure 2, one sees that in static dense matter, nucleons, resonances and mesons, as well as singly and doubly strange particles, may all exist. The existence of non-nucleon particles is expected to alter the bulk properties of dense matter. In neutron star matter, the number of neutrons is generally considered to be about 9 times the number of protons. It is thus necessary to see if the previous result on the production of non-nucleon particles changes in neutron star matter with a very large asymmetry between neutron and proton numbers. It is also interesting to see if the symmetry energy plays a role in the production of non-nucleon particles in dense and asymmetric matter. Figure 3 shows the typical particle (\(\Delta^{0}\), \(\pi^{+}\), \(K^{+}\), \(\Lambda\), \(\Sigma^{+}\) and \(\Xi^{-}\)) production in dense symmetric (N = Z) and asymmetric (N/Z = 9) matter, and the effects of the EoS with varying symmetry energy are also shown. Firstly, it is seen that the asymmetry of dense matter in fact has little effect on the production of non-nucleon particles. This is not only because the production of these particles is less isospin-dependent but also because most asymmetry effects have been smoothed out after many scatterings among particles. The symmetry energy, however, does affect the production of non-nucleon particles, especially at low densities. The stiff symmetry energy (\(\gamma\) = 1.5) increases the average energy per nucleon and causes more energetic nucleon-nucleon collisions; therefore, more non-nucleon particles are produced. Secondly, the fractions of \(\pi^{+}\) and \(K^{+}\) mesons are overall larger than those of \(\Delta^{0}\), \(\Lambda\), \(\Sigma^{+}\) and \(\Xi^{-}\), except for twice saturation density with a soft EoS. Figure 2: Time evolution of several typical particle (\(\Delta^{0}\), \(\pi^{+}\), \(K^{+}\), \(\Lambda\), \(\Sigma^{+}\) and \(\Xi^{-}\)) productions per initial baryon in dense matter with, respectively, 2, 3 and 5 times nuclear saturation density, employing a pure hadronic transport model. The fractions of other particles can be estimated via the isospin symmetry of the related reaction channels. Finally, Figure 3 further demonstrates that even at two times saturation density, non-nucleon particles still occupy certain proportions. One should thus be careful when predicting the properties of dense matter above twice saturation density via nucleon-only many-body approaches.
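The mean-field ingredients quoted above can be collected into a few lines of code; below is a minimal sketch, where the saturation density \(\rho_{0}=0.16\) fm\({}^{-3}\) and the soft incompressibility \(k=230\) MeV are assumed standard values (the text itself only quotes \(k=380\) MeV for the stiff case):

```python
import numpy as np

HBARC = 197.327  # MeV fm

def nucleon_potential(u, k=230.0):
    """Single-nucleon mean-field potential U (MeV) at u = rho/rho_0,
    parametrized by the incompressibility k (MeV), as given in the text."""
    alpha = -29.81 - 46.9 * (k + 44.73) / (k - 166.32)
    beta = 23.45 * (k + 255.73) / (k - 166.32)
    sigma = (k + 44.73) / (k - 11.05)
    return alpha * u + beta * u**sigma

def symmetry_energy(u, gamma=0.5):
    """E_sym = 34.5 (rho/rho_0)^gamma (MeV); gamma = 0.5 (soft) or 1.5 (stiff)."""
    return 34.5 * u**gamma

# Quark counting rule: hyperons feel U_N scaled by their non-strange content.
HYPERON_SCALE = {"Lambda": 2 / 3, "Sigma": 2 / 3, "Xi": 1 / 3, "Omega": 0.0}

def fermi_momentum(rho_np):
    """Local Thomas-Fermi Fermi momentum (MeV/c) for the local neutron or
    proton density rho_np (fm^-3): p_F = hbar (3 pi^2 rho)^(1/3)."""
    return HBARC * (3 * np.pi**2 * rho_np)**(1 / 3)

def wrap_periodic(x, box=5.0):
    """Periodic boundary: a particle leaving the 5 fm cubic box centred on
    the origin re-enters from the opposite side, momentum unchanged."""
    return (x + box / 2) % box - box / 2

rho0 = 0.16  # fm^-3, assumed saturation density
for n in (2, 3, 5):
    print(f"{n} rho_0: U_N = {nucleon_potential(n):7.1f} MeV, "
          f"U_Lambda = {HYPERON_SCALE['Lambda'] * nucleon_potential(n):6.1f} MeV, "
          f"p_F(neutron, N=Z) = {fermi_momentum(n * rho0 / 2):4.0f} MeV/c")
```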
The basic question of the "hyperon puzzle" in the neutron star interior is the fraction of strangeness in neutron star matter. To show the fraction of strange baryons in neutron star matter, ratios of the strange baryon number to the total baryon number are shown for symmetric and asymmetric matter with various EoSs. From Figure 4, it is seen that the fractions of strange baryons become larger for even denser matter. For dense matter at \(2\rho_{0}\), one can see that the fractions of strange baryons are about 4% for symmetric (N = Z) and asymmetric (N/Z = 9) matter with a soft EoS. Such fractions increase evidently (roughly from 4% to 10% or 20%) either for a stiff symmetry energy (\(\gamma\) = 1.5) or, further, for a large incompressibility coefficient (\(k\) = 380 MeV). But the effects of the EoS on the fractions of strange baryons become less evident for even denser matter. For dense matter at \(5\rho_{0}\), the fractions of strange baryons can be as high as 25-50%, depending on the EoS employed. Such a large fraction of strange baryons in dense matter calls for well-determined hyperon-nucleon and hyperon-hyperon interactions when studying the properties of neutron star matter via many-body calculations. It is gratifying to see that the study of the hyperon-nucleon interaction has been reignited at RHIC through the measurements of \(\Lambda\) hyperon and hypernuclei directed flows [39]. The nuclear EoS affects the fraction of strange baryons in dense, neutron star matter, while the strange composition in turn alters the stiffness of neutron star matter [6]. The interplay of the EoS and the fraction of strangeness complicates studies of the properties and structure of dense, neutron star matter. The present research on strangeness production in neutron star matter does not necessarily represent the actual situation inside neutron stars, but it provides new insights into the strange components in neutron stars. In summary, based on a relativistic transport model for heavy-ion collisions, the fractions of non-nucleon particles, especially strange baryons, in dense, neutron star matter are studied via box calculations. It is found that the fractions of strange baryons are not affected by the asymmetry between neutron and proton numbers in dense matter, but the employed equation of state evidently affects the fractions of strange baryons. There exists a small fraction of strangeness in twice saturation density matter. For five times saturation density matter, the fraction of strange baryons can be as high as 25-50%, depending on the equation of state used. The large fraction of strange baryons in dense matter calls for clear hyperon-nucleon and hyperon-hyperon interactions so as to determine the bulk properties of neutron star matter. This work is supported by the National Natural Science Foundation of China under Grant Nos. 12275322, 12335008 and the Strategic Priority Research Program of Chinese Academy of Sciences with Grant No. XDB34030000.
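The quoted fractions are simple number ratios over the final particle content of the box; a minimal bookkeeping sketch, with purely illustrative yields (not values from the simulations):

```python
# Strange-baryon fraction: strange baryon number over total baryon number.
# Yields per initial baryon below are illustrative placeholders only.
yields = {"N": 0.70, "Delta": 0.05, "Lambda": 0.12, "Sigma": 0.08, "Xi": 0.05}
strange_baryons = {"Lambda", "Sigma", "Xi", "Omega"}

n_baryon = sum(yields.values())  # mesons (pi, K) carry no baryon number
n_strange = sum(v for s, v in yields.items() if s in strange_baryons)
print(f"strange baryon fraction = {n_strange / n_baryon:.0%}")  # -> 25%
```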
2306.10384
Solar-type eclipsing binary KIC 4832197: Physical properties and intrinsic variability of the components
Comprehensive analysis of optical spectroscopy and space photometry of the solar-type eclipsing binary system KIC\,4832197 is presented. The system is composed of F7V + F9V components with masses of $M_{1}=1.16\pm0.12~M_{\odot}$, $M_{2}=1.07\pm0.10~M_{\odot}$ and radii of $R_{1}=1.26\pm0.04~R_{\odot}$, $R_{2}=1.03\pm0.03~R_{\odot}$. The position of the components on the $Log~T_{eff}-Log~L/L_{\odot}$ plane suggests an age of $2.8\pm0.8~Gyr$ for the system. Inspection of the out-of-eclipse brightness in time reveals a wave-like variability pattern, whose amplitude and shape change quickly, on the order of days. Frequency analysis of this variability results in two significant peaks in the amplitude spectrum, which are interpreted as rotational modulation by spots on the components. Assuming both spots are on the same component, a lower limit for the differential rotation coefficient is computed as $k=0.12$, which is weaker than the solar value of $k_{\odot} = 0.189$.
Orkun Özdarcan, Hasan Ali Dal, Ezgi Yoldaş
2023-06-17T15:57:35Z
http://arxiv.org/abs/2306.10384v1
# Solar-Type Eclipsing Binary KIC 4832197: Physical Properties and Intrinsic Variability of the Components ###### Abstract Comprehensive analysis of optical spectroscopy and space photometry of the solar-type eclipsing binary system KIC 4832197 is presented. The system is composed of F7V + F9V components with masses of \(M_{1}=1.16\pm 0.12\) M\({}_{\odot}\), \(M_{2}=1.07\pm 0.10\) M\({}_{\odot}\) and radii of \(R_{1}=1.26\pm 0.04\) R\({}_{\odot}\), \(R_{2}=1.03\pm 0.03\) R\({}_{\odot}\). The position of the components on the \(Log\)\(T_{eff}-Log\)\(L/L_{\odot}\) plane suggests an age of \(2.8\pm 0.8\)\(Gyr\) for the system. Inspection of the out-of-eclipse brightness in time reveals a wave-like variability pattern, whose amplitude and shape change quickly, on the order of days. Frequency analysis of this variability results in two significant peaks in the amplitude spectrum, which are interpreted as rotational modulation by spots on the components. Assuming both spots are on the same component, a lower limit for the differential rotation coefficient is computed as \(k=0.12\), which is weaker than the solar value of \(k_{\odot}=0.189\). (stars:) binaries: eclipsing -- stars: fundamental parameters -- stars: individual (KIC 4832197) -- stars: activity ## 1 Introduction Observed rotational modulation of brightness in the light curve of a solar-type star is interpreted as strong photometric evidence of cool star spots on the stellar surface, which co-rotate with the surface. These spots might emerge at various locations on the surface of the star as a manifestation of the magnetic activity that is commonly observed among solar-type stars. Such a light curve allows one to trace the temporal and spatial evolution of spots, as well as to determine the photometric period of the star. Finding the photometric period of a single star is crucial because that period can be considered the rotation period of the star. In the case of binary stars, it is possible to determine the photometric period and the orbital period separately. If the components of the binary system are solar-type stars, then comparison between the photometric period and the orbital period provides hints on surface differential rotation, which is one of the key parameters that drive the dynamo mechanism in stellar interiors. Until the era of very high precision space photometry, photometric studies of the magnetic activity of solar-type stars had relied on long-term time-series photometry obtained from dedicated ground-based telescopes (Henry & Eaton 1995; Strassmeier et al. 1997; Rodono et al. 2001) or all-sky surveys (Pojmanski 1997; Kochanek et al. 2017). These sources provided photometric data with an uncertainty of a few percent of a magnitude (rarely at the millimag level; e.g. Henry & Eaton 1995) and made it possible to trace the short- and long-term magnetic activity behaviour of solar-type stars in terms of mean brightness, light curve amplitude and photometric period. With the era of groundbreaking space photometry, astronomers gained access to extremely high precision photometric data obtained over a wide wavelength range. In particular, the \(Kepler\) space telescope (Borucki et al. 2010; Howell et al. 2014) reached a photometric precision down to a few tens of parts-per-million. Such precision enabled the detection not only of planetary transits in a light curve, which is the primary mission of the \(Kepler\) space telescope, but also of sub-millimag amplitude variability of stars.
In the case of magnetic activity of solar-type stars, such precision allowed the detection of very small amplitude rotational modulation of brightness, which could not be distinguished in ground-based photometry due to the typical observational scatter of a few percent of a magnitude. In addition to its very high precision, \(Kepler\) photometry spans over four years without any considerable time gap, which enables one to trace low amplitude photometric signs of magnetic activity of solar-type stars. All these properties encouraged researchers to carry out more detailed and comprehensive photometric studies of stellar flares and differential rotation on large samples of stars (Balona 2015; Reinhold & Gizon 2015) or on individual targets (particularly eclipsing binaries; Yoldas & Dal 2021; Ozdarcan et al. 2018a). In some cases, \(Kepler\) photometry reveals the effects of two separate variability mechanisms (e.g. pulsation and cool spot activity) in an eclipsing binary system (see, e.g., Ozdarcan & Dal 2017). In this study, we present a comprehensive analysis of the solar-type eclipsing binary system KIC 4832197. Our analysis is based on ground-based medium resolution optical spectroscopy and \(Kepler\) photometry. KIC 4832197 attracts attention with the very shallow eclipse depths in its light curve and a remarkable out-of-eclipse variability with an amplitude that is comparable to the eclipse depths. Neither the eclipses nor the out-of-eclipse variability of the system had been discovered before the advent of very high precision \(Kepler\) photometry. The system was included in the planet candidate catalogue due to its shallow eclipse depths (Coughlin et al. 2016). However, no confirmed exoplanet in this system has been reported so far. KIC 4832197 appeared as a late A spectral type star according to Tycho-2 measurements (\(B-V=0\fm 202\pm 0\fm 111\), Hog et al. 2000). On the other hand, more recent and precise broad-band \(UBV\) colours and magnitudes of the system are \(V=11\fm 673\pm 0\fm 018\), \(B-V=0\fm 496\pm 0\fm 030\) and \(U-B=-0\fm 002\pm 0\fm 032\) (Everett et al. 2012), which indicate an F7 spectral type for the system (Gray 2005). The remarkable out-of-eclipse variability may indicate cool spot activity or pulsations on one or both components. Furthermore, the shallow eclipses may indicate a possible third light, which may reduce the eclipse amplitudes in the light curve depending on its contribution to the total light of the system. All these properties of KIC 4832197 are promising not only for testing stellar evolution models, but also for further studies of the variability mechanism currently at work in one or both components. Analysis of very high precision \(Kepler\) photometry is excellent for such studies. In this context, we combine ground-based optical spectroscopy and space photometry to determine the physical properties of the system and its evolutionary status. We further analyse the brightness variability at out-of-eclipse phases, which can shed light on the variability mechanism currently at work on the components of KIC 4832197. We organize the remaining parts of our study as follows. We describe the observational data and reduction in the next section. Section 3 includes the light time variation analysis and the spectroscopic and photometric modelling of the system, comprising atmospheric parameter estimation for each component, the spectroscopic orbit of the system and the light curve analysis. In the last section, we summarize our findings and discuss them in the light of the analysis results.
## 2 Data ### Spectroscopy Medium resolution optical spectra of KIC 4832197 were obtained at the TUBITAK National Observatory (TNO) with the 1.5 m Russian-Turkish telescope and the Turkish Faint Object Spectrograph Camera (TFOSC2). An Andor DW436-BV 2048 \(\times\) 2048 pixel CCD camera with a pixel size of 13.5 \(\times\) 13.5 \(\mu m^{2}\) was used in the observations, which allows recording optical spectra between 3900 and 9100 Å in 11 echelle orders. This instrumental set-up provided an average resolution of \(R=\lambda/\Delta\lambda=2700\pm 500\) around the \(\lambda=5500\) Å wavelength region. Observations were carried out on eight nights between July 2014 and April 2017. Ten spectra were recorded in total. Footnote 2: https://tug.tubitak.gov.tr/en/teleskoplar/rtt150-telescope-0 Conventional procedures for reducing echelle spectrum images are followed. These steps are applied under the IRAF environment3 and start with removal of the bias level, continuing with division of the bias-removed object and calibration lamp images by the normalized flat-field image. Then, scattered light correction and cosmic ray removal steps are applied to the bias- and flat-field-corrected images. Finally, object and calibration lamp spectra are extracted from the echelle orders. Wavelength calibration of the extracted object spectra is done by using a Fe-Ar calibration lamp spectrum recorded on the same observing night. After completing the standard reduction steps, all object spectra are normalized to unity by applying 4th or 5th order cubic spline functions. Footnote 3: The Image Reduction and Analysis Facility is hosted by the National Optical Astronomy Observatories in Tucson, Arizona at URL iraf.noao.edu ### \(Kepler\) photometry Very high precision space photometry of KIC 4832197 was obtained by the \(Kepler\) spacecraft. The spacecraft recorded images with a 6.02 second exposure time and a 0.52 second read-out time in a broad wavelength range between 4100 Å and 9100 Å (Gilliland et al., 2010). The broad wavelength range allows more photons to be recorded in a single exposure, which increases precision, at the expense of losing colour information. From the recorded images, two data sets were created, based on two separate integration times: 58.9 seconds (short cadence data) and 29.4 minutes (long cadence data). These integrations were grouped into separate data sets, where each set covered approximately three months and is called a quarter, except the first quarter, which covered the ten days of the commissioning phase and is called quarter zero (Q0). During the operation of the spacecraft over 4 years (between 2009 and 2013), photometric data were collected for eighteen quarters in total (from Q0 to Q17). Continuous long cadence photometry obtained in each quarter is available for the majority of \(Kepler\) targets, including KIC 4832197. Long cadence photometry of KIC 4832197 for each quarter is obtained from the Mikulski Archive for Space Telescopes (MAST). Prior to the analyses, it is necessary to remove instrumental effects from the long cadence data. For each quarter, simple aperture photometry (SAP) fluxes are considered. These fluxes are de-trended as described in Slawson et al. (2011) and Prsa et al. (2011). De-trended fluxes are then normalized to unity. The analyses and modelling process are based on these normalized fluxes. We show the long cadence light curve of KIC 4832197 in Figure 1. Figure 1: In panel a, the long cadence light curve of KIC 4832197 is shown.
In panels b and c, different portions of the long cadence light curve are shown, where the shape of the light curve remarkably changes. ## 3 Analysis ### Light time variations The first step of the analysis is to determine the mid-eclipse times of KIC 4832197 for the primary eclipses. In principle, mid-eclipse times can be determined straightforwardly by applying the Kwee-van Woerden method (Kwee & van Woerden, 1956) to observations close to the mid-eclipse time. An alternative way is to fit a function to the observational data around the estimated mid-eclipse time and determine the extremum point of this function, which corresponds to the mid-eclipse time. These methods may work flawlessly for the light curve of an ordinary eclipsing binary, which shows symmetric eclipses with respect to the mid-eclipse time and exhibits flat or slightly distorted maxima at out-of-eclipse phases. Such light curves indicate the absence of intrinsic variability for either component. Looking at Figure 1, one may easily notice the variable nature of the light curve. Because of the relatively short orbital period and the integration time of the long cadence data, only a few data points can be found around the expected mid-eclipse time for a given orbital cycle. The combination of these two effects gives a complex shape to the light curve and makes the methods mentioned above unsuitable for precise determination of mid-eclipse times. In such cases, one of the most reliable ways is to find a best-fitting light curve model and then adjust only the mid-eclipse time of the primary minimum in the model for each cycle. We follow exactly this approach for precise determination of mid-eclipse times from the long cadence data. This is an iterative process, which starts by preparing a phase-folded light curve with initial light elements, i.e., an ephemeris reference time (\(T_{0}\)) and an orbital period (\(P\)), and then continues by finding the best-fitting light curve model. In the case of KIC 4832197, initial \(T_{0}\) and \(P\) values are adopted from the Kepler Eclipsing Binary Catalogue4 (Prsa et al., 2011; Slawson et al., 2011), given in Equation 1: Footnote 4: http://keplerebs.villanova.edu/ \[T_{0}(\mathrm{BJD})=2,454,954.965798+1\fd 8954655\ \times\ E. \tag{1}\] After finding the best-fitting light curve model, \(T_{0}\) is adjusted for each orbital cycle separately, keeping all other model parameters fixed. In this step, the time-based long cadence data are considered instead of the phase-folded light curve. Thus, the mid-eclipse time of each orbital cycle can be determined precisely, together with formal errors. Practical application of this method is done with the Wilson-Devinney code (see later sections for details). After determination of the mid-eclipse times, a simple eclipse time variation (ETV) diagram is constructed, and linear corrections are determined and applied to both \(T_{0}\) and \(P\). Then the whole process is repeated using the corrected \(T_{0}\) and \(P\). In most cases, two iterations are quite enough to obtain self-consistent \(T_{0}\), \(P\) and light curve model parameters. The converged solution leads to the corrected light elements given in Equation 2: \[T_{0}(\mathrm{BJD})=2,454,954.9661(2)+1\fd 8954650(4)\ \times\ E. \tag{2}\] The numbers in parentheses show the statistical uncertainty in the last digit of the corresponding parameter. These uncertainties are computed from the linear least squares fit. The \(T_{0}\) and \(P\) values given in Equation 2 are adopted for further analyses and kept fixed.
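The linear correction applied to the light elements amounts to an ordinary least-squares fit of \(T_{min}=T_{0}+P\times E\) to the measured mid-eclipse times; a minimal sketch, with hypothetical timings scattered about the Equation 2 elements at the \(0\fd 003\) level seen in the ETV diagram:

```python
import numpy as np

def refine_ephemeris(t_min, t0_init, p_init):
    """Linear least-squares correction of the light elements.
    Cycle numbers E are the nearest integers under the initial elements."""
    cycles = np.rint((t_min - t0_init) / p_init)
    p, t0 = np.polyfit(cycles, t_min, 1)  # T_min = P*E + T_0
    return t0, p

# Hypothetical mid-eclipse times for illustration only:
rng = np.random.default_rng(1)
e = np.arange(0, 700, 7)
t_obs = 2454954.9661 + 1.8954650 * e + rng.normal(0.0, 0.003, e.size)

t0, p = refine_ephemeris(t_obs, 2454954.965798, 1.8954655)
print(f"T0 = {t0:.4f} BJD, P = {p:.7f} d")
```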
The resulting eclipse time variation diagram is shown in Figure 2. In the figure, an irregular undulating variation is noticeable among the scatter, with an approximate amplitude of \(0\fd 003\). Since that pattern does not repeat itself strictly during the four years, it is unlikely to be attributable to a possible third body. A more likely explanation of this pattern is the out-of-eclipse variability of the system, which can be noticed in panels b and c of Figure 1. Figure 2: ETV diagram for KIC 4832197. ### Radial velocities and spectroscopic orbit The next step of our analysis is to determine the radial velocities of each component from each observed spectrum and to model the spectroscopic orbit of the system. In order to determine the radial velocities of the components, we use the optical spectrum of HD 184499 (\(T_{eff}=5743\) K, log \(g=4.07\); Prugniel et al. 2011) as a template spectrum, which was recorded with the same instrumental set-up on the night of 20th August 2014. Then, each observed spectrum of KIC 4832197 was cross-correlated with the spectrum of HD 184499, following the method proposed by Tonry & Davis (1979). Practical application of this method was done with the \(fxcor\) task (Fitzpatrick 1993) under the IRAF environment. All clear absorption lines, except strongly blended or very broad spectral lines, were considered in the 5th and 6th echelle orders, which cover the wavelength range between 4900 Å and 5700 Å. In this wavelength range, it was possible to detect strong signals of both components. We show the cross-correlation functions of two spectra of KIC 4832197 recorded at two separate orbital quadratures in Figure 3. The determined heliocentric radial velocities are tabulated in Table 1. Preliminary inspection of the phase-folded light curve shows that the mid-primary and the mid-secondary eclipses occur precisely at 0.0 and 0.5 orbital phases, respectively. This finding strongly suggests a circular orbit. Therefore, the spectroscopic orbit of the system is determined under the zero eccentricity (i.e. \(e=0\)) assumption. In this case, the longitude of periastron (\(\omega\)) is undefined, thus the \(T_{0}\) value found in the light time variation analysis step is adopted instead of the periastron passage time. The \(T_{0}\) and \(P\) values are kept fixed during the modelling, while the radial velocity semi-amplitudes of the components (\(K_{1}\) and \(K_{2}\)) and the center-of-mass velocity of the system (\(V_{\gamma}\)) are adjusted. Application of the linear least squares fitting method to the observed radial velocities results in the best-fitting spectroscopic orbit parameters tabulated in Table 2. The agreement between the observed radial velocities and the best-fitting model is shown in Figure 4. Figure 3: Cross-correlation functions of two observed spectra recorded around orbital quadratures. P and S denote the primary and the secondary component, \(\phi\) shows the orbital phase.
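Under the \(e=0\) assumption, the fitted model is just a pair of anti-phased sine curves around \(V_{\gamma}\); a minimal sketch of this model, spot-checked against the first entry of Table 1 (the sign convention below is an assumption fixed by the quadrature measurements):

```python
import numpy as np

T0, P = 2454954.9661, 1.8954650  # light elements from Equation 2

def rv_model(hjd, v_gamma, k1, k2):
    """Circular-orbit radial velocities of both components (km/s), with the
    primary eclipse at phase 0."""
    phi = ((hjd - T0) / P) % 1.0
    v1 = v_gamma - k1 * np.sin(2 * np.pi * phi)
    v2 = v_gamma + k2 * np.sin(2 * np.pi * phi)
    return v1, v2

# Table 2 parameters at HJD 2456842.4581 (phase ~0.794):
v1, v2 = rv_model(np.array([2456842.4581]), -41.0, 104.0, 113.0)
print(f"V1 = {v1[0]:.0f} km/s (observed 61), V2 = {v2[0]:.0f} km/s (observed -150)")
```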
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline HJD & Orbital & Exposure & SNR & \multicolumn{2}{c}{Primary} & \multicolumn{2}{c}{Secondary} \\ (2400000+) & Phase & time (s) & & V\({}_{r}\) & \(\sigma\) & V\({}_{r}\) & \(\sigma\) \\ \hline 56842.4581 & 0.7936 & 3200 & 120 & 61 & 10 & -150 & 11 \\ 56842.4963 & 0.8138 & 3200 & 95 & 58 & 9 & -146 & 10 \\ 56843.3937 & 0.2872 & 3200 & 90 & -141 & 10 & 70 & 15 \\ 56887.3009 & 0.4516 & 3200 & 140 & -79 & 9 & -8 & 11 \\ 56887.5063 & 0.5599 & 3200 & 95 & 7 & 11 & -74 & 13 \\ 56888.3307 & 0.9949 & 3200 & 120 & -43 & 6 & — & — \\ 57592.3910 & 0.4395 & 3600 & 90 & -81 & 9 & -1 & 9 \\ 57600.4912 & 0.7130 & 3600 & 110 & 59 & 11 & -154 & 13 \\ 57617.3447 & 0.6045 & 3600 & 90 & 17 & 10 & -106 & 17 \\ 57853.5170 & 0.2031 & 2700 & 100 & -135 & 13 & 72 & 16 \\ \hline \hline \end{tabular} \end{table} Table 1: SUMMARY OF SPECTROSCOPIC OBSERVATIONS ALONG WITH MEASURED RADIAL VELOCITIES AND THEIR CORRESPONDING STANDARD ERRORS (\(\sigma\)) IN km s\({}^{-1}\). SNR DENOTES THE SIGNAL-TO-NOISE RATIO AROUND 5500 Å WAVELENGTH. \begin{table} \begin{tabular}{c c} \hline \hline Parameter & Value \\ \hline \(P_{\rm orb}\) (day) & 1.8954655 (fixed) \\ \(T_{0}\) (HJD 2400000+) & 54954.965798 (fixed) \\ \(V_{\gamma}\) (km s\({}^{-1}\)) & \(-\)41\(\pm\)2 \\ \(K_{1}\) (km s\({}^{-1}\)) & 104\(\pm\)4 \\ \(K_{2}\) (km s\({}^{-1}\)) & 113\(\pm\)5 \\ \(e\) & 0 (fixed) \\ \(a\sin i\) (R\({}_{\odot}\)) & 8.15\(\pm\)0.26 \\ \(M\sin^{3}i\) (M\({}_{\odot}\)) & 2.03\(\pm\)0.14 \\ Mass ratio (\(q=M_{2}/M_{1}\)) & 0.92\(\pm\)0.06 \\ rms1 (km s\({}^{-1}\)) & 5 \\ rms2 (km s\({}^{-1}\)) & 4 \\ \hline \hline \end{tabular} \end{table} Table 2: BEST-FITTING SPECTROSCOPIC ORBIT PARAMETERS OF KIC 4832197. \(M_{1}\) AND \(M_{2}\) DENOTE THE MASSES OF THE PRIMARY AND THE SECONDARY COMPONENT, RESPECTIVELY, WHILE \(M\) SHOWS THE TOTAL MASS OF THE SYSTEM AND \(a\sin i\) IS THE PROJECTED SEMI-MAJOR AXIS, DEPENDING ON THE ORBITAL INCLINATION \(i\). ### Spectral type Medium resolution TFOSC spectra allow determination of the spectral types and global atmospheric parameters of each component of KIC 4832197. A spectrum recorded around either orbital quadrature is fairly suitable for atmospheric parameter determination of the individual components, because the spectral lines of the two components are then sufficiently separated from each other along the wavelength axis and can be distinguished. This is exactly the case in the TFOSC spectrum recorded on the night of 30th July 2016, corresponding to orbital phase 0.7130 (see the lower panel of Figure 3). Spectral types and atmospheric parameters are determined from this spectrum. Determination of spectral types and atmospheric parameters is an iterative process, as in the case of the determination of mid-eclipse times described in Section 3.1. During the analysis, we fix the micro-turbulence velocity of each component to 2 km s\({}^{-1}\). When analysing high resolution optical spectra, it is actually possible to determine the micro-turbulence velocity and the [Fe/H] abundance simultaneously via the Blackwell diagram (Blackwell & Shallis, 1979). However, due to the insufficient resolution of the TFOSC spectra, we are unable to do so. Instead, we implicitly assume a 2 km s\({}^{-1}\) micro-turbulence velocity and fix this value during the analysis. Although there is no strict relation in the literature for estimating micro-turbulence velocities reliably, limited observational studies indicate that a micro-turbulence velocity of 2 km s\({}^{-1}\) is appropriate for solar-type stars (Landstreet et al., 2009). It is possible to apply a constraint on the logarithmic gravity (log \(g\)) values of the components.
To do this, the spectroscopic orbit model parameters are combined with the preliminary light curve solution parameters. This step allows the computation of the masses and radii of the components. Then, log \(g\) values are computed from these masses and radii. The computed log \(g\) values are fixed in the spectral type analysis, and the effective temperatures of the components (\(T_{eff1}\) and \(T_{eff2}\)) are estimated together with the overall metallicity (\([Fe/H]\)). Then, the estimated temperature of the primary component and the overall metallicity are adopted as fixed parameters, and the combined light curve and radial velocity modelling is repeated. Two or three iterations are sufficient to reach a self-consistent solution.

Figure 4: Phase-folded radial velocities of the primary (blue) and the secondary (red) components are shown as filled circles. The best-fitting spectroscopic orbit model is over-plotted for each component as a continuous line.

Effective temperatures and the overall metallicity are estimated by the spectrum synthesizing method. ATLAS9 model atmospheres (Castelli & Kurucz, 2004), which adopt the plane-parallel atmosphere assumption, are used for synthetic spectrum computation. A grid of synthetic spectra is computed for the temperature range between 6000 K and 7000 K with a step of 100 K. This computation is repeated for metallicities between solar (\([Fe/H]=0\)) and sub-solar (\([Fe/H]=-1.0\)), with a step of 0.25. The practical computation of synthetic spectra is done with a Python framework, the \(iSpec\) software (Blanco-Cuaresma et al., 2014). Among the various radiative transfer codes provided in \(iSpec\), the Spectrum code (Gray & Corbally, 1994) is adopted. \(iSpec\) also includes a comprehensive line list compiled from the third version of the Vienna atomic line database (VALD3, Ryabchikova et al., 2015). Each computed synthetic spectrum is convolved with a Gaussian line spread function to reduce its spectral resolution to that of the TFOSC spectra. After that point, a trial synthetic spectrum is chosen for each component among the computed spectra, and the composite spectrum of the system is computed from them. In order to compute the composite spectrum of the system, each individual spectrum is shifted along the wavelength axis with respect to the radial velocity of the corresponding component and scaled with respect to the square of the ratio of the radii of the components. Self-consistent effective temperatures and an overall metallicity are determined in the third iteration, which gives \(T_{eff1}=6300\) K, \(T_{eff2}=6100\) K and \([\mathrm{Fe/H}]=-0.25\). Estimated uncertainties are around 200 K for the temperatures and 0.25 for the metallicity. The adopted ratio of the radii is \(R_{1}/R_{2}=1.27\), and the log \(g\) values are log \(g_{1}=4.30\) and log \(g_{2}=4.47\) (see the next section for details). These results indicate F7V and F9V spectral types for the primary and the secondary components (Gray, 2005), respectively. The observed TFOSC spectrum and the best-fitting composite spectrum are shown in Figure 5.

### Light curve analysis and evolutionary status

The long cadence photometric data of KIC 4832197 include 65190 individual data points in total, which means a large amount of CPU time is required to find a best-fitting light curve model. Therefore, an average phase-folded light curve is prepared by computing the average in every 0.01 phase bin at out-of-eclipse phases. Around the mid-eclipse phases, the phase step is adopted as 0.0005 in order to detect the ingress and egress phases of the eclipses precisely.
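A minimal numpy sketch of this phase-folding and adaptive binning step is given below. The time and flux arrays are synthetic stand-ins for the detrended Kepler data, and the eclipse windows used for the fine grid are an illustrative choice.

```python
import numpy as np

# Synthetic stand-ins for the detrended long cadence data (placeholders)
rng = np.random.default_rng(1)
t = 54953.0 + np.sort(rng.uniform(0.0, 1470.0, 65190))  # BJD-like times
flux = np.ones_like(t)                                  # normalized flux

P, T0 = 1.8954655, 54954.965798      # adopted linear ephemeris
phase = ((t - T0) / P) % 1.0         # fold onto [0, 1)

# 0.01 phase bins out of eclipse, 0.0005 bins around the mid-eclipses
edges = np.unique(np.concatenate([
    np.arange(0.0, 1.0, 0.01),
    np.arange(0.45, 0.55, 0.0005),   # secondary eclipse window
    np.arange(0.95, 1.0, 0.0005),    # primary eclipse window
    np.arange(0.0, 0.05, 0.0005)]))

idx = np.digitize(phase, edges)
bins = np.unique(idx)
mean_phase = np.array([phase[idx == k].mean() for k in bins])
mean_flux = np.array([flux[idx == k].mean() for k in bins])
weights = np.array([(idx == k).sum() for k in bins])  # points per normal point
```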
The average phase-folded light curve includes 556 normal points in total, which greatly reduces the CPU time required for light curve modelling. Each normal point in the phase-folded average light curve is weighted by the total number of individual data points that produce it. Light curve modelling is done with the 2015 version of the well-known Wilson-Devinney (WD) eclipsing binary light curve modelling code (Wilson & Devinney, 1971; Wilson & Van Hamme, 2014). Practical application of the WD code is done via the user-friendly Python GUI \(PyWD2015\) (Guzel & Ozdarcan, 2020). \(PyWD2015\) allows users to use almost all features of the 2015 version of the WD code, except subsets. Furthermore, in addition to the capabilities of the WD code, \(PyWD2015\) includes many useful features and small tools to speed up the modelling process and to trace successive iterations visually.

The two most critical parameters for an accurate light curve model are \(T_{eff1}\) and \(q\). These have already been determined in the previous sections and are kept fixed during the modelling. The effective temperatures of the components clearly indicate that both components possess a convective outer envelope, thus the gravity darkening (\(g\)) and albedo (\(A\)) values of each component are set to 0.32 (Lucy, 1967) and 0.5 (Rucinski, 1969), respectively. The low resolution of the TFOSC spectra does not allow reliable determination of the rotational velocities of the components, so, considering the non-eccentric orbit of the system, synchronous rotation is implicitly assumed for both components by fixing the rotation parameter (\(F\)) of each component to unity. The logarithmic limb darkening law (Klinglesmith & Sobieski, 1970) is adopted for both components, and the limb darkening coefficients are adopted from van Hamme (1993) for the Kepler passband. The adjustable parameters are the inclination of the orbit (\(i\)), the effective temperature of the secondary component (\(T_{eff2}\)), the dimensionless potentials of the components (\(\Omega_{1}\), \(\Omega_{2}\)) and the luminosity of the primary component (\(L_{1}\)). Coarse and fine grid numbers for the stellar surface are set to 60. Modelling attempts with a possible third light contribution do not yield a reasonable model and mostly indicate an extremely small or negative third light contribution; therefore, the modelling process is carried out with no third light contribution. Convergence is not very quick due to the shallow eclipse depths; however, successive iterations converge slowly but firmly to a global minimum. The model parameters of the best-fitting light curve are tabulated in Table 3. In Figure 6, the phase-folded light curve and the best-fitting light curve model are shown. A separate close-up view of each eclipse is shown in Figure 7.

Figure 5: Observed (blue) and computed composite spectrum (red) for KIC 4832197 in three different portions of the optical wavelengths. Residuals from the best-fitting model are plotted shifted upwards by 0.5 for better viewing.

Table 3: BEST-FITTING LIGHT CURVE MODEL PARAMETERS FOR KIC 4832197. \(\langle R_{1}\rangle\) AND \(\langle R_{2}\rangle\) SHOW THE MEAN FRACTIONAL RADII OF THE PRIMARY AND THE SECONDARY COMPONENTS, RESPECTIVELY. UNCERTAINTIES OF THE ADJUSTED PARAMETERS ARE INTERNAL TO THE WILSON-DEVINNEY CODE AND ARE GIVEN IN PARENTHESES FOR THE LAST DIGITS, EXCEPT FOR \(T_{2}\). THE UNCERTAINTY OF \(T_{2}\) IS ASSUMED TO BE THE SAME AS THAT OF \(T_{1}\).
\begin{tabular}{c c}
\hline
Parameter & Value \\
\hline
\(q\) & 0.92 (fixed) \\
\(T_{1}(K)\) & 6500 (fixed) \\
\(g_{1}\) = \(g_{2}\) & 0.32 \\
\(A_{1}\) = \(A_{2}\) & 0.5 \\
\(F_{1}\) = \(F_{2}\) & 1.0 \\
\(i\) (\({}^{\circ}\)) & 75.67(5) \\
\(T_{2}(K)\) & 6060(200) \\
\(\Omega_{1}\) & 7.605(22) \\
\(\Omega_{2}\) & 8.520(44) \\
\(L_{1}/(L_{1}\)+\(L_{2})\) & 0.639(3) \\
\(x_{1},x_{2}\) & 0.685, 0.700 \\
\(y_{1},y_{2}\) & 0.270, 0.259 \\
\(\langle r_{1}\rangle,\langle r_{2}\rangle\) & 0.1498(5), 0.1219(8) \\
Model rms & 2.8 \(\times\) 10\({}^{-4}\) \\
\hline
\end{tabular}

Combining the best-fitting spectroscopic orbit and light curve model parameters allows computation of the absolute dimensions of the system as well as its distance. Both the \(UBV\) colours (Everett et al. 2012) and the optical spectrum of the system indicate an F7 spectral type, which means reddening and interstellar extinction should be negligible, if present at all. A trial plotting of the system on the \(UBV\) colour-colour diagram gives \(E(B-V)=0\fm 009\), which is well below the observational error of \(B-V\), hence it may be ignored for the distance computation. Considering \(P\), \(K_{1}\), \(K_{2}\), \(i\), \(\langle r_{1}\rangle\) and \(\langle r_{2}\rangle\) along with the published \(UBV\) magnitudes, the absolute dimensions of the system are computed with the JKTABSDIM code5 (Southworth et al., 2005) and tabulated in Table 4. The calibrations given in Kervella et al. (2004) are adopted for the bolometric correction and distance computation.

Footnote 5: [https://www.astro.keele.ac.uk/jkt/codes/jktabsdim.html](https://www.astro.keele.ac.uk/jkt/codes/jktabsdim.html)

Figure 6: Phase-folded long cadence Kepler observations and the best-fitting light curve model of KIC 4832197, shown along with the residuals from the model.

Figure 7: Close-up view of the observations and the best-fitting light curve model around the eclipse phases.

Comparing the locations of the components with stellar evolutionary tracks on the \(Log~{}T_{eff}-Log~{}L/L_{\odot}\) plane, it is possible to estimate the age of the system. In Figure 8, the components of KIC 4832197 are plotted along with stellar evolutionary tracks computed by the PAdova and TRieste Stellar Evolution Code (PARSEC, Bressan et al., 2012). The best-matching isochrone to the positions of the components has log(age)=9.45, which indicates an age of 2.8 Gyr for the system. The estimated uncertainty of this age is approximately 0.8 Gyr. The primary source of this uncertainty is the uncertainty of the estimated effective temperatures and its propagation to the computed luminosities. Both components are still on the main sequence; however, the primary component is almost half-way through its main sequence life.

Figure 8: Positions of the primary and the secondary components on the \(Log~{}T_{eff}-Log~{}L/L_{\odot}\) plane. Dashed lines show evolutionary tracks for different masses. Each track is labelled with its corresponding mass. The continuous curve shows the isochrone for log(age)=9.45. The tracks are for slightly sub-solar metallicity with \(Y=0.273\) and \(Z=0.014\).

KIC 4832197 exhibits clear brightness variability at out-of-eclipse phases (see Figure 1, lower panels). For further investigation of this variability, the best-fitting light curve model is subtracted from the observations and the residuals from the model are obtained. In principle, these residuals do not include variability due to eclipses and proximity effects of the components, and give hints on the intrinsic variability of one or both components.
Residuals from the best-fitting light curve model are obtained with PyWD2015 by inputting all long cadence data into the WD code and making a single differential corrections iteration. The obtained residuals are plotted in Figure 9. Quick changes in the shape and amplitude of the residual light curve are remarkable. In order to see the reflection of this variability in the frequency-amplitude plane, the Lomb-Scargle periodogram (Lomb 1976; Scargle 1982) is applied to the whole set of long cadence residuals via the Python script pysca (Herzberg & Glogowski 2014). pysca allows quick and practical computation of the amplitude spectrum and extracts significant frequencies above a defined noise level. Adopting 24.498 cycles/day (c/d) as the Nyquist frequency, a single pysca run results in the amplitude spectrum given in Figure 10. The adopted Nyquist frequency corresponds to twice (\(\approx 59\) min) the integration time of a single long cadence exposure. In the resulting amplitude spectrum, two dominant peaks are clearly seen around the orbital frequency, but no signal appears at the precise location of the orbital frequency. The frequency values of the two dominant peaks are tabulated in Table 5, together with the orbital frequency.

Figure 9: Residuals from the best-fitting light curve model are plotted against time (\(BJD\)). Panel \(a\) shows the whole long cadence residuals, while panels \(b\) and \(c\) show different portions of the whole set.

Figure 10: Amplitude spectrum of the light curve residuals. The uppermost panel shows the whole amplitude spectrum between 0 and 24.498 c/d. The middle panel focuses on 0 and 2 c/d where the dominant peaks appear. The lowermost panel shows a detailed view of the two most dominant peaks and the orbital frequency. In each panel, the orbital frequency is marked with a vertical dashed (red) line.

Figure 11: Close view of the amplitude spectrum of the light curve residuals for each quarter of Kepler data. Panels are focused on the frequency region around \(f_{orb}\), which is shown by a vertical (red) dashed line. Each panel is labelled with its corresponding quarter in the upper left corner.

Figure 12: Phase-folded light curve residuals with respect to the \(f_{1}\) and \(f_{2}\) frequencies. Each frequency is shown in the corresponding panel.

It is also possible to trace the behaviour of the detected frequency peaks in each Kepler quarter separately. A close view of the \(f_{1}\) and \(f_{2}\) peaks is shown in Figure 11 for each Kepler quarter. It is apparent that \(f_{1}\) appears strongly in most quarters, while \(f_{2}\) weakens or disappears in some quarters. Phase-folding the residuals with the frequencies of the two dominant peaks gives the residual light curves shown in Figure 12. Phase folding is done with respect to the reference ephemeris given in Equation 2 and the period values corresponding to the frequencies listed in Table 5. A wave-like light curve pattern can easily be noticed in both panels of the figure. This picture appears to be the reflection of the double-humped structure of the light curve (see Figure 1, panels \(b\) and \(c\)), which is frequently observed through the four years of long cadence data. The spectral types of the components estimated in Section 3.3 suggest that the \(f_{1}\) and \(f_{2}\) frequencies are likely the result of cool spot activity on one or both components.
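The amplitude spectrum can be reproduced with standard tools. Below is a minimal sketch using astropy's LombScargle in place of pysca; the residual time series is a synthetic stand-in containing a single signal near the orbital frequency.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic stand-in for the model residuals (placeholder data)
rng = np.random.default_rng(0)
time = np.sort(rng.uniform(0.0, 1470.0, 5000))          # days
residuals = 2e-3 * np.sin(2 * np.pi * 0.53 * time) \
            + 1e-4 * rng.standard_normal(time.size)

f_nyq = 24.498   # c/d; the period 1/f_nyq is twice the ~29.4 min cadence
freq, power = LombScargle(time, residuals).autopower(
    minimum_frequency=0.01, maximum_frequency=f_nyq, samples_per_peak=10)

f_orb = 1.0 / 1.8954655                                 # ~0.5276 c/d
f_peak = freq[np.argmax(power)]
print(f"f_peak = {f_peak:.4f} c/d vs f_orb = {f_orb:.4f} c/d")
```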
## 4 Summary and Discussion

Analyses of medium resolution ground-based optical spectroscopy and very high precision Kepler photometry show that KIC 4832197 is composed of F7V and F9V stars possessing slightly sub-solar metallicity ([Fe/H]=\(-0.25\pm 0.25\)). The components move around the center-of-mass of the system on a circular orbit with a period of \(1\fd 8954650\). The Kepler long cadence light curve of the system is dominated by a wave-like brightness variability whose amplitude is comparable to the eclipse depths. In some epochs, the amplitude of this variability exceeds the amplitude of the eclipses, which are very shallow and hardly exceed 0.01 in normalized flux units. The shallow eclipse depths could be the result of a possible third light contribution; however, the best-fitting light curve model indicates no third light contribution. The ETV diagram does not show any sign of a possible third body either, only a scatter of \(\pm\)5 minutes with some vaguely undulating patterns among this scatter. The remaining possible explanation is that the undulating pattern is the reflection of the variability observed at out-of-eclipse phases. Considering the spectral types of the components, such irregular patterns in ETV diagrams could be attributed to intrinsic photometric variability originating from the magnetic activity of one or both components (Balaji et al., 2015). Comparing the ETV diagrams of KIC 4832197 and the very active eclipsing binary KIC 12418816 (Dal & Ozdarcan, 2018, see Fig. 1 in their study), it is clearly seen that our target system does not exhibit such clear patterns, which indicates a much lower level of magnetic activity.

Combining the best-fitting spectroscopic orbit and light curve models, the absolute physical properties of the components are computed. Using these parameters, the components are plotted on the \(Log\)\(T_{eff}-Log\)\(L/L_{\odot}\) plane. The positions of the components on this plane suggest an age of 2.8\(\pm\)0.8 Gyr for the system. The primary component of KIC 4832197 appears to have burnt almost half of its main sequence fuel.

The \(UBV\) colours (Everett et al., 2012) of the system enable an estimation of the interstellar reddening. A trial plotting of the \(U-B\) and \(B-V\) colours of the system on the \(UBV\) colour-colour diagram yields \(E(B-V)=0\fm 009\). This indicates a distance of 452\(\pm\)40 pc. Considering the reported observational errors of the \(UBV\) colours and the small amount of \(E(B-V)\), interstellar reddening can be neglected. In this case, the distance becomes 459\(\pm\)40 pc. The two distance values agree within the computed statistical error. The computed error of the distance is dominated mainly by the estimated 200 K uncertainty of the effective temperatures of the components. The distance of the system based on the precise parallax measurement of Gaia (Gaia Collaboration et al., 2016, 2022; Babusiaux et al., 2022) is given as 449\(\pm\)2 pc, which is in good agreement with the distance computed in this study.

Removing the best-fitting light curve model from the long cadence Kepler light curve, a residual light curve is obtained, which shows clear brightness variability at out-of-eclipse phases. The residual light curve exhibits remarkable changes, which have a time scale of a few orbital cycles. These changes occur both in the amplitude and in the shape of the light curve. For instance, an asymmetric single-humped light curve can become a double-humped one with a noticeable amplitude change in a few days, and can then restore itself to another asymmetric light curve with a different amplitude.
Considering the spectral types of the components, the most likely explanation for the source of the out-of-eclipse brightness variability is cool spot activity on one or both components. Such spot activity is the result of magnetic activity in cool stars and causes emission features to appear in particular activity-sensitive spectral lines in the optical spectrum (e.g. H\(\alpha\), Ca ii H & K lines). However, no emission features are observed in either H\(\alpha\) or the Ca ii H & K lines in the TFOSC spectra of KIC 4832197. Nevertheless, several very weak stellar flares, which can be considered photometric evidence of cool spot activity, are detected in the long cadence Kepler light curve of the system. A similar situation was observed before for the eclipsing binary KIC 9451096 (Ozdarcan et al., 2018b), where no emission was detected in activity-sensitive lines but a significant number of flares was detected in the Kepler light curve. On the other hand, although the amplitude of the rotational modulation signal reaches up to 0.01 in normalized flux units, this is still a low amplitude brightness variability, so observing no emission in activity-sensitive spectral lines may not be unexpected.

Actually, the low level of magnetic activity is in agreement with the physical properties of the system. The masses of the components are slightly higher than that of the Sun, which means that both of them possess a shallower outer convective envelope compared to the Sun. Therefore, one may expect a decrease in the level of magnetic activity. Comparing KIC 4832197 with KIC 12418816 (Dal & Ozdarcan, 2018), which is a lower mass eclipsing binary with a shorter orbital period and a high level of magnetic activity, confirms this picture. The components of KIC 12418816 possess deeper convective envelopes compared to the components of our target. As a result of the combination of faster rotation and deeper convective envelopes, a high level of magnetic activity is not surprising in KIC 12418816. In the case of KIC 4832197, the situation is the opposite; hence a low level of magnetic activity is not unexpected for KIC 4832197.

Frequency analysis of the residual light curve shows two dominant frequency peaks around the orbital frequency (see the amplitude spectrum plotted in Figure 10). However, the lowermost panel of the figure clearly shows that almost no signal appears at the exact location of the orbital frequency. This means that the best-fitting light curve model precisely represents the binarity effects. As a result, removing the best-fitting light curve model from the observations perfectly removes the binarity effects from the long cadence light curve. Comparing the frequencies (\(f_{1}\), \(f_{2}\) and \(f_{orb}\)) and their statistical errors listed in Table 5, one can notice that the separation of the frequencies exceeds the statistical errors. No major splitting is observed for the \(f_{1}\) and \(f_{2}\) peaks, except for a remarkable side component of \(f_{1}\) located at a slightly higher frequency than \(f_{1}\). We interpret these frequencies as the result of two separate spots or spot groups located at different latitudes on the surface of one or both components. Looking at the lowermost panel of Figure 10, it is quickly seen that the \(f_{1}\) and \(f_{2}\) frequencies are smaller and larger than \(f_{orb}\), respectively.
Assuming that both components possess solar-type differential rotation on their surfaces, the spot (or spot group) causing the variability with frequency \(f_{1}\) is located at lower latitudes, which are expected to rotate faster than the latitude rotating with \(f_{orb}\) (often called the co-rotation latitude). In this case, the other spot (or spot group), which rotates with frequency \(f_{2}\), must be located at latitudes higher than the co-rotation latitude. These latitudes are expected to rotate more slowly than the co-rotation latitude. Further inspection of these frequencies reveals that \(f_{1}\) appears persistently in each separate Kepler quarter, while \(f_{2}\) weakens or disappears occasionally. Thus the spot or spot group related to the \(f_{1}\) frequency is more persistent and exhibits stronger activity compared to the spot or spot group related to the \(f_{2}\) frequency. Four years of persistence of the \(f_{1}\) frequency is not surprising in the scope of the solar-stellar connection, since similar persistent active longitudes have been reported previously for our Sun (Berdyugina & Usoskin 2003).

With the current data, it is difficult to distinguish which component possesses the spot(s) on its surface. However, we may make an implicit assumption by considering the two mechanisms that trigger magnetic activity in cool stars: the depth of the outer convective envelope and the rotation. Although the system is composed of two nearly similar stars, the secondary component is 500 K cooler than the primary, which means that the secondary star possesses a deeper convective envelope than the primary star. Considering that the axial rotation periods of the components are synchronized to the orbital period, this suggests that the secondary component may exhibit stronger magnetic activity than the primary component because of its deeper convective outer envelope. We may therefore assume that the spots originate on the cool secondary component.

Assuming that both \(f_{1}\) and \(f_{2}\) come from spots or spot groups on the cool secondary component, a lower limit for the differential rotation can be set. Following the procedure described in Hall & Busby (1990), we compute the periods corresponding to \(f_{1}\) and \(f_{2}\) and adopt them as the minimum (\(P_{min}\)) and maximum (\(P_{max}\)) periods. Under the solar-type differential rotation assumption, \(P_{min}\) can be adopted as the equatorial rotation period. Then, the relative surface shear can be computed via \((P_{max}-P_{min})/P_{min}=kf\), where \(k\) is the differential rotation coefficient and \(f\) is a constant depending on the range of spot forming latitudes. Assuming that \(f\) varies between 0.5 and 0.7, which corresponds to a 45-degree latitude range for spot forming, \(k\) varies between 0.14 and 0.10, which indicates an average of \(\bar{k}=0.12\). This is half of the solar differential rotation coefficient (\(k_{\odot}=0.189\), see Hall & Busby (1990) and references therein) but twice the differential rotation coefficient of the eclipsing binary KIC 9451096 (\(k=0.069\), Ozdarcan et al. 2018b). This finding appears contradictory to the theory of stellar magnetic activity, since KIC 9451096 possesses a higher level of magnetic activity but weaker differential rotation compared to our target system.
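The surface shear estimate reduces to two lines of arithmetic. The sketch below uses hypothetical rotation periods standing in for \(1/f_{1}\) and \(1/f_{2}\) (the Table 5 values, which are not reproduced here), chosen only to illustrate how the quoted range of \(k\) arises.

```python
# Hypothetical rotation periods near P_orb = 1.8954655 d, standing in for
# P_min = 1/f1 and P_max = 1/f2 from Table 5 (illustrative values only)
P_min, P_max = 1.83, 1.96                  # days

rel_shear = (P_max - P_min) / P_min        # equals k * f
for f in (0.5, 0.7):                       # spot-latitude range factor
    k = rel_shear / f
    print(f"f = {f:.1f} -> k = {k:.2f}")   # ~0.14 and ~0.10
```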
Increasing the number of precisely analysed solar-type eclipsing binaries will provide a global view of the photometric properties of magnetic activity in eclipsing binaries and of how differential rotation, one of the two key mechanisms responsible for magnetic activity in cool stars, is reflected in their light curves. Continuous photometric data provided by space telescopes are the key to achieving this purpose.

## Acknowledgments

We acknowledge the anonymous referee for his/her valuable comments that improved the quality of the paper. We thank TUBITAK for partial support in using RTT150 (Russian-Turkish 1.5-m telescope in Antalya) with project number 14BRTT150-667. This paper includes data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission Directorate.
2302.02277
SE(3) diffusion model with application to protein backbone generation
The design of novel protein structures remains a challenge in protein engineering for applications across biomedicine and chemistry. In this line of work, a diffusion model over rigid bodies in 3D (referred to as frames) has shown success in generating novel, functional protein backbones that have not been observed in nature. However, there exists no principled methodological framework for diffusion on SE(3), the space of orientation preserving rigid motions in R3, that operates on frames and confers the group invariance. We address these shortcomings by developing theoretical foundations of SE(3) invariant diffusion models on multiple frames followed by a novel framework, FrameDiff, for learning the SE(3) equivariant score over multiple frames. We apply FrameDiff on monomer backbone generation and find it can generate designable monomers up to 500 amino acids without relying on a pretrained protein structure prediction network that has been integral to previous methods. We find our samples are capable of generalizing beyond any known protein structure.
Jason Yim, Brian L. Trippe, Valentin De Bortoli, Emile Mathieu, Arnaud Doucet, Regina Barzilay, Tommi Jaakkola
2023-02-05T01:47:02Z
http://arxiv.org/abs/2302.02277v3
# \(\mathrm{SE}(3)\) diffusion model with application to protein backbone generation

###### Abstract

The design of novel protein structures remains a challenge in protein engineering for applications across biomedicine and chemistry. In this line of work, a diffusion model over rigid bodies in 3D (referred to as _frames_) has shown success in generating novel, functional protein backbones that have not been observed in nature. However, there exists no principled methodological framework for diffusion on \(\mathrm{SE}(3)\), the space of orientation preserving rigid motions in \(\mathbb{R}^{3}\), that operates on frames and confers the group invariance. We address these shortcomings by developing theoretical foundations of \(\mathrm{SE}(3)\) invariant diffusion models on multiple frames followed by a novel framework, \(\mathrm{FrameDiff}\), for learning the \(\mathrm{SE}(3)\) equivariant score over multiple frames. We apply \(\mathrm{FrameDiff}\) on monomer backbone generation and find it can generate designable monomers up to 500 amino acids without relying on a pretrained protein structure prediction network that has been integral to previous methods. We find our samples are capable of generalizing beyond any known protein structure.1

Footnote 1: Code: [https://github.com/jasonkyuym/se3_diffusion](https://github.com/jasonkyuym/se3_diffusion)

## 1 Introduction

The ability to engineer novel proteins holds promise in developing bio-therapeutics towards global health challenges such as SARS-COV-2 (Arunachalam et al., 2021) and cancer (Quijano-Rubio et al., 2020). Unfortunately, efforts to engineer proteins have required substantial domain knowledge and laborious experimental testing. To this end, protein engineering has benefited from advancements in deep learning by automating knowledge acquisition from data and improving efficiency in designing proteins (Ding et al., 2022). Generating a novel protein satisfying specified structural or functional properties is the task of _de novo_ protein design (Huang et al., 2016). In this work, we focus on generating protein backbones. A protein backbone consists of \(N\) residues, each with four heavy atoms rigidly connected via covalent bonds, \(\mathbb{N}-\mathbb{C}_{\alpha}-\mathbb{C}-\mathbb{O}\). Computationally designing novel backbones is technically challenging due to the coupling of structure and sequence: the atoms that comprise a protein structure must adhere to physical and chemical constraints while being "designable" in the sense that there exists a sequence of amino acids which folds to that structure. We approach this problem with diffusion generative modeling, which has shown promise in recent work (see Sec. 6). A main technical challenge is to combine expressive geometric deep learning methods that operate on protein structures with diffusion generative modeling. Because the \(\mathbb{N}-\mathbb{C}_{\alpha}-\mathbb{C}\) atoms for each residue may be described accurately as a frame (Fig. 1A), many successful computational methods for both protein structure prediction (Jumper et al., 2021) and design (Watson et al., 2022) represent backbone structures by an element of the Lie group \(\mathrm{SE}(3)^{N}\). Moreover, since the biochemical function of proteins is imparted by the relative geometries of the atoms (and so is invariant to rigid transformations), these methods typically utilize \(\mathrm{SE}(3)\) equivariant neural networks.2 While De Bortoli et al. (2022); Huang et al.
(2022) have extended diffusion modeling to Riemannian manifolds (such as \(\mathrm{SE}(3)\)), these works do not readily provide tractable training procedures or accommodate inclusion of geometric invariances.

Footnote 2: \(\mathrm{SE}(3)^{N}\) is the manifold of \(N\) frames while \(\mathrm{SE}(3)\) equivariance refers to the equivariance on global rotations and translations.

Modeling \(\mathrm{SE}(3)^{N}\) poses theoretical challenges, and current deep learning methods have outpaced theoretical foundations. Notably, Watson et al. (2022) demonstrated a diffusion model (RFdiffusion) that generates novel protein binders with high, experimentally verified affinities, but it relied on a heuristic denoising loss and required pretraining on protein structure prediction. Our goal is to bridge this theory-practice gap and determine if pretraining is necessary.

The present work contributes to both the theory and methodology of \(\mathrm{SE}(3)\) diffusion models with applications to protein backbone generation. In Sec. 3, we characterize the distribution of the Brownian motion on compact Lie groups (in particular \(\mathrm{SO}(3)\)) in a form suitable for denoising score matching (DSM) training and define a forward process on \(\mathrm{SE}(3)^{N}\) that allows for separation of translations and rotations. We show an \(\mathrm{SE}(3)\) invariant process on \(\mathrm{SE}(3)^{N}\) can only be constructed by keeping the diffusion process centered at the origin, since no \(\mathbb{R}^{3}\) invariant probability measure exists. Sec. 4 then applies these contributions to develop \(\mathrm{FrameDiff}\), an \(\mathrm{SE}(3)\) invariant diffusion model on \(\mathrm{SE}(3)^{N}\) for protein backbones that follows the correct DSM training, uses 4-fold fewer parameters than RFdiffusion, and does not require pretraining. Finally, Sec. 5 provides experiments on protein monomers where we report that \(\mathrm{FrameDiff}\) can generate designable and diverse protein backbones up to length 500 that are occasionally novel compared to known proteins. Compared to other methods, we report _in-silico_ designability success rates that are second only to RFdiffusion. Our contributions will enable further advancements in the \(\mathrm{SE}(3)\) diffusion methodology that underlies RFdiffusion and \(\mathrm{FrameDiff}\) for proteins, as well as for other domains such as robotics where \(\mathrm{SE}(3)\) and other Lie groups are used.

## 2 Preliminaries and Notation

**Backbone parameterization.** We adopt the backbone frame parameterization used in AlphaFold2 (AF2) (Jumper et al., 2021). Here, an \(N\) residue backbone is parameterized by a collection of \(N\) orientation preserving rigid transformations, or _frames_, that map from fixed coordinates \(\mathbb{N}^{*},\mathbb{C}^{*}_{\alpha},\mathbb{C}^{*},\mathbb{O}^{*}\in\mathbb{R}^{3}\) centered at \(\mathbb{C}^{*}_{\alpha}=(0,0,0)\) (Fig. 1A). Each fixed coordinate assumes chemically idealized bond angles and lengths measured experimentally (Engh and Huber, 2012). For each residue indexed by \(n\), the backbone main atom coordinates are given by

\[[\mathbb{N}_{n},\mathbb{C}_{n},(\mathbb{C}_{\alpha})_{n}]=T_{n}\cdot[\mathbb{N}^{*},\mathbb{C}^{*},\mathbb{C}^{*}_{\alpha}], \tag{1}\]

where \(T_{n}\) is a member of the special Euclidean group \(\mathrm{SE}(3)\), the set of orientation preserving rigid transformations in Euclidean space.
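Concretely, Eq. (1) is a per-residue rigid-body action. The sketch below applies a frame, given as a rotation matrix and translation, to idealized local coordinates; the numerical values for \(\mathbb{N}^{*}\) and \(\mathbb{C}^{*}\) are the approximate idealized geometry used in AF2-style codebases and are quoted here only for illustration.

```python
import numpy as np

# Approximate idealized backbone coordinates (Angstroms), CA* at the origin;
# values follow AF2-style idealized geometry and are illustrative
N_star = np.array([-0.525, 1.363, 0.000])
CA_star = np.array([0.000, 0.000, 0.000])
C_star = np.array([1.526, 0.000, 0.000])

def backbone_atoms(r, x):
    """Apply the frame T = (r, x), v -> r @ v + x, to the idealized atoms (Eq. 1)."""
    return [r @ v + x for v in (N_star, CA_star, C_star)]

# Example: identity rotation with the CA atom placed at (1, 2, 3)
N, CA, C = backbone_atoms(np.eye(3), np.array([1.0, 2.0, 3.0]))
```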
Each \(T_{n}\) may be decomposed into two components \(T_{n}=(r_{n},x_{n})\), where \(r_{n}\in\mathrm{SO}(3)\) is a \(3\times 3\) rotation matrix and \(x_{n}\in\mathbb{R}^{3}\) represents a translation; for a coordinate \(v\in\mathbb{R}^{3}\), \(T_{n}\cdot v=r_{n}v+x_{n}\) denotes the action of \(T_{n}\) on \(v\). Together, we collectively denote all \(N\) frames as \(\mathbf{T}=[T_{1},\dots,T_{N}]\in\mathrm{SE}(3)^{N}\). With an additional torsion angle \(\psi\), we may construct the backbone oxygen by rotating \(\mathbb{O}^{*}\) around the bond between \(\mathbb{C}_{\alpha}\) and \(\mathbb{C}\). App. I.1 provides additional details on this mapping and the idealized coordinates.

Figure 1: Method overview. **(A)** Backbone parameterization with frames. Each residue along the protein chain shares the same structure of backbone atoms due to the fixed bonds between each atom. Performing the Gram-Schmidt operation on vectors \(v_{1},v_{2}\) results in a rotation matrix \(r\) that parameterizes the \(\mathbb{N}-\mathbb{C}_{\alpha}-\mathbb{C}\) placements with respect to the frame translation, \(x\), set to the \(\mathbb{C}_{\alpha}\) coordinates. An additional torsion angle, \(\psi\), is required to determine the placement of the oxygen atom, \(\mathbb{O}\). **(B)** Inference is performed by sampling \(N\) frames initialized from the reference distribution over rotations and translations. Then a time-reversed \(\mathrm{SE}(3)\) diffusion is run from \(t=T\) to \(t=0\), at which point the \(\psi\) angle is predicted. The final frames and \(\psi\) angles are used to construct the protein backbone atoms.

**Diffusion modeling on manifolds.** To capture a distribution over backbones in \(\mathrm{SE}(3)^{N}\) we build on the Riemannian score based generative modeling approach of De Bortoli et al. (2022). We briefly review this approach. The goal of Riemannian score based generative modeling is to sample from a distribution \(\mathbf{X}^{(0)}\sim p_{0}\) supported on a Riemannian manifold \(\mathcal{M}\) by reversing a stochastic process that transforms data into noise. One first constructs an \(\mathcal{M}\)-valued _forward process_ \((\mathbf{X}^{(t)})_{t\geq 0}\) that evolves from \(p_{0}\) towards an invariant density3 \(p_{\mathrm{inv}}(x)\propto\mathrm{e}^{-U(x)}\) following

\[\mathrm{d}\mathbf{X}^{(t)}=-\tfrac{1}{2}\nabla U(\mathbf{X}^{(t)})\mathrm{d}t+\mathrm{d}\mathbf{B}^{(t)}_{\mathcal{M}},\qquad\mathbf{X}^{(0)}\sim p_{0}, \tag{2}\]

where \(\mathbf{B}^{(t)}_{\mathcal{M}}\) is the Brownian motion on \(\mathcal{M}\). The time-reversal of this process is given by the following proposition.

Footnote 3: density w.r.t. the volume form on \(\mathcal{M}\).

**Proposition 2.1** (Time-reversal, De Bortoli et al. (2022)).: _Let \(\mathrm{T}_{\mathrm{F}}>0\) and \(\overleftarrow{\mathbf{X}}^{(t)}\) given by \(\overleftarrow{\mathbf{X}}^{(0)}\stackrel{{ d}}{{=}}\mathbf{X}^{(\mathrm{T}_{\mathrm{F}})}\) and_

\[\mathrm{d}\overleftarrow{\mathbf{X}}^{(t)}=\{\tfrac{1}{2}\nabla U(\overleftarrow{\mathbf{X}}^{(t)})+\nabla\log p_{\mathrm{T}_{\mathrm{F}}-t}(\overleftarrow{\mathbf{X}}^{(t)})\}\mathrm{d}t+\mathrm{d}\mathbf{B}^{(t)}_{\mathcal{M}},\]

_where \(p_{t}\) is the density of \(\mathbf{X}^{(t)}\). Then under mild assumptions on \(\mathcal{M}\) and \(p_{0}\) we have that \(\overleftarrow{\mathbf{X}}^{(t)}\stackrel{{ d}}{{=}}\mathbf{X}^{(\mathrm{T}_{\mathrm{F}}-t)}\)._

Diffusion modeling in Euclidean space is a special case of Prop. 2.1. However, generative modeling using this reversal
beyond the Euclidean setting requires additional mathematical machinery, which we now review.

**Riemannian gradients and Brownian motions.** In the above, \(\nabla U(x)\) and \(\nabla\log p_{t}(x)\) are _Riemannian gradients_ taking values in \(\mathrm{Tan}_{x}\mathcal{M}\), the tangent space of \(\mathcal{M}\) at \(x\), and depend implicitly on the choice of an inner product on \(\mathrm{Tan}_{x}\mathcal{M}\), denoted by \(\langle\cdot,\cdot\rangle_{\mathcal{M}}\). Similarly, the Brownian motion relies on \(\langle\cdot,\cdot\rangle_{\mathcal{M}}\) through the Laplace-Beltrami operator, \(\Delta_{\mathcal{M}}\), which dictates its density through the Fokker-Planck equation in the absence of drift; if \(\pi_{t}\) is the density of \(\mathbf{B}^{(t)}_{\mathcal{M}}\), then \(\partial_{t}\pi_{t}=\frac{1}{2}\Delta_{\mathcal{M}}\pi_{t}\). We refer the reader to Lee (2013) and Hsu (2002) for background on differential geometry and stochastic analysis on manifolds.

**Denoising score matching.** The quantity \(\nabla\log p_{t}\) is called the Stein score and is unavailable in practice. It is approximated with a score network \(s_{\theta}(t,\cdot)\) trained by minimizing a denoising score matching (DSM) loss

\[\mathcal{L}(\theta)=\mathbb{E}[\lambda_{t}\|\nabla\log p_{t|0}(\mathbf{X}^{(t)}|\mathbf{X}^{(0)})-s_{\theta}(t,\mathbf{X}^{(t)})\|^{2}], \tag{3}\]

where \(p_{t|0}\) is the density of \(\mathbf{X}^{(t)}\) given \(\mathbf{X}^{(0)}\), \(\lambda_{t}>0\) is a weight, and the expectation is taken over \(t\sim\mathcal{U}([0,\mathrm{T}_{\mathrm{F}}])\) and \((\mathbf{X}^{(0)},\mathbf{X}^{(t)})\). For an arbitrarily flexible network, the minimizer \(\theta^{*}=\mathrm{argmin}_{\theta}\mathcal{L}(\theta)\) satisfies \(s_{\theta^{*}}(t,\cdot)=\nabla\log p_{t}\).

**Lie groups** are Riemannian manifolds with an additional group structure, i.e. there exists an operator \(*:G\times G\to G\) such that \((G,*)\) is a group and \(*\) as well as its inverse are smooth. We define the left action as \(L_{g}(h)=g*h\) for any \(g,h\in G\), and its differential is denoted by \(\mathrm{d}(L_{g})(h):\mathrm{Tan}_{h}G\rightarrow\mathrm{Tan}_{gh}G\). \(\mathrm{SO}(3)\), \(\mathrm{SE}(3)\) and \(\mathbb{R}^{3}\) are all Lie groups. For any group \(G\), we denote by \(\mathfrak{g}\) its Lie algebra. We refer to Sola et al. (2018) for an introduction to Lie groups.

**Additional notation.** Superscripts with parentheses are reserved for time, i.e. \(x^{(t)}\). Uppercase is used to denote random variables, e.g. \(X\sim p\), and lower case is used for deterministic variables. Bold denotes concatenated versions of variables, e.g. \(\mathbf{x}=(x_{1},\ldots,x_{N})\), or processes \((\mathbf{X}^{(t)})_{t\in[0,\mathrm{T}_{\mathrm{F}}]}\).

## 3 Diffusion models on \(\mathrm{SE}(3)\)

Parameterizing flexible distributions over protein backbones by extending the Riemannian diffusion method of Sec. 2 to \(\mathrm{SE}(3)^{N}\) requires several ingredients. First, in Sec. 3.1 we develop a forward diffusion process on \(\mathrm{SE}(3)\); then Sec. 3.2 derives DSM training on compact Lie groups, using \(\mathrm{SO}(3)\) as the motivating example. At this point, a diffusion model on \(\mathrm{SE}(3)^{N}\) is defined. Next, we desire \(\mathrm{SE}(3)\) invariance, where the \(\mathrm{SE}(3)^{N}\) data distribution is invariant to global rotations and translations. Sec. 3.3 will show this is not possible without centering the process at the origin and using an \(\mathrm{SO}(3)\)-equivariant neural network.
### Forward diffusion on \(\mathrm{SE}(3)\)

In contrast to Euclidean space and compact manifolds, no canonical forward diffusion on \(\mathrm{SE}(3)^{N}\) exists, and we must define one. This entails (a) choosing an inner product on \(\mathrm{SE}(3)\) to define a Brownian motion and (b) choosing a reference measure for the forward diffusion. We begin with the inner product, which we derive from the canonical inner products for \(\mathrm{SO}(3)\) and \(\mathbb{R}^{3}\), which we recall below (see Carmo (1992)). For \(u,v\in\mathfrak{so}(3)\) and \(x,y\in\mathbb{R}^{3}\),

\[\langle u,v\rangle_{\mathrm{SO}(3)}=\mathrm{Tr}(uv^{\top})/2\quad\mathrm{and}\quad\langle x,y\rangle_{\mathbb{R}^{3}}=\sum_{i=1}^{3}x_{i}y_{i}.\]

In the next proposition, we show that, under an appropriate choice of inner product, \(\mathrm{SE}(3)\) can be identified with \(\mathrm{SO}(3)\times\mathbb{R}^{3}\) from a _Riemannian_ point of view, thereby providing a Laplace-Beltrami operator and a well-defined Brownian motion.

**Proposition 3.1** (Metric on \(\mathrm{SE}(3)\)).: _For any \(T\in\mathrm{SE}(3)\) and \((a,x),(a^{\prime},x^{\prime})\in\mathrm{Tan}_{T}\mathrm{SE}(3)\) we define \(\langle(a,x),(a^{\prime},x^{\prime})\rangle_{\mathrm{SE}(3)}=\langle a,a^{\prime}\rangle_{\mathrm{SO}(3)}+\langle x,x^{\prime}\rangle_{\mathbb{R}^{3}}\). We have:_

1. _for any_ \(f\in\mathrm{C}^{\infty}(\mathrm{SE}(3))\) _and_ \(T=(r,x)\in\mathrm{SE}(3)\)_,_ \(\nabla_{T}f(T)=[\nabla_{r}f(r,x),\nabla_{x}f(r,x)]\)_;_
2. _for any_ \(f\in\mathrm{C}^{\infty}(\mathrm{SE}(3))\) _and_ \(T=(r,x)\in\mathrm{SE}(3)\)_,_ \(\Delta_{\mathrm{SE}(3)}f(T)=\Delta_{\mathrm{SO}(3)}f(r,x)+\Delta_{\mathbb{R}^{3}}f(r,x)\)_;_
3. _for any_ \(t>0\)_,_ \(\mathbf{B}^{(t)}_{\mathrm{SE}(3)}=[\mathbf{B}^{(t)}_{\mathrm{SO}(3)},\mathbf{B}^{(t)}_{\mathbb{R}^{3}}]\) _with independent_ \(\mathbf{B}^{(t)}_{\mathrm{SO}(3)}\) _and_ \(\mathbf{B}^{(t)}_{\mathbb{R}^{3}}\)_._

Other choices of metric for \(\mathrm{SE}(3)\) are possible, leading to different definitions of the exponential map and the Brownian motion. Our choice has the advantage of simplicity and allows us to treat the \(\mathrm{SO}(3)\) and \(\mathbb{R}^{3}\) forward processes independently (conditionally on \(\mathbf{T}^{(0)}\)). For the invariant density of \(T=(r,x)\), we choose \(p_{\mathrm{inv}}^{\mathrm{SE}(3)}(T)\propto\mathcal{U}^{\mathrm{SO}(3)}(r)\,\mathcal{N}(x;0,\mathrm{Id}_{3})\). The associated forward process \((\mathbf{T}^{(t)})_{t\geq 0}=(\mathbf{R}^{(t)},\mathbf{X}^{(t)})_{t\geq 0}\) is given according to (2) and Prop. 3.1 by

\[\mathrm{d}\mathbf{T}^{(t)}=[0,-\tfrac{1}{2}\mathbf{X}^{(t)}]\mathrm{d}t+[\mathrm{d}\mathbf{B}^{(t)}_{\mathrm{SO}(3)},\mathrm{d}\mathbf{B}^{(t)}_{\mathbb{R}^{3}}]. \tag{4}\]
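A minimal Euler-Maruyama sketch of simulating (4) for a single frame is given below: the translation follows the Ornstein-Uhlenbeck drift, while the rotation receives a Brownian increment through the exponential map of a tangent-space Gaussian. This is a simulation sketch for intuition; training uses the closed-form transition densities derived next.

```python
import numpy as np
from scipy.linalg import expm

def hat(v):
    """Map a 3-vector to the corresponding skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def forward_step(r, x, dt, rng):
    """One Euler-Maruyama step of Eq. (4) for a single frame T = (r, x)."""
    x_new = x - 0.5 * x * dt + np.sqrt(dt) * rng.standard_normal(3)
    # Brownian increment on SO(3): exponential of a tangent-space Gaussian
    r_new = r @ expm(hat(np.sqrt(dt) * rng.standard_normal(3)))
    return r_new, x_new

rng = np.random.default_rng(0)
r, x = np.eye(3), np.zeros(3)
for _ in range(100):            # integrate to T_F = 1 with dt = 0.01
    r, x = forward_step(r, x, 0.01, rng)
```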
### Denoising score matching on \(\mathrm{SE}(3)\)

As a consequence of Prop. 3.1 and the independence of the rotational and translational components of the forward process, we have \(\nabla_{\mathbf{T}^{(t)}}\log p_{t|0}(\mathbf{T}^{(t)}|\mathbf{T}^{(0)})=[\nabla_{\mathbf{R}^{(t)}}\log p_{t|0}(\mathbf{R}^{(t)}|\mathbf{R}^{(0)}),\nabla_{\mathbf{X}^{(t)}}\log p_{t|0}(\mathbf{X}^{(t)}|\mathbf{X}^{(0)})]\), and we can compute these quantities _independently_ over the rotation and translation components.

**Denoising score matching on \(\mathrm{SO}(3)\).** The forward process \((\mathbf{R}^{(t)})_{t\geq 0}\) is simply the Brownian motion on \(\mathrm{SO}(3)\), and is defined by the heat kernel, see Hsu (2002). We obtain \(p_{t|0}\) analytically as a series, as a special case of the decomposition of the heat kernel for compact Lie groups.

**Proposition 3.2** (Brownian motion on compact Lie groups).: _Assume that \(\mathcal{M}\) is a compact Lie group, where for any \(\ell\in\mathbb{N}\), \(\chi_{\ell}\) is the character associated with the irreducible unitary representation of dimension \(d_{\ell}\). Then \(\chi_{\ell}:\ \mathcal{M}\rightarrow\mathbb{R}\) is an eigenvector of \(\Delta\) and there exists \(\lambda_{\ell}\geq 0\) such that \(\Delta\chi_{\ell}=-\lambda_{\ell}\chi_{\ell}\). In addition, we have for any \(t>0\) and \(x^{(0)},x^{(t)}\in\mathcal{M}\), \(p_{t|0}(x^{(t)}|x^{(0)})=\sum_{\ell\in\mathbb{N}}d_{\ell}e^{-\lambda_{\ell}t/2}\chi_{\ell}((x^{(0)})^{-1}x^{(t)})\)._

Combining Prop. 3.2 and the explicit expression of the irreducible characters of \(\mathrm{SO}(3)\) provides an explicit expression for the transition density of \(\mathbf{B}^{(t)}_{\mathrm{SO}(3)}\). In App. E.1, we showcase another application of our method by computing the heat kernel on \(\mathrm{SU}(2)\).

**Proposition 3.3** (Brownian motion on \(\mathrm{SO}(3)\)).: _For any \(t>0\) and \(r^{(0)},r^{(t)}\in\mathrm{SO}(3)\) we have that \(p_{t|0}(r^{(t)}|r^{(0)})=\mathrm{IGSO}_{3}(r^{(t)};r^{(0)},t)\) given by \(\mathrm{IGSO}_{3}(r^{(t)};r^{(0)},t)=f(\omega(r^{(0)}{}^{\top}r^{(t)}),t)\), where \(\omega(r)\) is the rotation angle in radians for any \(r\in\mathrm{SO}(3)\) (its length in the axis-angle representation4) and_

\[f(\omega,t)=\sum_{\ell\in\mathbb{N}}(2\ell+1)\mathrm{e}^{-\ell(\ell+1)t/2}\frac{\sin((\ell+1/2)\omega)}{\sin(\omega/2)}. \tag{5}\]

Footnote 4: See App. C.3 for details about the parameterization of \(\mathrm{SO}(3)\).

Prop. 3.3 agrees with previously proposed expressions for the law of the Brownian motion (Nikolayev & Savyolov, 1970; Leach et al., 2022) up to a two-fold deceleration of time. This deceleration is crucial to the correct application of Prop. 2.1 (see App. E.3 for details). Accurate values of the Brownian density (5) can easily be obtained by truncating the series. Also, although exact sampling is not available, accurate samples can be obtained by numerically inverting the cdf (Leach et al., 2022). Moreover, this density allows computation of the conditional score required by the DSM loss.

**Proposition 3.4** (Score on \(\mathrm{SO}(3)\)).: _For \(t>0\), \(r^{(0)},r^{(t)}\in\mathrm{SO}(3)\), we have_

\[\nabla\log p_{t|0}(r^{(t)}\mid r^{(0)})=\tfrac{r^{(t)}}{\omega^{(t)}}\log\{r^{(0,t)}\}\frac{\partial_{\omega}f(\omega^{(t)},t)}{f(\omega^{(t)},t)},\]

_with \(r^{(0,t)}=r^{(0)\top}r^{(t)}\), \(\omega^{(t)}=\omega(r^{(0,t)})\) and \(\log\) the inverse of the exponential on \(\mathrm{SO}(3)\), i.e. the matrix logarithm._

**Denoising score matching on \(\mathbb{R}^{3}\).** The process \((\mathbf{X}^{(t)})_{t\geq 0}\) is an Ornstein-Uhlenbeck process, see (4) (also called the VP-SDE; Song et al., 2021), and converges geometrically to \(\mathcal{N}(0,\mathrm{Id})\). In addition, \(p_{t|0}(x^{(t)}|x^{(0)})=\mathcal{N}(x^{(t)};\mathrm{e}^{-t/2}x^{(0)},(1-\mathrm{e}^{-t})\operatorname{Id}_{3})\) and the corresponding conditional score can be computed explicitly as

\[\nabla\log p_{t|0}(x^{(t)}|x^{(0)})=(1-\mathrm{e}^{-t})^{-1}(\mathrm{e}^{-t/2}x^{(0)}-x^{(t)}).\]
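Props. 3.3 and 3.4 translate directly into code once the series is truncated. The sketch below computes the density \(f(\omega,t)\) of Eq. (5) and the scalar factor \(\partial_{\omega}\log f\) appearing in the score; the truncation level and the finite-difference derivative are implementation choices, not part of the theory.

```python
import numpy as np

def igso3_f(omega, t, L=1000):
    """Truncated IGSO(3) series f(omega, t) from Eq. (5), for 0 < omega < pi."""
    l = np.arange(L)
    return np.sum((2 * l + 1) * np.exp(-l * (l + 1) * t / 2.0)
                  * np.sin((l + 0.5) * omega) / np.sin(omega / 2.0))

def igso3_dlog_f(omega, t, eps=1e-4):
    """d/d_omega log f(omega, t), the scalar part of the SO(3) score
    (Prop. 3.4), via a central finite difference."""
    df = (igso3_f(omega + eps, t) - igso3_f(omega - eps, t)) / (2.0 * eps)
    return df / igso3_f(omega, t)

# Example: density and score factor at a 1-radian rotation angle, t = 0.5
print(igso3_f(1.0, 0.5), igso3_dlog_f(1.0, 0.5))
```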
### \(\mathrm{SE}(3)\) invariance through centered \(\mathrm{SE}(3)^{N}\)

In this subsection, we show how one can construct a diffusion process over \(\mathrm{SE}(3)^{N}\) that is invariant to global translations and rotations. Formally, we want to design a measure \(\mu\) on \(\mathrm{SE}(3)^{N}\) such that for any \(T_{0}\in\mathrm{SE}(3)\) and measurable \(\mathbf{A}\subset\mathrm{SE}(3)^{N}\), \(\mu(\mathbf{A})=\mu(\{T_{0}\cdot\mathbf{T}\,\ \mathbf{T}\in\mathbf{A}\})\), where for any \(\mathbf{T}=(T_{1},\cdots,T_{N})\), \(T_{0}\cdot\mathbf{T}=(T_{0}T_{1},\ldots,T_{0}T_{N})\). Unfortunately, there exists no probability measure on \(\mathrm{SE}(3)^{N}\) which is \(\mathrm{SE}(3)\) invariant, since there exists no probability measure on \(\mathbb{R}^{3N}\) which is \(\mathbb{R}^{3}\) invariant. As a result, no output of an \(\mathrm{SE}(3)^{N}\)-valued diffusion model can be \(\mathrm{SE}(3)\) invariant. However, we will show that \(\mathrm{SE}(3)\) invariance is achieved by keeping the diffusion process always centered at the origin.

**From \(\mathrm{SE}(3)\) to \(\mathrm{SO}(3)\) invariance.** We show that we can construct an invariant _measure_ on \(\mathrm{SE}(3)^{N}\) by keeping the center of mass fixed to zero, i.e. \(\sum_{n=1}^{N}x_{n}=0\). Formally, this defines a subgroup \([(r_{1},x_{1}),\ldots,(r_{N},x_{N})]\in\mathrm{SE}(3)^{N}_{0}\), which we refer to as _centered_ \(\mathrm{SE}(3)^{N}\). Note that \(\mathrm{SE}(3)^{N}_{0}\) is still a Lie group and \(\mathrm{SO}(3)\) is a subgroup of \(\mathrm{SE}(3)^{N}_{0}\).

**Proposition 3.5** (Disintegration of measures on \(\mathrm{SE}(3)^{N}\)).: _Under mild assumptions5, for every \(\mathrm{SE}(3)\)-invariant measure \(\mu\) on \(\mathrm{SE}(3)^{N}\), there exist \(\eta\), an \(\mathrm{SO}(3)\)-invariant probability measure on \(\mathrm{SE}(3)^{N}_{0}\), and \(\bar{\mu}\), proportional to the Lebesgue measure on \(\mathbb{R}^{3}\), such that_

\[\mathrm{d}\mu([(r_{1},x_{1}),...,(r_{N},x_{N})])=\mathrm{d}\bar{\mu}(\tfrac{1}{N}\sum_{i=1}^{N}x_{i})\]
\[\times\mathrm{d}\eta([(r_{1},x_{1}-\tfrac{1}{N}\sum_{i=1}^{N}x_{i}),...,(r_{N},x_{N}-\tfrac{1}{N}\sum_{i=1}^{N}x_{i})]).\]

Footnote 5: See App. G for a precise statement.

The previous proposition is based on the _disintegration of measures_ (Pollard, 2002). The converse is also true. In practice, this means that in order to define an \(\mathrm{SE}(3)\)-invariant measure on \(\mathrm{SE}(3)^{N}\) one only needs to define an \(\mathrm{SO}(3)\)-invariant measure on \(\mathrm{SE}(3)^{N}_{0}\). This is the goal of the next paragraph.

**Diffusion models on \(\mathrm{SE}(3)^{N}_{0}\).** A simple modification of the forward process (4) yields a stochastic process on \(\mathrm{SE}(3)^{N}_{0}\). Indeed, consider \((\mathbf{T}^{(t)})_{t\geq 0}\) on \(\mathrm{SE}(3)^{N}\) given by

\[\mathrm{d}\mathbf{T}^{(t)}=[0,-\tfrac{1}{2}\mathrm{P}\mathbf{X}^{(t)}]\mathrm{d}t+[\mathrm{d}\mathbf{B}^{(t)}_{\mathrm{SO}(3)^{N}},\mathrm{Pd}\mathbf{B}^{(t)}_{\mathbb{R}^{3N}}], \tag{6}\]

where \(\mathrm{P}\in\mathbb{R}^{3N\times 3N}\) is the projection matrix removing the center of mass \(\tfrac{1}{N}\sum_{n=1}^{N}x_{n}\). Then \((\mathbf{T}^{(t)})_{t\geq 0}=(\mathbf{R}^{(t)},\mathbf{X}^{(t)})_{t\geq 0}\) is a stochastic process on \(\mathrm{SE}(3)^{N}_{0}\) with invariant measure \(\mathrm{P}_{\#}(\mathcal{N}(0,\mathrm{Id})^{\otimes N})\otimes\mathcal{U}(\mathrm{SO}(3))^{\otimes N}\)6. We note that such 'center of mass free' systems have been proposed for continuous normalizing flows and discrete time diffusion models (Kohler et al., 2020; Xu et al., 2022).
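In practice, applying the projection \(\mathrm{P}\) amounts to a mean subtraction over residues, applied to both the drift and the Brownian increments in (6); a two-line sketch:

```python
import numpy as np

def remove_com(x):
    """Apply P: project translations onto the zero center-of-mass subspace
    by subtracting the mean over the N residues; x has shape (N, 3)."""
    return x - x.mean(axis=0, keepdims=True)

x = np.random.default_rng(0).standard_normal((5, 3))
assert np.allclose(remove_com(x).mean(axis=0), 0.0)
```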
An application of Props. 2.1 and 3.1 shows that the backward process \((\overleftarrow{\mathbf{T}}^{(t)})_{t\in[0,\mathrm{T}_{\mathrm{F}}]}=([\overleftarrow{\mathbf{R}}^{(t)},\overleftarrow{\mathbf{X}}^{(t)}])_{t\in[0,\mathrm{T}_{\mathrm{F}}]}\) is given by

\[\mathrm{d}\overleftarrow{\mathbf{R}}^{(t)}=\nabla_{r}\log p_{\mathrm{T}_{\mathrm{F}}-t}(\overleftarrow{\mathbf{T}}^{(t)})\mathrm{d}t+\mathrm{d}\mathbf{B}^{(t)}_{\mathrm{SO}(3)^{N}}, \tag{7}\]
\[\mathrm{d}\overleftarrow{\mathbf{X}}^{(t)}=\mathrm{P}\{\tfrac{1}{2}\overleftarrow{\mathbf{X}}^{(t)}+\nabla_{x}\log p_{\mathrm{T}_{\mathrm{F}}-t}(\overleftarrow{\mathbf{T}}^{(t)})\}\mathrm{d}t+\mathrm{P}\mathrm{d}\mathbf{B}^{(t)}_{\mathbb{R}^{3N}}.\]

As in Sec. 3.2, we have \(p_{t|0}((\mathbf{r}^{(t)},\mathbf{x}^{(t)})|(\mathbf{r}^{(0)},\mathbf{x}^{(0)}))=p_{t|0}(\mathbf{r}^{(t)}|\mathbf{r}^{(0)})p_{t|0}(\mathbf{x}^{(t)}|\mathbf{x}^{(0)})\), where these densities additionally factorize along each of the residues. We use the forward process (6) for training in App. J.1 and the backward process (7) for sampling in App. J.2.

**Invariance and equivariance on Lie groups.** Finally, we want the output of the backward process, i.e. the distribution of \((\overleftarrow{\mathbf{R}}^{(t)},\overleftarrow{\mathbf{X}}^{(t)})\) given by (7), to be \(\mathrm{SO}(3)\)-invariant so that the associated measure on \(\mathrm{SE}(3)^{N}\) given by Prop. 3.5 is \(\mathrm{SE}(3)\)-invariant. To do so, we use the following result.

**Proposition 3.6** (\(G\)-invariance and SDEs).: _Let \(G\) be a Lie group and \(H\) a subgroup of \(G\). Let \(\mathbf{X}\) be associated with \(\mathrm{d}\mathbf{X}^{(t)}=b(t,\mathbf{X}^{(t)})\mathrm{d}t+\Sigma^{1/2}\mathrm{d}\mathbf{B}^{(t)}\), with bounded coefficients, where \(\mathbf{B}^{(t)}\) is a Brownian motion associated with a left-invariant metric. Assume that the distribution of \(\mathbf{X}^{(0)}\) is \(H\)-invariant and that for any \(t\geq 0\) and \(h\in H\), \(\Sigma(\mathrm{d}L_{h}.\nabla p_{t})=\mathrm{d}L_{h}.(\Sigma\nabla p_{t})\) and \(b\circ L_{h}=\mathrm{d}L_{h}.b\), i.e. the drift \(b\) is equivariant w.r.t. the action of \(H\). Then the distribution of \(\mathbf{X}^{(t)}\) is \(H\)-invariant for any \(t\geq 0\)._

The proof can be extended to non-bounded coefficients under appropriate assumptions on the growth of \(b\). As a consequence of Prop. 3.6 we obtain the announced invariance.

**Corollary 3.7** (\(\mathrm{SO}(3)\)-invariance of (7)).: _Let \((\overleftarrow{\mathbf{R}}^{(t)},\overleftarrow{\mathbf{X}}^{(t)})\) be given by (7). Assume that the coefficients of (7) are bounded. Then the distribution of \((\overleftarrow{\mathbf{R}}^{(t)},\overleftarrow{\mathbf{X}}^{(t)})\) is \(\mathrm{SO}(3)\)-invariant._

Corollary 3.7 remains true if \([\nabla_{r}\log p_{t},\nabla_{x}\log p_{t}]\) is replaced with \([s^{r}_{\theta},s^{x}_{\theta}]\), where \(s^{r}_{\theta}\) and \(s^{x}_{\theta}\) are \(\mathrm{SO}(3)\)-equivariant neural networks; see Sec. 4.1.

## 4 Protein backbone diffusion model

We now describe \(\mathrm{FrameDiff}\), a diffusion model for sampling protein backbones by modeling frames, based on the centered \(\mathrm{SE}(3)^{N}\) stochastic process of Sec. 3. In Sec. 4.1, we describe our neural network to learn the score using frame and torsion predictions. Sec. 4.2 presents a multi-objective loss involving score matching and auxiliary protein structure losses. Additional details for training and sampling are postponed to Apps. J.1 and J.2.
### \(\mathrm{FramePred}\): score and torsion prediction

In this section, we provide an overview of our score and torsion prediction network; technical details are given in App. I.2. Our neural network to learn the score is based on the structure module of AlphaFold2 (AF2) (Jumper et al., 2021), which has previously been adopted for diffusion by Anand and Achim (2022). Namely, it performs iterative updates to the frames over a series of \(L\) layers using a combination of _spatial_ and _sequence_ based attention modules. Let \(\mathbf{h}_{\ell}=[h^{1}_{\ell},\dots,h^{N}_{\ell}]\in\mathbb{R}^{N\times D_{h}}\) be the node embeddings of the \(\ell\)-th layer, where \(h^{n}_{\ell}\) is the embedding for residue \(n\). \(\mathbf{z}_{\ell}\in\mathbb{R}^{N\times N\times D_{z}}\) are edge embeddings, with \(z^{nm}_{\ell}\) being the embedding of the edge between residues \(n\) and \(m\). Fig. 2 shows a single layer of our neural network. Spatial attention is performed with Invariant Point Attention (IPA) from AF2, which can attend to residues that are close in coordinate space, while a Transformer (Vaswani et al., 2017) allows for capturing interactions along the chain structure. We found that including the Transformer greatly improved training and sample quality. As a result, the computational complexity of \(\mathrm{FrameDiff}\) is quadratic in backbone length. Unlike AF2, we do not use \(\mathrm{StopGradient}\) between rotation updates. The updates are \(\mathrm{SE}(3)\)-invariant since IPA is \(\mathrm{SE}(3)\)-invariant. We utilize a fully connected graph structure where each residue attends to every other residue. Updates to the node embeddings are propagated to the edges in \(\mathrm{EdgeUpdate}\), where a standard message passing edge update is performed. \(\mathrm{BackboneUpdate}\) is taken from AF2 (Algorithm 23), where a linear layer is used to predict translation and rotation updates to each frame. Feature initialization follows Trippe et al. (2022): node embeddings are initialized with residue indices and the timestep, while edge embeddings additionally get relative sequence distances. Edge embeddings are additionally initialized through self-conditioning (Chen et al., 2022) with a binned pairwise distance matrix between the model's \(\mathbb{C}_{\alpha}\) predictions. All coordinates are represented in nanometers. Our model also outputs a prediction of the \(\boldsymbol{\psi}\) angle for each residue, which positions the backbone oxygen atom with respect to the predicted frame. Putting it all together, our neural network with weights \(\theta\) predicts the denoised frame and torsion angle,

\[(\widehat{\mathbf{T}}^{(0)},\hat{\boldsymbol{\psi}})=\mathrm{FramePred}(\mathbf{T}^{(t)},t;\theta),\ \ \widehat{\mathbf{T}}^{(0)}=(\widehat{\mathbf{R}}^{(0)},\widehat{\mathbf{X}}^{(0)}).\]

**Score parameterization.** We relate the \(\mathrm{FrameDiff}\) prediction to a score prediction via \(\nabla_{\mathbf{T}^{(t)}}\log p_{t|0}(\mathbf{T}^{(t)}\mid\widehat{\mathbf{T}}^{(0)})=\{(s^{r}_{\theta}(t,\mathbf{T}^{(t)})_{n},s^{x}_{\theta}(t,\mathbf{T}^{(t)})_{n})\}_{n=1}^{N}\), where the predicted score is computed separately for the rotation and translation of each residue, \(s^{r}_{\theta}(t,\mathbf{T}^{(t)})_{n}=\nabla_{\mathbf{R}^{(t)}_{n}}\log p_{t|0}(\mathbf{R}^{(t)}_{n}|\hat{\mathbf{R}}^{(0)}_{n})\) and \(s^{x}_{\theta}(t,\mathbf{T}^{(t)})_{n}=\nabla_{\mathbf{X}^{(t)}_{n}}\log p_{t|0}(\mathbf{X}^{(t)}_{n}|\hat{\mathbf{X}}^{(0)}_{n})\).
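Both score components have closed forms once the network's denoised prediction is available. The sketch below evaluates them for a single residue; `dlog_f` refers to an IGSO(3) helper like the one sketched in Sec. 3.2, and the tangent-space conventions are those of Prop. 3.4.

```python
import numpy as np
from scipy.linalg import logm

def translation_score(x_t, x0_pred, t):
    """grad log p_{t|0}(x_t | x0_pred) = (e^{-t/2} x0_pred - x_t) / (1 - e^{-t})."""
    return (np.exp(-t / 2.0) * x0_pred - x_t) / (1.0 - np.exp(-t))

def rotation_score(r_t, r0_pred, t, dlog_f):
    """SO(3) score of Prop. 3.4 evaluated at the predicted rotation.
    `dlog_f(omega, t)` should return d/d_omega log f(omega, t)."""
    log_r0t = np.real(logm(r0_pred.T @ r_t))   # matrix log, element of so(3)
    omega = np.linalg.norm([log_r0t[2, 1], log_r0t[0, 2], log_r0t[1, 0]])
    return (r_t @ log_r0t / omega) * dlog_f(omega, t)
```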
### Training losses Learning the translation and rotation score amounts to minimizing the DSM loss given in (3). Following Song et al. (2021), we choose the weighting schedule for the rotation component as \(\lambda_{t}^{\mathrm{r}}=1/\mathbb{E}[\|\nabla\log p_{t|0}(\mathbf{R}_{n}^{(t)}|\mathbf{R}_{n}^{(0)})\|_{\mathrm{SO}(3)}^{2}]\); with this choice, the expected loss of the trivial prediction \(\hat{R}^{(0)}=R^{(t)}\) is equal to \(1\) for every \(t\). For translations, we use \(\lambda_{t}^{\mathrm{x}}=(1-\mathrm{e}^{-t})/\mathrm{e}^{-t/2}\) so (3) simplifies as \[\mathcal{L}_{\mathrm{dsm}}^{\mathrm{x}}=\tfrac{1}{N}\sum_{n=1}^{N}\|X_{n}^{(0)}-\hat{X}_{n}^{(0)}\|^{2}.\] We find this choice beneficial for avoiding loss instabilities near low \(t\) (see Karras et al. (2022) for more discussion), where atomic accuracy is crucial for sample quality. This choice also has the physical interpretation of directly predicting the \(\mathbb{C}_{\alpha}\) coordinates. Our \(\mathrm{SE}(3)\) DSM loss is \(\mathcal{L}_{\mathrm{dsm}}=\mathcal{L}_{\mathrm{dsm}}^{\mathrm{r}}+\mathcal{L}_{\mathrm{dsm}}^{\mathrm{x}}\). **Auxiliary losses.** In early experiments, we found that \(\mathrm{FrameDiff}\) with \(\mathcal{L}_{\mathrm{dsm}}\) generated backbones with plausible coarse-grained topologies, but unrealistic fine-grained characteristics, such as chain breaks or steric clashes. To discourage these physical violations, we use two additional losses to learn the torsion angle \(\psi\) and to directly penalize atomic errors in the last steps of generation. Let \(\Omega=\{\mathbb{N},\mathbb{C},\mathbb{C}_{\alpha},\mathbb{O}\}\) be the collection of backbone atoms. The first loss is a direct MSE on the backbone (bb) positions, \[\mathcal{L}_{\mathrm{bb}}=\tfrac{1}{4N}\sum_{n=1}^{N}\sum_{a\in\Omega}\|a_{n}^{(0)}-\hat{a}_{n}^{(0)}\|^{2}.\] Next, define \(d_{ab}^{nm}=\|a_{n}^{(0)}-b_{m}^{(0)}\|\) as the true atomic distance between atoms \(a,b\in\Omega\) for residues \(n\) and \(m\). The predicted pairwise atomic distance is \(\hat{d}_{ab}^{nm}=\|\hat{a}_{n}^{(0)}-\hat{b}_{m}^{(0)}\|\). Similar in spirit to the distogram loss in AF2, the second loss is a local neighborhood loss on pairwise atomic distances, \[\mathcal{L}_{\mathrm{2D}}=\tfrac{1}{Z}\sum_{n,m=1}^{N}\sum_{a,b\in\Omega}\mathds{1}\{d_{ab}^{nm}<0.6\}\|d_{ab}^{nm}-\hat{d}_{ab}^{nm}\|^{2},\] where \(\mathds{1}\{d_{ab}^{nm}<0.6\}\) is an indicator variable that penalizes only atom pairs within 0.6 nm (i.e. \(6\) Å), and \(Z=(\sum_{n,m=1}^{N}\sum_{a,b\in\Omega}\mathds{1}\{d_{ab}^{nm}<0.6\})-N\) is a normalizing constant. We apply auxiliary losses only when \(t\) is sampled near 0 (\(t<\mathrm{T}_{\mathrm{F}}/4\) in our experiments), during which the fine-grained characteristics emerge. The full training loss can be written as \[\mathcal{L}=\mathcal{L}_{\mathrm{dsm}}+w\cdot\mathds{1}\{t<\tfrac{\mathrm{T}_{\mathrm{F}}}{4}\}(\mathcal{L}_{\mathrm{bb}}+\mathcal{L}_{\mathrm{2D}}), \tag{8}\] where \(w>0\) is a weight on these additional losses. We find that including a high weight (\(w=25\) in our experiments) leads to improved sample quality with fewer steric clashes and chain breaks. Training follows standard diffusion training over the empirical data distribution \(p_{0}\). A full algorithm (Alg. 3) is provided in the appendix.
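A minimal PyTorch sketch of the two auxiliary losses as written above. The (N, 4, 3) tensor layout, the atom ordering, and the function name are our assumptions.

```python
import torch

def backbone_losses(true_bb, pred_bb, cutoff=0.6):
    """true_bb, pred_bb: (N, 4, 3) backbone atom coordinates in nanometers,
    atoms ordered (N, C, C-alpha, O). Returns (L_bb, L_2D) as in Sec. 4.2."""
    N = true_bb.shape[0]
    # L_bb: mean squared error over all 4N backbone atom positions
    l_bb = ((true_bb - pred_bb) ** 2).sum(dim=-1).mean()
    # Pairwise atomic distances over all (residue, atom) pairs
    flat_true = true_bb.reshape(-1, 3)   # (4N, 3)
    flat_pred = pred_bb.reshape(-1, 3)
    d_true = torch.cdist(flat_true, flat_true)
    d_pred = torch.cdist(flat_pred, flat_pred)
    mask = (d_true < cutoff).float()     # indicator 1{d < 0.6 nm}
    Z = mask.sum() - N                   # normalizer as defined in the text
    l_2d = (mask * (d_true - d_pred) ** 2).sum() / Z
    return l_bb, l_2d
```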
```
1: Require: \(\theta,N,\mathrm{T}_{\mathrm{F}},N_{\mathrm{steps}},\zeta,\epsilon\)
2: \(\gamma=(1-\epsilon)/N_{\mathrm{steps}}\)  \(\triangleright\) Compute step-size
3: \(\mathbf{T}^{(\mathrm{T}_{\mathrm{F}})}\sim P_{\#}\underline{p}_{\mathrm{inv}}^{\mathrm{SE}(3)^{N}}\)  \(\triangleright\) Sample from invariant density
4: for \(t=\mathrm{T}_{\mathrm{F}},\mathrm{T}_{\mathrm{F}}-\gamma,\mathrm{T}_{\mathrm{F}}-2\gamma,\dots,\epsilon\) do
5:   \(\widehat{\mathbf{T}}^{(0)},\_\,=\mathrm{FramePred}(\mathbf{T}^{(t)},t;\theta)\)
6:   \(\{(s_{\theta,n}^{\mathrm{r}},s_{\theta,n}^{\mathrm{x}})\}_{n=1}^{N}=\nabla_{\mathbf{T}^{(t)}}\log p_{t|0}(\mathbf{T}^{(t)}\mid\widehat{\mathbf{T}}^{(0)})\)
7:   for \((R_{n}^{(t)},X_{n}^{(t)})=T_{1}^{(t)},\dots,T_{N}^{(t)}\) do
8:     \(Z_{n}^{\mathrm{x}}\sim\mathcal{N}(0,\mathrm{Id}_{3})\)  \(\triangleright\) Tangent Gaussian
9:     \(W_{n}^{\mathrm{x}}=P\gamma[\tfrac{1}{2}X_{n}^{(t)}+s_{\theta,n}^{\mathrm{x}}]+\zeta\sqrt{\gamma}Z_{n}^{\mathrm{x}}\)
10:    \(W_{n}^{\mathrm{x}}=PW_{n}^{\mathrm{x}}\)  \(\triangleright\) Remove CoM from increment
11:    \(Z_{n}^{\mathrm{r}}\sim\mathcal{TN}_{R_{n}^{(t)}}(0,\mathrm{Id})\)  \(\triangleright\) Tangent Gaussian
12:    \(W_{n}^{\mathrm{r}}=\gamma s_{\theta,n}^{\mathrm{r}}+\zeta\sqrt{\gamma}Z_{n}^{\mathrm{r}}\)
13:    \(T_{n}^{(t-\gamma)}=\exp_{T_{n}^{(t)}}\{(W_{n}^{\mathrm{r}},W_{n}^{\mathrm{x}})\}\)
14: return \(\mathrm{FramePred}(\mathbf{T}^{(\epsilon)},\epsilon;\theta)\)
```
**Algorithm 1** \(\mathrm{FrameDiff}\) sampling of protein backbones **Centering of training examples.** Each training example \(\mathbf{X}^{(t)}\) is centered at zero in accordance with Eq. (6). From a practical perspective, this centering leads to lower variance loss estimates than without centering. In particular, variability in the center of mass of \(\mathbf{X}^{(t)}\) would lead to corresponding variability in \(\mathrm{FrameDiff}\)'s frame predictions as a result of the architecture's \(\mathrm{SE}(3)\) equivariance. By centering training examples, we eliminate this variability and thereby reduce the variance of \(\mathcal{L}_{\mathrm{dsm}}^{\mathrm{x}}\) and of its gradient estimates. Figure 2: Single layer of \(\mathrm{FrameDiff}\). Each layer takes in the current node embedding \(\mathbf{h}_{\ell}\), edge embedding \(\mathbf{z}_{\ell}\), frames \(\mathbf{T}_{\ell}\), and initial node embedding \(\mathbf{h}_{0}\). Rectangles indicate trainable neural networks. Node embeddings are first updated using IPA with a skip connection. Before the Transformer, the initial node embeddings and post-IPA embeddings are concatenated. After the Transformer, we include a skip connection with post-IPA embeddings. The updated node embeddings \(\mathbf{h}_{\ell+1}\) are then used to update edge embeddings \(\mathbf{z}_{\ell+1}\) as well as predict frame updates \(\mathbf{T}_{\ell+1}\). See App. I.2 for in-depth architecture details. ### Sampling Alg. 1 provides our sampling procedure. Following De Bortoli et al. (2022), we use an Euler-Maruyama discretization of Eq. (7) with \(N_{\mathrm{steps}}\) steps implemented as a geodesic random walk. Each step involves samples \(Z_{n}^{\mathrm{x}}\) and \(Z_{n}^{\mathrm{r}}\) from Gaussian distributions defined in the tangent spaces of \(X_{n}^{(t)}\) and \(R_{n}^{(t)}\), respectively. For translations, this is simply the usual Gaussian distribution on \(\mathbb{R}^{3}\), \(Z_{n}^{\mathrm{x}}\sim\mathcal{N}(0,\mathrm{Id}_{3})\).
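For concreteness, below is a minimal sketch of one inner-loop step of Alg. 1 (lines 7 through 13). We represent rotation-tangent quantities by their \(\mathfrak{so}(3)\) coefficients so that the exponential map reduces to a rotation-vector update; this representation choice and all names are ours, not the released implementation. The rotation tangent sampling itself is described in the next paragraph.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def reverse_step(R_t, X_t, score_r, score_x, gamma, zeta):
    """One Euler-Maruyama / geodesic random-walk step of Alg. 1.
    R_t: (N, 3, 3) rotations; X_t: (N, 3) centered translations.
    score_r: (N, 3) rotation scores as so(3) coefficients in the body frame;
    score_x: (N, 3) translation scores."""
    N = X_t.shape[0]
    center = lambda v: v - v.mean(axis=0)          # projection P (remove CoM)
    # Translation update (lines 8-10)
    Z_x = np.random.randn(N, 3)
    W_x = center(gamma * (0.5 * X_t + score_x)) + zeta * np.sqrt(gamma) * Z_x
    W_x = center(W_x)
    X_next = X_t + W_x                             # exp map on R^3 is addition
    # Rotation update (lines 11-13)
    Z_r = np.random.randn(N, 3)                    # tangent Gaussian coefficients
    W_r = gamma * score_r + zeta * np.sqrt(gamma) * Z_r
    R_next = R_t @ Rotation.from_rotvec(W_r).as_matrix()
    return R_next, X_next
```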
For rotations, we sample the coefficients of orthonormal basis vectors of the Lie algebra \(\mathfrak{so}(3)\) and rotate them into the tangent space to generate \(Z_{n}^{\mathrm{r}}\sim\mathcal{TN}_{R_{n}^{(t)}}(0,\mathrm{Id})\) as \(Z_{n}^{\mathrm{r}}=R_{n}^{(t)}\sum_{i=1}^{3}\delta_{i}\mathbf{e}_{i}\), where \(\delta_{i}\overset{iid}{\sim}\mathcal{N}(0,1)\) and \(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\) are orthonormal basis vectors (see App. C.2 for details). Because we found that the backbones commonly destabilized in the final steps of sampling, we truncate sampling trajectories early, at a time \(\epsilon>0\). Following Watson et al. (2022), we explore generating from the reverse process with noise downscaled by a factor \(\zeta\in\left[0,1\right]\). For simplicity of exposition, we have so far assumed that the forward diffusion involves a Brownian motion without a diffusion coefficient; in practice we set \(\mathrm{T}_{\mathrm{F}}=1\) and consider different diffusion coefficients for the rotation and translation (see App. I.3). ## 5 Experiments We evaluate \(\mathrm{FrameDiff}\) on monomer backbone generation. We trained \(\mathrm{FrameDiff}\) with \(L=4\) layers on a filtered set of 20312 backbones taken from the Protein Data Bank (PDB) (Berman et al., 2000). Our model comprises 17.4 million parameters and was trained for one week on two A100 Nvidia GPUs. See App. J.1 for data and training details. We analyzed our samples in terms of designability (whether a matching sequence can be found), diversity, and novelty. Comparison to prior protein backbone diffusion models is challenging due to differences in training and evaluation among them. We compare against published results from two promising protein backbone diffusion models for protein design: Chroma (Ingraham et al., 2022) and RFdiffusion (Watson et al., 2022). We refer to Sec. 6 for details on these and other diffusion methods. ### Monomeric protein generation and evaluation We assess \(\mathrm{FrameDiff}\)'s performance in unconditional generation of _monomeric_ protein backbones. In this section, we detail our inference and evaluation procedure. **Designability.** A generated backbone is meaningful only if there exists an amino acid sequence which folds to that structure. We follow Trippe et al. (2022) and assess backbone designability with _self-consistency_ evaluation: a fixed-backbone sequence design algorithm proposes sequences, these sequences are input to a structure prediction algorithm, and self-consistency is assessed as the best agreement between the sampled and predicted backbones (see Fig. 5). In this work, we use \(\mathrm{ProteinMPNN}\) at temperature 0.1 to generate \(N_{\mathrm{seq}}\) sequences for \(\mathrm{ESMFold}\) (Lin et al., 2022) to predict structures. We quantify self-consistency through both TM-score (scTM, higher is better) and \(\mathcal{C}_{\alpha}\)-RMSD (scRMSD, lower is better). Chroma reports using scTM\(>0.5\) as the designability criterion. However, it was shown that scRMSD\(<2\) Å provides a more stringent filter, particularly for long (e.g. 600 amino acid) backbones, on which a scTM of \(0.75\) can be attained by structurally very different backbones (Watson et al., 2022). **Diversity.** We quantify the diversity of backbones sampled by \(\mathrm{FrameDiff}\) through the number of distinct structural clusters. In particular, for a collection of backbone samples we use MaxCluster (Herbert and Sternberg, 2008) to hierarchically cluster backbones with a 0.5 TM-score threshold.
We report diversity as the proportion of unique clusters: (number of clusters) / (number of samples). **Novelty.** We assess the ability of \(\mathrm{FrameDiff}\) to generalize beyond the training set and produce novel backbones by comparing the similarity to known structures in the PDB. We use FoldSeek (van Kempen et al., 2022) to search for similar structures and report the highest TM-score of each sample to any chain in the PDB, which we refer to as pdbTM. ### Results We analyze \(\mathrm{FrameDiff}\) monomer samples on designability, diversity, and novelty. On designability, we briefly compare \(\mathrm{FrameDiff}\)'s samples with the backbone generation diffusion models Chroma and RFdiffusion. However, we note that the training and evaluation set-ups are significantly different across \(\mathrm{FrameDiff}\), Chroma, and RFdiffusion. Using scTM\(>0.5\) as the designability criterion, Chroma reported designability of 55% with 100 designed sequences (\(N_{\mathrm{seq}}=100\)). Lengths are between 100 and 500 and sampled with probability proportional to "1/length". However, this heavily biases performance towards shorter lengths and leads to additional length variability across evaluations. Instead, we sample 10 backbones at every length [100, 105,..., 495, 500] in intervals of 5 (810 total samples) such that lengths are fixed and distributed uniformly. Table 1 reports \(\mathrm{FrameDiff}\) metrics as we vary different sampling parameters. We notice a stark improvement in designability by lowering the noise scale to \(\zeta=0.5\), at the cost of lower diversity. Increasing \(N_{\mathrm{seq}}\) also improves designability but at a significant compute cost. The reported results use \(N_{\rm steps}=500\); however, decreasing to \(N_{\rm steps}=100\) with a low noise scale still resulted in designable backbones. With \(N_{\rm steps}=100\), generation of a 100 amino acid backbone takes 4.4 seconds on an A100 GPU; compared to RFdiffusion, this is more than an order of magnitude speed-up.7 Footnote 7: Watson et al. (2022) report 150 seconds (34-fold slower) for 100 amino acid backbones on an A4000 GPU. Using \(\zeta=1.0\), \(N_{\rm steps}=500\), \(N_{\rm seq}=8\), we perform ablations of self-conditioning and the auxiliary losses described in Sec. 4.2. The best model incorporates all components. In the last row, we explored removing \(\mathcal{L}_{\rm dsm}\) from Eq. (8) and found it to be necessary for achieving any designable samples. We leave hyperparameter searches to future work. In Fig. 3A, we evaluate scRMSD across four lengths. \(\mathrm{FrameDiff}\) is able to generate designable samples without pretraining; by contrast, RFdiffusion demonstrated the capacity to generate designable samples only when initialized with pre-trained weights. More training data (i.e. training on complexes) and neural network parameters could help close the gap to RFdiffusion's reported performance. Finally, RFdiffusion uses an all-to-all pairwise TM-align to measure diversity of its samples, with clustering at a 0.6 TM-score threshold. We perform an equivalent diversity evaluation using MaxCluster with a 0.6 TM-score threshold in Table 3, where we find a high degree of diversity (\(>\)0.5) that is comparable with RFdiffusion. App. J.4 shows more results and visualizations. We next investigated the similarity of each sample to known structures in the PDB. In Fig. 3B, we plot the novelty (pdbTM) as a function of designability (scRMSD). As expected, designability decreases with longer lengths. Samples with low scRMSD tend to have high similarity with the PDB.
Our interest is in the lower left-hand quadrant where scRMSD \(<2.0\) and pdbTM \(<0.6\). Fig. 3C illustrates two examples of \(\mathrm{FrameDiff}\) samples that are designable and novel. We additionally find ESMFold to be highly confident, predicted LDDT (pLDDT) \(>0.7\), for these samples. Our experiments indicate \(\mathrm{FrameDiff}\) is capable of learning complex distributions over protein monomer backbones that are designable, diverse, and in some cases novel compared to known protein structures. When used with a decreased noise scale, \(75\%\) of samples across a range of lengths were designable by scTM\(>0.5\); by contrast, all prior works reporting this metric that do not involve pretrained networks (see Sec. 6) have reported below 55% designability. However, due to differences in training and evaluation across these methods and ours, we refrain from making state-of-the-art claims. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Noise scale \(\zeta\) & 1.0 & 0.5 & 0.1 & 0.1 & 0.1 \\ \(N_{\rm steps}\) & 500 & 500 & 500 & 500 & 100 \\ \(N_{\rm seq}\) & 8 & 8 & 8 & 100 & 8 \\ \hline \(>0.5\) scTM (\(\uparrow\)) & 49\% & 74\% & 75\% & 84\% & 74\% \\ \(<2\) Å scRMSD (\(\uparrow\)) & 11\% & 23\% & 28\% & 40\% & 24\% \\ Diversity (\(\uparrow\)) & 0.75 & 0.56 & 0.53 & 0.54 & 0.55 \\ \hline \hline \end{tabular} \end{table} Table 1: \(\mathrm{FrameDiff}\) sample metrics. \begin{table} \begin{tabular}{c|c c c c} \hline \hline \multirow{2}{*}{\(>0.5\) scTM (\(\uparrow\))} & Self cond. & \(\mathcal{L}_{\rm 2D}\) & \(\mathcal{L}_{\rm bb}\) & \(\mathcal{L}_{\rm dsm}\) \\ \hline 49\% & ✓ & ✓ & ✓ & ✓ \\ 42\% & & ✓ & ✓ & ✓ \\ 22\% & & & ✓ & ✓ \\ 16\% & & & & ✓ \\ 0\% & ✓ & ✓ & ✓ & \\ \hline \hline \end{tabular} \end{table} Table 2: \(\mathrm{FrameDiff}\) ablations. Figure 3: Designability, diversity, and novelty of \(\mathrm{FrameDiff}\) generated backbones with \(\zeta=0.1\), \(N_{\rm steps}=500\), \(N_{\rm seq}=100\). **(A)** scRMSD based on 100 backbone samples of each length 70, 100, 200, 300 for \(N_{\rm seq}=8,100\), plotted in the same manner as done in RFdiffusion. **(B)** Scatter plot of designability (scRMSD) vs. novelty (pdbTM) across lengths. **(C)** Selected samples from panel (B) that are novel and highly designable. Left: sampled backbones from FrameDiff. Middle: best ESMFold predictions with high confidence (pLDDT). Right: samples aligned with their closest PDB chain. ## 6 Related work **Diffusion models on proteins.** Past works have developed diffusion models over different representations of protein structures without pretraining (Wu et al., 2022; Trippe et al., 2022; Anand and Achim, 2022; Qiao et al., 2022; Ingraham et al., 2022). Out of these methods, Chroma (Ingraham et al., 2022) reported the highest designability metric by diffusing over backbone atoms with a non-isotropic diffusion based on statistically determined covariance constraints. Compared to these works, we develop a principled \(\mathrm{SE}(3)\) diffusion framework over protein backbones that demonstrates improved sample quality over methods that do not use \(\mathrm{SE}(3)\) diffusion. Most similar to our work is RFdiffusion (Watson et al., 2022), which formulated the same forward diffusion process over \(\mathrm{SE}(3)^{N}\), but with modifications to the rotation loss and reverse step that deviate from theory.
While not outperforming RFdiffusion, FrameDiff enjoys several benefits: it is principled, has 1/4 the number of neural network weights, and does not require expensive pretraining on protein structure prediction. **Diffusion models on manifolds.** A general framework for continuous diffusion models on manifolds was first introduced in De Bortoli et al. (2022), extending the work of Song et al. (2021) to Riemannian manifolds. Concurrently, Huang et al. (2022) introduced a similar framework extending the maximum likelihood approach of Huang et al. (2021). Some manifolds have been considered in the setting of diffusion models for specific applications. In particular, Jing et al. (2022) consider the product of tori for molecular conformer generation, Corso et al. (2022) the product space \(\mathbb{R}^{3}\times\mathrm{SO}(3)\times\mathrm{SO}(2)^{m}\) for protein docking applications, and Leach et al. (2022) \(\mathrm{SO}(3)\) for rotational alignment. Finally, we highlight the work of Urain et al. (2022), who introduce \(\mathrm{SE}(3)\)-diffusion models for robotics applications. One major theoretical and methodological difference with the present work is that we develop a principled diffusion model on this Lie group, ensuring that at optimality we recover the exact backward process. ## 7 Discussion Protein backbone generation is a fundamental task in _de novo_ protein design. Motivated by the success of the rigid-body frame representation of proteins, we developed an \(\mathrm{SE}(3)\)-invariant diffusion model on \(\mathrm{SE}(3)^{N}\) for protein modelling. We laid the theoretical foundations of this method and introduced \(\mathrm{FrameDiff}\), an instance of this framework, equipped with an \(\mathrm{SE}(3)\)-equivariant score network which need not be pretrained. We empirically demonstrated \(\mathrm{FrameDiff}\)'s ability to generate designable and diverse samples. Even with stringent filters, we find our samples can generalize beyond the PDB, although we note that claims of generating novel proteins require experimental characterization. Our results are competitive with those reported in Chroma and RFdiffusion. However, differences in training and evaluation confound rigorous comparisons between the methods. One important research direction is to extend \(\mathrm{FrameDiff}\) to conditional generative modeling tasks, such as probabilistic sequence-to-structure prediction, which is key to capturing functional motion (Lane, 2023), but also probabilistic scaffold design given a functional motif (Trippe et al., 2022). In terms of methodology, we stress the importance of understanding and questioning the benefits and necessity of pretraining with protein structure prediction, which heavily relies on evolutionary couplings that are unused in _de novo_ protein design. Given our preliminary results, we hypothesize that scaling up \(\mathrm{FrameDiff}\) to train with more data, together with improvements in sampling designable backbones, could overcome the need for pretraining with protein structure prediction. Finally, we highlight that the key aspects of our theoretical contributions (the general form of Brownian motions that is amenable to DSM, along with sub-group invariance) are applicable to general Lie groups. Of particular interest are \(\mathrm{SO}(3)\) in robotics (Barfoot et al., 2011) and \(\mathrm{SU}(2)\) in Lattice QCD (Albergo et al., 2021).
## Software and data A core set of tools in Python (Van Rossum & Drake Jr, 1995) enabled this work, including PyTorch (Paszke et al., 2019), Hydra (Yadan, 2019), Numpy (Harris et al., 2020), Scipy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), Pandas (McKinney et al., 2010) and OpenFold (Ahdritz et al., 2022). ## Acknowledgements The authors thank Hannes Stark, Gabriele Corso, Bowen Jing, Felix Faltings, David Juergens, Joseph Watson, Nathaniel Bennett, Luhuan Wu and David Baker for helpful discussion and feedback. EM is supported by an EPSRC Prosperity Partnership EP/T005386/1 between Microsoft Research and the University of Cambridge. JY is supported in part by an NSF-GRFP. JY, RB, and TJ acknowledge support from NSF Expeditions grant (award 1918839: Collaborative Research: Understanding the World Through Code), Machine Learning for Pharmaceutical Discovery and Synthesis (MLPDS) consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging (DOMANE) threats program, the DARPA Accelerated Molecular Discovery program and the Sanofi Computational Antibody Design grant. AD acknowledges support from EPSRC grants EP/R034710/1 and EP/R018561/1.
2303.15422
KPEval: Towards Fine-Grained Semantic-Based Keyphrase Evaluation
Despite the significant advancements in keyphrase extraction and keyphrase generation methods, the predominant approach for evaluation mainly relies on exact matching with human references. This scheme fails to recognize systems that generate keyphrases semantically equivalent to the references or diverse keyphrases that carry practical utility. To better assess the capability of keyphrase systems, we propose KPEval, a comprehensive evaluation framework consisting of four critical aspects: reference agreement, faithfulness, diversity, and utility. For each aspect, we design semantic-based metrics to reflect the evaluation objectives. Meta-evaluation studies demonstrate that our evaluation strategy correlates better with human preferences compared to a range of previously proposed metrics. Using KPEval, we re-evaluate 23 keyphrase systems and discover that (1) established model comparison results have blind-spots especially when considering reference-free evaluation; (2) large language models are underestimated by prior evaluation works; and (3) there is no single best model that can excel in all the aspects.
Di Wu, Da Yin, Kai-Wei Chang
2023-03-27T17:45:38Z
http://arxiv.org/abs/2303.15422v4
# KPEval: Towards Fine-grained Semantic-based Evaluation of Keyphrase Extraction and Generation Systems ###### Abstract Despite the significant advancements in keyphrase extraction and keyphrase generation methods, the predominant approach for evaluation only relies on exact matching with human references and disregards reference-free attributes. This scheme fails to recognize systems that generate keyphrases that are semantically equivalent to the references or keyphrases that have practical utility. To better understand the strengths and weaknesses of different keyphrase systems, we propose a comprehensive evaluation framework consisting of six critical dimensions: naturalness, faithfulness, saliency, coverage, diversity, and utility. For each dimension, we discuss the desiderata and design semantic-based metrics that align with the evaluation objectives. Rigorous meta-evaluation studies demonstrate that our evaluation strategy correlates better with human preferences compared to a range of previously used metrics. Using this framework, we re-evaluate 18 keyphrase systems and further discover that (1) the best model differs in different dimensions, with pre-trained language models achieving the best in most dimensions; (2) the utility in downstream tasks does not always correlate well with reference-based metrics; and (3) large language models exhibit a strong performance in reference-free evaluation. ## 1 Introduction Building systems to automatically predict keyphrases of a document is a long-standing research problem in the Information Retrieval (IR) and the Natural Language Processing (NLP) communities Witten et al. (1999); Hulth (2003); Meng et al. (2017). With the availability of large annotated keyphrase datasets, pre-trained text representations, and deep learning-based methods, numerous keyphrase systems have been proposed, claiming significant performance improvements (Meng et al. (2017); Boudin (2018); Chan et al. (2019), _i.a._). However, their performance assessment is largely based on performing exact matching between stemmed predictions and human references. This approach is known to be insufficient, as it fails to credit synonyms and parent/child concepts that are spelled differently from the references Zesch and Gurevych (2009). Despite this, there has been limited progress in adopting better metrics. Our literature survey of 64 papers between 2017 and 2022 (appendix A) suggests that exact matching is still the primary evaluation method (used by 63/64 papers) and only 24/64 papers calculate other metrics to complement exact matching. In addition, these studies mainly focus on _reference-based_ evaluation, while ignoring _reference-free_ measures such as diversity Wang et al. (2016); Bahuleyan and El Asri (2020) or retrieval effectiveness Boudin and Gallina (2021), which might be more relevant for real-world applications of keyphrase systems. To address the drawbacks of exact matching, a few approaches have been proposed. One branch of solutions uses n-gram matching Kim et al. (2010), approximate matching Zesch and Gurevych (2009); Luo et al. (2021), or evaluating with name variations Chan et al. (2019). By design, these methods still struggle to recognize synonyms of keyphrases. Another approach relies on semantic representations for matching Jarmasz and Barriere (2004), including metrics based on pre-trained language models such as BertScore Koto et al. (2022); Glazkova and Morozov (2022). However, BertScore is a token-level matching score that may not adequately capture the semantics of individual keyphrases.
In fact, we find that its correlation with human judgments is even weaker than exact matching. To better understand the strengths of different keyphrase systems, in this paper, we propose a novel _fine-grained semantic-based_ keyphrase evaluation framework that addresses the limitations of existing methods. This framework considers six dimensions: _naturalness_, _faithfulness_, _saliency_, _coverage_, _diversity_, and _utility_. For each dimension, we operationalize its measurement with advanced semantic-based metrics. To build strong phrase representations for evaluation, we design a novel keyphrase-aware contrastive learning scheme. Through comprehensive human studies in the reference-based evaluation setting, we find that (1) label variations are prevalent in keyphrase annotation and exact matching struggles to handle them, and that (2) the proposed semantic matching metric outperforms exact matching by more than 0.15 absolute points in Kendall's Tau in terms of the correlation with humans in evaluating saliency and coverage. More crucially, when used to match a phrase against a set of references, exact matching has a much lower agreement with humans compared to semantic matching, suggesting the latter's superior explainability. We also confirm that our phrase embedding model significantly outperforms a range of publicly available models in assigning high similarity to synonym pairs and low similarity to unrelated phrases, which is crucial for evaluating keyphrases. Using this framework, we benchmark 18 unsupervised, supervised, and black-box keyphrase extraction and generation systems on two datasets, KP20k (Meng et al., 2017) and KPTimes (Gallina et al., 2019). Notably, we find that (1) different types of models excel in different dimensions, with pre-trained models achieving the best in most dimensions; (2) the utility in downstream applications does not always agree with reference-based metrics; and (3) GPT-3.5 (Ouyang et al., 2022) exhibits near state-of-the-art performance in a range of reference-free metrics. We hope our study can provide a novel perspective for rethinking the goal, design, and evaluation of keyphrase systems. Our implementation is available upon request and will be released when the paper is formally published. Figure 1: An illustration of the proposed framework. We consider six desired properties of keyphrase systems and design semantic-based metrics to faithfully evaluate them. ## 2 Preliminaries In this section, we outline the keyphrase systems and datasets we will study in this paper and discuss their related work. We select 18 models from three main categories, as summarized in Table 5 in the appendix. Note that our goal is to include strong systems of diverse types rather than exhaustively benchmarking all available systems. ### Keyphrase Systems **Keyphrase extraction systems.** Traditionally, keyphrase extraction is done in an unsupervised manner where noun phrase candidates are ranked with heuristics (Hulth, 2003; Mihalcea and Tarau, 2004; Wan and Xiao, 2008; Bougouin et al., 2013; Boudin, 2018; Liang et al., 2021; Ding and Luo, 2021; Zhang et al., 2022). Supervised methods, on the other hand, use feature-based ranking (Witten et al., 1999), sequence labeling (Zhang et al., 2016; Luan et al., 2017; Sahrawat et al., 2019), and task-specific objectives with pre-trained language models (Song et al., 2021, 2022).
We evaluate five keyphrase extraction methods: **[M1]** TF-IDF (Jones, 1972), **[M2]** TextRank (Mihalcea and Tarau, 2004), **[M3]** MultipartiteRank (Boudin, 2018), **[M4]** Kea (Witten et al., 1999), and **[M5]** BERT+CRF (Sahrawat et al., 2019; Wu et al., 2022b). **Keyphrase generation systems.** The keyphrase generation task, introduced by Meng et al. (2017), requires a model to generate keyphrases that may not be present in the input document. Models are usually trained using three types of supervised objectives: _One2One_ - generating one keyphrase given a document (Meng et al., 2017); _One2Seq_ - generating a sequence of keyphrases given a document (Yuan et al., 2020); or _One2Set_ - generating a set of keyphrases given a document (Ye et al., 2021). Various approaches have been developed, including incorporating linguistic constraints (Zhao and Zhang, 2019), exploiting semi-supervised learning signals from titles (Ye and Wang, 2018), hierarchical modeling of phrases and words (Chen et al., 2020), incorporating reinforcement learning (Chan et al., 2019; Luo et al., 2021) or GANs (Swaminathan et al., 2020), unifying keyphrase extraction with generation (Chen et al., 2019; Ahmad et al., 2021), utilizing pre-trained language models (Chowdhury et al., 2022; Kulkarni et al., 2022; Gao et al., 2022; Wu et al., 2022b,c), and unsupervised keyphrase generation (Shen et al., 2022). We select nine keyphrase generation methods to evaluate: **[M6]** CatSeq (Yuan et al., 2020), **[M7]** CatSeqTG+2RF1 (Chan et al., 2019), **[M8]** ExHiRD-h (Chen et al., 2020), **[M9]** Transformer (Ye et al., 2021), **[M10]** SetTrans (Ye et al., 2021), **[M11]** SciBERT-G (Wu et al., 2022b), **[M12]** BART-large (Wu et al., 2022b), **[M13]** KeyBART (Kulkarni et al., 2022), and **[M14]** SciBART-large fine-tuned on OAGKX (Wu et al., 2022b). **Large language models and APIs.** Recently, the emergent ability of pre-trained large language models for few-shot in-context learning has gained much attention (Brown et al., 2020). For keyphrase generation, we benchmark GPT-3.5, which is trained on instruction following with human preferences (Ouyang et al., 2022). We discuss our prompting strategy in detail in appendix F. Inspired by Ribeiro et al. (2020), we also include two keyphrase extraction APIs from Amazon and Microsoft. We denote them as: **[M15]** zero-shot prompting text-davinci-003, **[M16]** five-shot prompting text-davinci-003, **[M17]** the Amazon Comprehend API, and **[M18]** the Azure Cognitive Services API. ### Evaluation Setup **Datasets.** We use two widely-used keyphrase datasets: KP20k (Meng et al., 2017) with over 500k computer science papers from online digital libraries and KPTimes (Gallina et al., 2019) with over 250k news documents collected from the New York Times. KP20k's keyphrase annotations are obtained from the metadata of the papers, while KPTimes' keyphrases are assigned by expert editors. We report performance on KP20k's 20k test set and on KPTimes' 10k in-distribution test set. **Implementation details.** We provide implementation details of the evaluated keyphrase systems in appendix F. We solicit results from the original authors and carefully re-implement the systems when original results are unavailable. To ensure a fair comparison, we only consider the top 10 predictions from M1-M4 and M17-M18.
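Since exact matching after stemming recurs as the baseline throughout the paper, a minimal sketch of document-level exact-match F1 may be helpful; the Porter stemmer from NLTK stands in for the stemmer, and the helper names are ours.

```python
from nltk.stem import PorterStemmer

_stem = PorterStemmer().stem

def _norm(phrase):
    """Lowercase and stem each token of a keyphrase."""
    return " ".join(_stem(w) for w in phrase.lower().split())

def exact_match_f1(preds, refs):
    """Document-level F1 between stemmed predictions and references."""
    p = {_norm(x) for x in preds}
    r = {_norm(x) for x in refs}
    tp = len(p & r)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(p), tp / len(r)
    return 2 * prec * rec / (prec + rec)
```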
## 3 A Fine-grained Evaluation Framework This section defines the keyphrase evaluation problem and proposes a multi-dimensional evaluation framework. We then introduce the metrics to evaluate each of the dimensions, including (1) semantic-based matching for saliency and coverage, (2) lexical and semantic overlap for diversity, (3) retrieval-based evaluation for utility, and (4) model-based evaluation for naturalness and faithfulness. ### Problem Formulation The _keyphrase evaluation_ problem for a single document can be formulated with a 4-element tuple \((\mathcal{X},\mathcal{Y},\mathcal{P},\mathcal{C})\). \(\mathcal{X}\) is the input document. \(\mathcal{Y}=\{y_{1},...,y_{n}\}\) is a list of \(n\) reference phrases written by humans. \(\mathcal{P}=\{p_{1},...,p_{m}\}\) is a list of \(m\) predictions made by a model \(\mathcal{M}\) on \(\mathcal{X}\). To focus on semantic-based evaluation, we do not differentiate between keyphrases that appear in \(\mathcal{X}\) ("present keyphrases") and those that do not ("absent keyphrases"). As the definition of keyphrases is domain-dependent, we introduce \(\mathcal{C}\), a set of corpus documents, to represent the domain. A _keyphrase metric_ is a function \(f\) whose value \(f(\mathcal{X},\mathcal{Y},\mathcal{P},\mathcal{C})\in\mathbb{R}\) reflects a certain quality of \(\mathcal{M}\). \(f\) is a phrase-level metric if it is calculated as the average of metric values calculated on each \(p_{i}\in\mathcal{P}\). \(f\) is reference-free if its calculation is independent of \(\mathcal{Y}\), or reference-based otherwise. In practice, the final score for \(\mathcal{M}\) is often the averaged score over a set of testing documents. ### How to define a good keyphrase system? Deciding the evaluation goals is crucial for building evaluation systems. We argue that the following six unique properties should be considered when evaluating keyphrase extraction or generation systems: 1. _Naturalness_: \(p_{i}\) is a natural and grammatically correct phrase. 2. _Faithfulness_: all keyphrases in \(\mathcal{P}\) are covered in \(\mathcal{X}\), i.e., free of hallucination. 3. _Saliency_: \(p_{i}\) carries salient information associated with \(\mathcal{X}\). As the saliency varies with the domain, we use \(\mathcal{Y}\) as a proxy for the set of salient information associated with \(\mathcal{X}\). 4. _Coverage_: how much salient information in \(\mathcal{X}\) is covered by \(\mathcal{P}\). 5. _Diversity_: the degree to which \(\mathcal{P}\) contains diverse concepts with minimal repetition. 6. _Utility_: the extent to which \(\mathcal{P}\) facilitates downstream applications, such as retrieval. Table 1 outlines the assumptions of these properties. Naturalness, faithfulness, and saliency are targeted at evaluating individual phrases, while coverage, diversity, and utility evaluate the entire set \(\mathcal{P}\). Faithfulness and utility require \(\mathcal{X}\), while saliency and coverage require \(\mathcal{Y}\). To gauge the utility, \(\mathcal{C}\) is required for specifying the domain. Next, we discuss how we operationalize these dimensions. ### Saliency and Coverage _Desiderata_: _semantically similar phrases should be credited_, _matching should be at phrase level_. Precision and recall with exact matching can capture saliency and coverage, but they struggle to deal with semantic similarity. Conversely, BertScore (Zhang et al., 2020) calculated with concatenated predictions and references captures semantic similarity, but its token-wise matching with contextual representations built in one encoding pass prevents evaluating the semantic similarity among individual phrases.
To enjoy the advantage of both, we introduce _phrase-level semantic-based_ matching and define semantic precision (\(SemP\)), recall (\(SemR\)), and F1 (\(SemF1\)) as follows: \[SemP(\mathcal{P},\mathcal{Y})=\frac{\sum_{p\in\mathcal{P}}\max_{y\in\mathcal{Y}}(\mathds{1}(sim(p,y)>\alpha)\cdot sim(p,y))}{|\mathcal{P}|}\] \[SemR(\mathcal{P},\mathcal{Y})=\frac{\sum_{y\in\mathcal{Y}}\max_{p\in\mathcal{P}}(\mathds{1}(sim(p,y)>\alpha)\cdot sim(p,y))}{|\mathcal{Y}|}\] \[SemF1(\mathcal{P},\mathcal{Y})=\frac{2\cdot SemP(\mathcal{P},\mathcal{Y})\cdot SemR(\mathcal{P},\mathcal{Y})}{SemP(\mathcal{P},\mathcal{Y})+SemR(\mathcal{P},\mathcal{Y})}\] where \(sim\) is a function defining the similarity between the representations of two phrases and \(\alpha\) is a hyperparameter to filter out the noise in \(sim\)1. Footnote 1: We use \(\alpha=0\) throughout the paper. \(\alpha\) is included in the formulation to satisfy application-specific needs. To quantify whether \(\mathcal{P}\) covers the overall semantics of \(\mathcal{Y}\), we also introduce an overall semantic coverage index \(SemCov\), defined as \[SemCov(\mathcal{P},\mathcal{Y})=sim(union(\mathcal{P}),union(\mathcal{Y}))\] where \(union\) is a function that computes a representation for a set of phrases. To define the metrics, the key ingredient is the \(sim\) function, which encodes two phrases and computes their similarity. While alternatives for the representation space exist, such as hyperbolic embedding [22], we choose dense embedding and cosine similarity to take advantage of available models pre-trained on large corpora: \[sim(p,q)=cos\_sim(h_{p},h_{q})=\frac{h_{p}^{T}h_{q}}{\|h_{p}\|\cdot\|h_{q}\|},\] \[union(\mathcal{P})=max\_pooling(h_{p_{1}},...,h_{p_{m}}),\] where \(h_{p}\) and \(h_{q}\) are the representations for phrases \(p\) and \(q\) obtained by average pooling the embedding model's hidden state of the last layer. For \(union\), we use max pooling to preserve the salient dimensions from each phrase. For the embedding model, we fine-tune a pre-trained sentence-level paraphrase model from Reimers and Gurevych (2019)2 using a keyphrase-aware contrastive learning objective based on SimCSE (Gao et al., 2021). Footnote 2: We download and fine-tune the model from [https://huggingface.co/sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). \begin{table} \begin{tabular}{l|c c c c} Property & single kp & input & reference & corpus \\ \hline _Naturalness_ & ✓ & ✗ & ✗ & ✗ \\ _Faithfulness_ & ✓ & ✓ & ✗ & ✗ \\ _Saliency_ & ✓ & ✗ & ✓ & ✗ \\ _Coverage_ & ✗ & ✗ & ✓ & ✗ \\ _Diversity_ & ✗ & ✗ & ✗ & ✗ \\ _Utility_ & ✗ & ✓ & ✗ & ✓ \\ \hline \end{tabular} \end{table} Table 1: Assumptions underlying the important properties of keyphrase systems: whether they operate on a single keyphrase (single kp) and whether they require the input, the reference, or the entire corpus to evaluate.
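A minimal sketch of \(SemP\), \(SemR\), and \(SemF1\) as defined above; `embed` is a hypothetical stand-in for the phrase embedding model (any encoder that returns one vector per phrase works).

```python
import numpy as np

def sem_prf(preds, refs, embed, alpha=0.0):
    """SemP, SemR, SemF1 for one document.
    preds, refs: lists of keyphrase strings; embed: list of phrases -> (n, d) array."""
    P, Y = embed(preds), embed(refs)
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    S = P @ Y.T                          # pairwise cosine similarities
    S = np.where(S > alpha, S, 0.0)      # indicator 1(sim > alpha) * sim
    sem_p = S.max(axis=1).mean()         # best reference for each prediction
    sem_r = S.max(axis=0).mean()         # best prediction for each reference
    sem_f1 = 2 * sem_p * sem_r / (sem_p + sem_r + 1e-12)
    return sem_p, sem_r, sem_f1
```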
We fine-tune the model on \(\mathcal{L}_{simcse}\) using 1.04 million keyphrases from the training set of KP20k Meng et al. (2017), KPTimes Gallina et al. (2019), StackEx Yuan et al. (2020), and OpenKP Xiong et al. (2019). The detailed training setup and hyper-parameters are presented in the appendix C. ### Diversity _Desiderata: reward more semantically distinct concepts and penalize self-repetitions._ Generating unique concepts with minimal repetition is a desirable property of keyphrase systems. We modify one lexical and one semantic diversity metric used in Bahuleyan and El Asri (2020). For the lexical metric, we use \(dup\_token\_ratio\), the percentage of duplicate tokens after stemming. For the semantic metric, we calculate \(emb\_sim\), the average of pairwise similarity, using the same phrase embedding model for saliency and coverage. \[emb\_sim(\mathcal{P})=\tfrac{\sum_{i=1}^{M}\sum_{j=1}^{M}1(i\neq j)sim(p_{i}, p_{j})}{M(M-1)}\] Different from Bahuleyan and El Asri (2020), diversity evaluation is reference-free in our approach. While penalizing self-repetitions, our metrics do not penalize generating excessive new concepts3. Footnote 3: As a result, metrics such as the orthogonal regularization term used by CatSeqD Yuan et al. (2020) are not suitable for our purposes, as the term naturally increases with \(|\mathcal{P}|\). ### Utility _Desiderata: reward predictions that enable effective and efficient retrieval of the document._ Information Retrieval (IR) is an important downstream application for keyphrases Jones and Staveley (1999); Song et al. (2006); Kim et al. (2013). To directly evaluate whether \(\mathcal{M}\) can generate useful keyphrases for IR-related downstream tasks, we propose two metrics to measure (1) retrieval effectiveness and (2) retrieval efficiency of \(\mathcal{P}\). Different from the query expansion setup in Boudin and Gallina (2021), we use a setup similar to Wu et al. (2022) where the keyphrases are used as queries to retrieve \(\mathcal{X}\) from \(\mathcal{C}\). We measure the retrieval _effectiveness_ using the Reciprocal Rank at \(k\) (\(RR@k\)), which is the reciprocal of \(\mathcal{X}\)'s rank if \(\mathcal{X}\) can be retrieved in top \(k\) documents using \(\mathcal{P}\) as the query and 0 otherwise. \(RR@k\) rewards high retrieval effectiveness, but does not require important phrases to be ranked high in \(\mathcal{P}\). As a result, a sub-optimal model may achieve high \(RR@k\) inefficiently by generating a lot of moderately salient keyphrases. To detect this behavior, we introduce a novel _efficiency_ metric \(Spare_{base}@k\), defined as \[Spare_{base}@k(\mathcal{X},\mathcal{P},\mathcal{C})=1-\tfrac{\min(base,j)}{base},\] where \(j\) is the minimum index such that querying with \(\{p_{1},\dots,p_{j}\}\) allows \(\mathcal{X}\) to be retrieved in top \(k\) documents. \(base\) is the maximum number of phrases to consider. Intuitively, \(\mathcal{P}\) with a high \(Spare\) is able to retrieve \(\mathcal{X}\) with fewer top-ranked phrases, thus having a higher retrieval efficiency. We compute the scores by averaging results obtained from three retrieval systems: (1) BM25 Robertson and Walker (1994), (2) a single-document encoder for dense similarity search4, and (3) re-ranking (2)'s results using a cross-encoder5. 
### Naturalness and Faithfulness _Desiderata: ill-formed phrases and hallucinations should be penalized._ With the advance of neural methods that may produce more non-natural phrases or factual hallucinations, it is increasingly crucial to measure the naturalness and faithfulness of the predictions. To evaluate these two dimensions, we introduce a novel formulation that leverages metric models pre-trained to align with human preferences. Specifically, we use UniEval Zhong et al. (2022), which evaluates in a boolean QA fashion: the score is the probability of the model generating "Yes" given a prompt, normalized by the probability of the model generating either "Yes" or "No". The model we use is pre-trained on a range of tasks including natural language inference, question answering, and linguistically related tasks, and then fine-tuned on evaluating the coherence, consistency, fluency, and relevance of summaries (Zhong et al., 2022)6. We leverage its expertise in fluency and consistency to evaluate the naturalness and faithfulness of keyphrases. We convert the evaluation tasks into sentence-level inputs to better align with the model's knowledge. Footnote 6: We use the model hosted at [https://huggingface.co/MingZhong/unieval-sum](https://huggingface.co/MingZhong/unieval-sum). We use the following prompt for assessing the naturalness of \(p_{i}\): question: Is this a natural utterance? </s> utterance: This is an article about p_i. To evaluate the faithfulness of \(p_{i}\) with respect to the input \(\mathcal{X}\), we use the following prompt: question: Is this claim consistent with the document? </s> summary: the concept p_i is mentioned or described in the document. </s> document: X
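A minimal sketch of the boolean-QA scoring described above, assuming the checkpoint exposes the standard T5 seq2seq interface; this is a simplification of UniEval's released implementation, not a drop-in replacement.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("MingZhong/unieval-sum")
model = AutoModelForSeq2SeqLM.from_pretrained("MingZhong/unieval-sum").eval()

def boolean_qa_score(prompt: str) -> float:
    """P('Yes') / (P('Yes') + P('No')) at the first decoding step."""
    enc = tok(prompt, return_tensors="pt", truncation=True, max_length=1024)
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**enc, decoder_input_ids=start).logits[0, -1]
    yes_id = tok("Yes", add_special_tokens=False).input_ids[0]
    no_id = tok("No", add_special_tokens=False).input_ids[0]
    probs = torch.softmax(logits[[yes_id, no_id]], dim=0)
    return probs[0].item()

# Example: naturalness of a phrase, using the prompt template above.
# boolean_qa_score("question: Is this a natural utterance? </s> "
#                  "utterance: This is an article about deep learning.")
```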
In summary, our framework evaluates six key properties of keyphrase systems, offering fine-grained semantic evaluation with high interpretability. Next, we present human studies supporting our evaluation strategies and discuss empirical results from re-evaluating keyphrase systems. ## 4 Label variation challenges keyphrase evaluation: an annotation study We conduct a human study that simulates an annotation setup to investigate three questions: (1) Is the definition of "keyphrases" agreed upon by annotators? (2) How much do keyphrase annotations vary? (3) How does this affect keyphrase evaluation? We select the first 50 test documents from KP20k and KPTimes. Each document's labels are combined with keyphrases predicted by four systems: M5 (BERT+CRF), M6 (CatSeq), M10 (SetTrans), and M13/M14 (in-domain BART with task-adaptive pre-training). Then, three annotators are presented with the title, body, and these seed phrases re-ordered alphabetically. They are then asked to write keyphrases that capture the most salient information. We explicitly clarify that they may select from the given phrases or write new ones. We use Amazon Mechanical Turk for this study, and appendix G presents further details. **This variation necessitates using metrics beyond exact matching.** Figure 4 presents the scores of the human annotations and the models' outputs calculated with the references provided by the dataset. We observe that F1@M based on exact matching fails to credit human-written keyphrases when they are not identical to the references, whereas semantic coverage suggests that the annotations have a high semantic overlap with the references. This result further motivates the need for fine-grained and semantic-based evaluation. ## 5 Matching with label variation: a meta-evaluation study After confirming the label variation in keyphrase annotations, we conduct a rigorous meta-evaluation study of reference-based evaluation strategies for saliency and coverage. Specifically, we compare the precision, recall, and F1 scores of reference-based metrics against the human-labeled counterparts. In addition, we compare exact matching, substring matching, BertScore precision, and semantic matching with different embeddings for matching a phrase to a set of reference phrases. ### Experiment setup We conduct a reference-based human evaluation of five representative models (M3, M6, M10, M13/M14, and M16) on KP20k and KPTimes. We select the first 50 documents from each dataset and ask three annotators to rate the semantic similarity between each predicted phrase and the closest phrase in the references, as well as between each reference phrase and its closest predicted phrase. The experiments are conducted on Amazon Mechanical Turk. Details about the protocol and pricing are provided in appendix H. We aggregate human annotations to **compute document-level precision, recall, and F1 scores**, and **semantic similarity decisions** for matching a single phrase to a set of phrases. We collect 1500 document-level annotations with 13401 phrase-level decisions. The interval Krippendorff's alpha is 0.735 for KP20k and 0.788 for KPTimes for matching predictions to sets of references, and 0.750 for KP20k and 0.741 for KPTimes for matching reference phrases to sets of predictions. ### Phrase-level semantic-based matching aligns well with human preference Using the document-level annotations, we evaluate seven reference-based metrics: Exact Matching, Substring Matching, R-precision (Zesch and Gurevych, 2009), Rouge-L (Lin, 2004), BertScore (Zhang et al., 2020), BartScore (Yuan et al., 2021), and the proposed Semantic Matching. Substring Matching is implemented as declaring a match between two phrases if either is a substring of the other. Note that we apply the Porter stemmer (Porter, 1980) before calculating Exact Matching, Substring Matching, and R-precision. We use input-level bootstrap resampling with 1000 samples to calculate the 95% confidence interval of Kendall's Tau for KP20k, following Deutsch et al. (2021), and report the results in Figure 6. Note that the corresponding metric's precision, recall, and F1 are used to calculate the correlation in the Precision, Recall, and F1 subplots, except for R-precision. Surprisingly, we find that exact matching outperforms Rouge-L, BertScore, and BartScore, making it a strong baseline. The proposed semantic matching achieves state-of-the-art correlation with humans in Precision and F1. Similar patterns are observed on KPTimes (appendix D). Figure 4: Humans score poorly on exact-matching F1 while outperforming models in semantic coverage. Figure 5: An example case. For human scores, we take the average among three annotators and normalize the scores to a [0,1] range. Semantic matching makes similar decisions compared to human evaluation. Next, we use the single matching decisions to compare different matching strategies.
Table 2 presents the Pearson correlation (\(r\)), Spearman correlation (\(\rho\)), and Kendall's Tau (\(\tau\)). Interestingly, we find that exact matching's high correlation with humans in the aggregated score setting may be _for the wrong reason_: phrase-level correlations suggest that exact matching significantly underperforms semantic matching, with a difference of 10 absolute points in Kendall's Tau and more than 15 absolute points in Pearson correlation and Spearman correlation. BertScore precision achieves the lowest correlation in this setting, suggesting that it is not suitable for judging keyphrases. Figure 5 presents a qualitative analysis. Although a number of predictions are semantically related to the references, exact matching gives a score of 0. On the other hand, substring matching and BertScore are overly permissive. The proposed semantic matching's output is closest to human evaluation and has a sensible breakdown at the phrase level. \(SemCov\) gives a reasonable estimation of the overall semantic coverage of the prediction. ### Benchmarking embeddings for matching Next, we investigate the embedding choice for semantic matching. Using the same set of single-phrase matching decisions, we compare Phrase-BERT Wang et al. (2021), SpanBERT Joshi et al. (2020), SimCSE Gao et al. (2021), Sentence-BERT (SBERT) Reimers and Gurevych (2019), and our phrase embedding model. In addition, using the augmented KP20k test set in Chan et al. (2019) where keyphrases are linked to their name variations (KP20k-NV), we compute the alignment-like and uniformity-like properties of the embedding spaces Wang and Isola (2020). For alignment, we report the average cosine similarity between all name variation pairs. For uniformity, we calculate the metric with 50000 random keyphrase pairs. Table 3 reports the results for comparing phrase embeddings. Our model achieves the highest correlation with humans when used for semantic matching. Its embedding also has the best ability to distinguish unrelated phrases, as shown by the highest difference between alignment and uniformity. The alignment results also shed light on the semantics of our similarity score: name variation pairs are assigned a score of 0.58 on average. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**KP20k**} & \multicolumn{3}{c}{**KPTimes**} \\ & \(r\) & \(\rho\) & \(\tau\) & \(r\) & \(\rho\) & \(\tau\) \\ \hline Exact Matching & 0.705 & 0.689 & 0.590 & 0.768 & 0.792 & 0.684 \\ Substring Matching & 0.749 & 0.735 & 0.629 & 0.787 & 0.788 & 0.681 \\ BertScore Precision & 0.562 & 0.597 & 0.436 & 0.694 & 0.729 & 0.553 \\ Semantic Matching & **0.898** & **0.884** & **0.735** & **0.926** & **0.925** & **0.794** \\ \hline \hline \end{tabular} \end{table} Table 2: Correlation between the phrase-level decisions of humans and various matching strategies. Semantic matching leads other strategies by a large margin. \begin{table} \begin{tabular}{l|c c|c c c} \hline \hline & **KP20k** & **KPTimes** & \multicolumn{3}{c}{**KP20k-NV**} \\ & \(\tau\) & \(\tau\) & Alignment & Uniformity & \(\Delta\) \\ \hline Phrase-BERT & 0.631 & 0.755 & **0.78** & 0.54 & 0.24 \\ SpanBERT & 0.562 & 0.653 & 0.71 & 0.54 & 0.17 \\ Unsup. SimCSE & 0.677 & 0.762 & 0.62 & 0.23 & 0.39 \\ Sup. SimCSE & 0.669 & 0.780 & 0.72 & 0.41 & 0.31 \\ SBERT & 0.705 & 0.780 & 0.63 & 0.11 & 0.52 \\ Ours & **0.735** & **0.794** & 0.58 & **0.02** & **0.56** \\ \hline \hline \end{tabular} \end{table} Table 3: A comparison between different phrase embedding models. Our model achieves the best matching correlation with humans, and its embedding space has the best ability to distinguish unrelated phrases.
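A minimal sketch of the two embedding-space diagnostics as used here: alignment is the mean cosine similarity over annotated name-variation pairs, and the uniformity-like score is the mean cosine similarity over random keyphrase pairs (lower is better). `embed` is the same hypothetical encoder as before.

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def alignment(variation_pairs, embed):
    """Mean cosine similarity over (phrase, name-variation) pairs."""
    return float(np.mean([cos(embed([p])[0], embed([q])[0])
                          for p, q in variation_pairs]))

def uniformity_like(phrases, embed, n_pairs=50000, seed=0):
    """Mean cosine similarity over random keyphrase pairs (lower is better)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(phrases), size=(n_pairs, 2))
    E = embed(phrases)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    return float(np.mean((E[idx[:, 0]] * E[idx[:, 1]]).sum(axis=1)))
```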
Figure 6: A comparison between the 95% confidence intervals for the Kendall's Tau between humans and automatic metrics on KP20k. Exact Matching achieves the best correlation for Recall, and the proposed Semantic Matching is the best for Precision and F1. We use input-level bootstrap resampling following Deutsch et al. (2021). ## 6 Fine-grained re-evaluation of keyphrase extraction and generation systems Using the proposed framework, we conduct a thorough re-evaluation of 18 keyphrase systems, grouped into keyphrase extraction (**KPE**, M1-M5), keyphrase generation with neural networks trained from scratch (**KPG-NN**, M6-M10), keyphrase generation with pre-trained language models (**KPG-PLM**, M11-M14), prompting **GPT-3.5** (M15-M16), and calling keyphrase extraction **APIs** (M17-M18). Table 4 presents the metric results on all dimensions. We also evaluate human-written keyphrase labels with reference-free metrics. The per-model performance is recorded in appendix E. ### Reference-based evaluation From Table 4, we first observe that the performance underestimation problem of exact matching is alleviated by semantic matching, with the best model achieving around 0.6 \(SemF1\) on KP20k and 0.8 on KPTimes. The difference between KPE and KPG models is also more clearly distinguished than with the BertScore numbers reported in Koto et al. (2022) and Glazkova and Morozov (2022). For all the reference-based metrics, we observe that the KPG-PLM family is consistently the best. However, comparing the best-performing models in both families (Table 7), we find that they achieve the same level of coverage, while KPG-PLM models are better at generating phrases with higher saliency. For GPT-3.5, we observe that it achieves a competitive coverage in both the 0-shot and 5-shot settings. The zero-shot setting achieves a low saliency as the model is not shown any demonstrations. With five examples, the saliency significantly increases. For the Amazon and Azure APIs, we find that they cannot outperform zero-shot prompting GPT-3.5 in the reference-based setting (appendix E). ### Reference-free evaluation **Naturalness.** We find that models trained on the sequence generation objective have the highest naturalness. By contrast, KPE models overall exhibit poor naturalness. Pre-trained language models such as BERT and BART do not significantly outperform models trained from scratch on KP20k or KPTimes. Notably, GPT-3.5 exhibits the best performance on KPTimes while performing worse than KPG-NN models on KP20k, suggesting that the model is more prone to generating unnatural phrases in the specialized science domain. **Faithfulness.** In terms of faithfulness, GPT-3.5 leads the other models by a large margin. We hypothesize that the GPT-3.5 model has obtained a strong ability to extract concepts from the input with minimal paraphrasing. Surprisingly, KPE models outperform KPG-PLM/NN on KPTimes, but not on KP20k. One explanation is that KPE models' predictions may group words that do not belong to the same phrase, which is likely to be deemed unfaithful in the scientific domain. [Table 4: per-group results on all evaluation dimensions (saliency, coverage, naturalness, faithfulness, diversity, and utility) for KP20k and KPTimes; the numeric body of this table is not recoverable from the source.]
Cov.**} & \multirow{2}{*}{**Cov.**} & \multirow{2}{*}{**Nat.**} & \multicolumn{2}{c}{**1**} & \multicolumn{2}{c}{**2**} & \multicolumn{2}{c}{**Diversity**} & \multicolumn{2}{c}{**Utility**} \\ & & & & & & & & & & & \(dup\)\(\downarrow\) & \(emb\_sim\) & \(\downarrow\) & \(R\)\(@\)\(@\)\(@\)\(@\)\(@\)\(@\)\(@\)\(@\)\(@\)\(@\)\(@\)\(@)\(@\)\(@\)\(@\)\(@)\(@\)\(@\)\(@)\(@\)\(@)\(@\)\(@)\(@\)\(@)\(@\)\(@)\(@)\(@\)\(@)\(@\)\(@)\(@\)\(@)\(@)\(@)\(@\)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@@)\(@)\(@)\(@@)\(@)\(@)\(@)\(@)\(@)\(@)\(@)\(@@)\(@)\(@)\(@)\(@)\(@)\(@@)\(@)\(@@)\(@)\(@)\(@)\(@)\(@)\(@@)\(@)\(@@)\(@)\(@)\(@@)\(@)\(@)\(@)\(@)\(@@)\(@)\(@@)\(@)\(@@)\(@)\(@)\(@@)\(@)\(@@)\(@@)\(@)\(@)\(@@)\(@)\(@)\(@@)\(@)\(@@)\(@)\(@@)\(@)\(@@)\(@)\(@@)\(@)\(@@)\(@@)\(@)\(@@)\(@@)\(@@)\(@)\(@@)\(@)\(@@)\(@@)\(@@)\(@)\(@)\(@@)\(@@)\(@@)\(@)\(@@)\(@@)\(@@)\(@)\(@@)\(@@)\(@@)\(@)\(@@)\(@)\(@)\(@@)\(@)\(@@)\(@@)\(@@)\(@@)\(@)\(@)\(@@)\(@)\(@)\(@@)\(@)\(@@)\(@@)\(@@)\(@@)\(@@)\(@)\(@@)\(@@)\(@@)\(@)\(@@)\(@)\(@@)\(@@)\(@@)\(@@)\(@)\(@)\(@)\(@@)\(@@)\(@@)\(@@)\(@@)\(@)\(@@)\(@@)\(@@)\(@@)\(@@)\(@@)\(@)\(@@)\(@@)\(@@)\(@@)\(@@)\(@@)\(@@)\(@@)\(@@)\(@@)\(@)\(@@)\(@)\(@@)\(@@)\(@)\(@@)\(@@)\(@)\(@)\(@@@)\(@)\(@@@)\(@@)\(@@)\(@@)\(@)\(@@)\(@@)\(@)\(@@)\(@@)\(@@)\(@@)\(@)\(@)\(@@)\(@@)\(@@)\(@)\(@)\( In addition, human references do not obtain a high score, which can be caused by humans writing more abstract absent keyphrases. This suggests that the metric model can be further improved for judging the faithfulness of concepts absent from the input. We also note that UniEval may inherently prefer similar pre-trained language models like GPT-3.5 and is not suitable as a single gold standard Deutsch et al. (2022). DiversityFor diversity, we find that \(dup\_token\_ratio\) and \(embed\_sim\) do not always agree. While the former prefers humans and GPT-3.5, the latter prefers GPT-3.5 and keyphrase APIs. 
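Since this excerpt does not spell out the exact formulas of the two diversity measures, the sketch below is our plausible reading of them: a duplicate-token ratio over all predicted phrases, and the mean pairwise cosine similarity of phrase embeddings, with `embed` again a stand-in for a phrase encoder.

```python
import itertools
import numpy as np

def dup_token_ratio(keyphrases):
    # Fraction of tokens in the concatenated predictions that are
    # repeats; lower means less lexical duplication.
    tokens = [t for kp in keyphrases for t in kp.lower().split()]
    if not tokens:
        return 0.0
    return 1.0 - len(set(tokens)) / len(tokens)

def embed_sim(keyphrases, embed):
    # Mean pairwise cosine similarity between phrase embeddings;
    # lower means more semantically diverse predictions.
    vecs = [embed(kp) for kp in keyphrases]
    sims = [np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            for u, v in itertools.combinations(vecs, 2)]
    return float(np.mean(sims)) if sims else 0.0
```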
In addition, KPG-PLM models have much higher diversity compared to KPG-NN and KPE models. After a manual inspection, we find that the major reason is that KPG-PLM models generate far fewer duplicates than KPG-NN models, even though greedy decoding is used for both. In addition, some KPE models use ranking heuristics that rank similar phrases together, causing a high duplication rate. From this perspective, methods that explicitly model topical diversity (such as M3) have a great advantage.

**Utility.** The last two columns of Table 4 show the performance of the keyphrase predictions for downstream document retrieval. For both datasets, we use the training and the testing corpus as \(C\) and report the scores for \(k=5\). It is notable that the **utility evaluation does not agree with reference-based evaluation**, i.e., scoring high against human references does not guarantee good downstream performance. For both retrieval effectiveness (\(RR\)) and efficiency (\(Sparse\)), we observe that GPT-3.5 leads the other types of models by a large margin, indicating an outstanding ability to pick useful keyphrases and rank them properly. Human-written keyphrases have the lowest \(RR\), possibly due to the low number of keyphrases per example. This issue is mitigated with \(Sparse_{5}\), which always focuses on the top five keyphrases.

### Discussion

In the previous two subsections, we have benchmarked a range of keyphrase systems using the proposed framework. What insights can these results provide for NLP practitioners?

**No one-winning-for-all model: think about what you need.** As shown earlier, different models have different assumptions and capabilities. Therefore, we define fine-grained goals and select proper tools for evaluating them. If a "human-like behavior" as defined by a large dataset is desired, then KPG-PLM models are the best choice. However, if the goal is to have a system that can generate natural, faithful, and diverse keyphrases for humans and is robust to inputs from multiple domains, then prompting GPT-3.5 with a few examples should be preferred. If the keyphrases are merely used for IR tasks and inference cost is a concern, then a keyphrase extraction model such as M3 or M18 would be the best choice (Tables 8 and 9).

**Do not overtrust human references.** Our results suggest that judging models based on their similarity to human references is insufficient for evaluating keyphrase systems. In particular, we have shown that human references themselves are suboptimal in terms of diversity and utility.

**Nuances in model selection.** In general, model selection decisions are affected by both the metric and the benchmarking dataset. Our framework produces a different model ranking than previous work such as Ye et al. (2021) and Wu et al. (2022). We thus caution that, even for the same dimension, different metrics (e.g., F1 calculated with different matching strategies) have different connotations, and one should not assume a metric's ranking will be identical to another's. In addition, different model rankings can be obtained for highly specialized domains (such as KP20k) compared to general domains (such as KPTimes).

## 7 Related work

This section introduces previous advances in the evaluation of keyphrase systems.

**Reference-free evaluation.** Early keyphrase systems adopt human evaluation to avoid the need for references Barker and Cornacchia (2000); Matsuo and Ishizuka (2004). Another branch of work evaluates keyphrases by their utility in downstream applications such as retrieval Bracewell et al.
(2005); Boudin and Gallina (2021) or summarization Litvak and Last (2008). Other reference-free metrics include diversity Bahuleyan and El Asri (2020) and descriptive statistics such as the number of keyphrases generated and their average length.

**Reference-based evaluation.** Traditionally, keyphrase extraction and keyphrase generation systems are evaluated with precision, recall,
2304.00936
Equivariantly formal 2-torus actions of complexity one
In this paper we study a specific class of actions of a $2$-torus $\mathbb{Z}_2^k$ on manifolds, namely, the actions of complexity one in general position. We describe the orbit space of equivariantly formal $2$-torus actions of complexity one in general position and restricted complexity one actions in the case of small covers. It is observed that the orbit spaces of such actions are topological manifolds. If the action is equivariantly formal, we prove that the orbit space is a $\mathbb{Z}_2$-homology sphere. We study a particular subclass of these $2$-torus actions: restrictions of small covers to a subgroup of index 2 in general position. The subgroup of this form exists if and only if the small cover is orientable, and in this case we prove that the orbit space of a restricted $2$-torus action is homeomorphic to a sphere.
Vladimir Gorchakov
2023-04-03T12:49:23Z
http://arxiv.org/abs/2304.00936v1
# Equivariantly formal \(2\)-torus actions of complexity one

###### Abstract.

In this paper we study a specific class of actions of a \(2\)-torus \(\mathbb{Z}_{2}^{k}\) on manifolds, namely, the actions of complexity one in general position. We describe the orbit space of equivariantly formal \(2\)-torus actions of complexity one in general position and restricted complexity one actions in the case of small covers. It is observed that the orbit spaces of such actions are topological manifolds. If the action is equivariantly formal, we prove that the orbit space is a \(\mathbb{Z}_{2}\)-homology sphere. We study a particular subclass of these \(2\)-torus actions: restrictions of small covers to a subgroup of index \(2\) in general position. A subgroup of this form exists if and only if the small cover is orientable, and in this case we prove that the orbit space of the restricted \(2\)-torus action is homeomorphic to a sphere.

2020 Mathematics Subject Classification: Primary 57S12, 57S17, 57S25, 57N65, 57R91; Secondary 57R18, 55M35, 55R20. The article was prepared within the framework of the HSE University Basic Research Program.

## 1. Introduction

Let a compact torus \(T^{k}=(S^{1})^{k}\) act effectively on a closed manifold \(X^{2n}\) with a nonempty finite fixed point set. The number \(n-k\) is called _the complexity of the action_. The study of orbit spaces of complexity zero is a well-known subject of toric topology (see [7]). In [2], A. Ayzenberg showed that the orbit space of a complexity one action in general position is a topological manifold. In [4], A. Ayzenberg and M. Masuda described the orbit space of equivariantly formal torus actions of complexity one in general position. In [3], A. Ayzenberg and V. Cherepanov described torus actions of complexity one in non-general position. Similarly, we can study the orbit space of a \(\mathbb{Z}_{2}^{k}\)-action on a manifold \(X^{n}\). The group \(\mathbb{Z}_{2}^{k}\) is a real analog of \(T^{k}\); it is called a \(2\)_-torus_. As in the torus case, the number \(n-k\) is called _the complexity of the action_. In [20], L. Yu studied the orbit space of an equivariantly formal \(2\)-torus action of complexity zero. In [8], V. Buchstaber and S. Terzic proved that \(Gr_{4,2}(\mathbb{R})/\mathbb{Z}_{2}^{3}\cong S^{4}\) and \(Fl_{3}(\mathbb{R})/\mathbb{Z}_{2}^{2}\cong S^{3}\) for the real Grassmann manifold \(Gr_{4,2}(\mathbb{R})\) of \(2\)-planes in \(\mathbb{R}^{4}\) and the real manifold \(Fl_{3}(\mathbb{R})\) of full flags in \(\mathbb{R}^{3}\). In [15], D. Gugnin proved that \(T^{n}/\mathbb{Z}_{2}^{n-1}\cong S^{n}\) for a certain action with isolated fixed points. These are examples of complexity one actions. The aim of this paper is to describe the orbit space of an equivariantly formal \(2\)-torus action of complexity one in general position. We also describe restricted complexity one actions in the case of small covers. Let us give preliminary definitions and formulate the main results. Let a \(2\)-torus \(\mathbb{Z}_{2}^{n-1}\) act effectively on a connected closed smooth manifold \(X=X^{n}\) with a nonempty set of fixed points. For a fixed point \(x\in X^{\mathbb{Z}_{2}^{n-1}}\) of the action we have the tangent representation of \(\mathbb{Z}_{2}^{n-1}\) at \(x\). Consider \(\alpha_{x,1},\ldots,\alpha_{x,n}\in\operatorname{Hom}(\mathbb{Z}_{2}^{n-1},\mathbb{Z}_{2})\cong\mathbb{Z}_{2}^{n-1}\), the weights of the tangent representation at \(x\).
The action is said to be in _general position_ if for any fixed point \(x\), any \(n-1\) of the weights \(\alpha_{x,1},\ldots,\alpha_{x,n}\) are linearly independent over \(\mathbb{Z}_{2}\). Now we provide a coordinate description of an action in general position in the case of \(\mathbb{R}^{n}\). Let \(G\) be the subgroup of \(\mathbb{Z}_{2}^{n}\) consisting of elements of the following form: \[G=\{(g_{1},\ldots,g_{n})\in\mathbb{Z}_{2}^{n}:\Pi_{i=1}^{n}g_{i}=1\}, \tag{1.1}\] where \(g_{i}\in\{-1,1\}\). Since \(\mathbb{Z}_{2}^{n}\) acts coordinate-wise on \(\mathbb{R}^{n}\), we have an induced action of \(G\) on \(\mathbb{R}^{n}\), which we call _the standard complexity one action_. Remark 1.1.: If an action of \(\mathbb{Z}_{2}^{n-1}\) on \(\mathbb{R}^{n}\) is in general position, then it is weakly equivalent to the standard complexity one action; see Proposition 3.3. C. Lange and M. Mikhailova in [12], [17] studied orbit spaces of general representations of finite groups. It follows from their results that \(\mathbb{R}^{n}/G\cong\mathbb{R}^{n}\) for a complexity one representation in general position. Using this result and an additional condition, which holds for equivariantly formal actions, we show that for a \(\mathbb{Z}_{2}^{n-1}\)-action on \(X\) in general position the orbit space \(Q=X/\mathbb{Z}_{2}^{n-1}\) is a topological manifold. If a \(\mathbb{Z}_{2}^{n-1}\)-action is in non-general position, then \(Q\) is a topological manifold with boundary, similar to the torus case studied in [10] by V. Cherepanov. The action of \(\mathbb{Z}_{2}^{n-1}\) on \(X\) is called _equivariantly formal_ if \[\dim_{\mathbb{Z}_{2}}H^{*}(X^{\mathbb{Z}_{2}^{n-1}};\mathbb{Z}_{2})=\dim_{\mathbb{Z}_{2}}H^{*}(X;\mathbb{Z}_{2}), \tag{1.2}\] where \(X^{\mathbb{Z}_{2}^{n-1}}\) is the fixed point set of the \(\mathbb{Z}_{2}^{n-1}\)-action. Remark 1.2.: If a \(\mathbb{Z}_{2}^{n-1}\)-action is in general position, then the fixed point set is finite and the action is equivariantly formal if and only if \(|X^{\mathbb{Z}_{2}^{n-1}}|=\dim_{\mathbb{Z}_{2}}H^{*}(X;\mathbb{Z}_{2})\). Theorem 1.: _Suppose a \(\mathbb{Z}_{2}^{n-1}\)-action on \(X=X^{n}\) is equivariantly formal and in general position. Then the orbit space \(Q=X/\mathbb{Z}_{2}^{n-1}\) is a topological manifold and \(H^{*}(Q;\mathbb{Z}_{2})\cong H^{*}(S^{n};\mathbb{Z}_{2})\)._ Let \(X\) be a small cover and let \(G\cong\mathbb{Z}_{2}^{n-1}\) be a subgroup of \(\mathbb{Z}_{2}^{n}\) of index \(2\). Then we get a restricted \(G\)-action of complexity one on \(X\). The subgroup \(G\) is called a _\(2\)-subtorus in general position_ if this \(G\)-action is in general position. In the case of small covers, we notice that the existence of a \(2\)-subtorus in general position is equivalent to the orientability of the small cover, which was proved by H. Nakayama and Y. Nishimura in [18]. Theorem 2.: _Let \(X\) be a small cover. There exists a \(2\)-subtorus in general position if and only if \(X\) is orientable. If such a \(2\)-subtorus exists, then it is unique._ If the \(2\)-subtorus in general position exists, then the orbit space is homeomorphic to a sphere. **Theorem 3**.: _Let \(X=X^{n}\) be an orientable small cover and let \(G\) be the \(2\)-subtorus in general position. Then the orbit space \(X/G\) is homeomorphic to the \(n\)-dimensional sphere \(S^{n}\)._ We have the following coordinate description of the \(2\)-subtorus in general position.
**Remark 1.3**.: Let \(X=X^{n}\) be an orientable small cover over a simple polytope \(P\), let \(p=F_{i_{1}}\cap\cdots\cap F_{i_{n}}\) be a vertex of \(P\) and let \(\lambda_{i_{1}},\ldots,\lambda_{i_{n}}\) be the corresponding characteristic vectors. Taking the characteristic vectors as standard generators of \(\mathbb{Z}_{2}^{n}\), consider the following subgroup: \[G=\{(g_{1},\ldots,g_{n})\in\mathbb{Z}_{2}^{n}:\Pi_{i=1}^{n}g_{i}=1\}.\] Then \(G\) is the \(2\)-subtorus in general position. Now we provide some examples of complexity one actions. In all examples below the actions are equivariantly formal and in general position. In Examples 1.5 and 1.6 we get restricted actions on small covers. **Example 1.4**.: Let \(\mathbb{Z}_{2}\) act on \(S^{2}\) by rotation by \(180^{\circ}\) around an axis. Then \(S^{2}/\mathbb{Z}_{2}\cong S^{2}\). [Figure: the action of \(\mathbb{Z}_{2}\) on \(S^{2}\).] **Example 1.5**.: More generally, let \(\mathbb{Z}_{2}\) act on a closed orientable surface \(M_{g}\) of genus \(g\) by rotation by \(180^{\circ}\) around an axis. Then \(M_{g}/\mathbb{Z}_{2}\cong S^{2}\). If \(g=1\), then this is a particular case of the next example. Notice that if \(g=0\), then \(M_{0}=S^{2}\) is not a small cover. [Figure: the orbit space of the \(\mathbb{Z}_{2}\)-action on \(M_{3}\).] **Example 1.6**.: Consider a \(\mathbb{Z}_{2}\)-action on \(S^{1}\) by the map \((x,y)\to(x,-y)\). Taking the \(n\)-fold product, we have a \(\mathbb{Z}_{2}^{n}\)-action on the torus \(T^{n}\). Let \[G=\{(g_{1},\dots,g_{n})\in\mathbb{Z}_{2}^{n}:\Pi_{i=1}^{n}g_{i}=1\}\] be the index \(2\) subgroup of orientation-preserving elements. In [15] it was proved that \(T^{n}/G\cong S^{n}\). The two examples below are not small covers. **Example 1.7**.: Let \(\mathbb{Z}_{2}^{4}\) act on \(\mathbb{R}^{4}\) by the standard action. From this we get an effective action of \(\mathbb{Z}_{2}^{3}\) on the real Grassmann manifold \(Gr_{4,2}(\mathbb{R})\) of \(2\)-planes in \(\mathbb{R}^{4}\). In [8] it was proved that \(Gr_{4,2}(\mathbb{R})/\mathbb{Z}_{2}^{3}\cong S^{4}\). **Example 1.8**.: Let \(\mathbb{Z}_{2}^{3}\) act on \(\mathbb{R}^{3}\) by the standard action. From this we get an effective action of \(\mathbb{Z}_{2}^{2}\) on the real full flag manifold \(Fl_{3}(\mathbb{R})\). In [8] it was proved that \(Fl_{3}(\mathbb{R})/\mathbb{Z}_{2}^{2}\cong S^{3}\). Now we describe a possible connection of Theorem 3 with the theory of _\(n\)-valued groups_. See [6] for the definition of an \(n\)-valued group and other details. The following construction can be used to produce \(n\)-valued groups. Let \(G\) be a group, let \(A\) be a finite group with \(|A|=n\), and let \(\phi:A\to Aut(G)\) be a homomorphism to the group of automorphisms of \(G\). Then we have an \(n\)-valued group structure on the orbit space \(X=G/\phi(A)\); see [6, Thm. 1]. In Example 1.6 we get a \(2^{n-1}\)-valued topological group structure on the \(n\)-sphere \(S^{n}\) for \(n\geq 2\). The following problem was posed by V. M. Buchstaber. **Problem 1**.: _Can Theorem 3 be applied to other small covers to provide new examples of \(2^{n-1}\)-valued group structures on \(S^{n}\)?_ For this we need a small cover \(X\) with the property that it is a group and \(\mathbb{Z}_{2}^{n}\) acts by group automorphisms. Non-Example 1.9.: We have that \(\mathbb{R}P^{3}\) is diffeomorphic to \(SO(3,\mathbb{R})\), hence there is a Lie group structure on the small cover \(\mathbb{R}P^{3}\). However, \(\mathbb{Z}_{2}^{3}\) does not act by Lie group automorphisms.
Indeed, since all automorphisms of \(SO(3,\mathbb{R})\) are inner, we have \(Aut(SO(3,\mathbb{R}))\cong SO(3,\mathbb{R})\). There is no finite subgroup of \(SO(3,\mathbb{R})\) isomorphic to \(\mathbb{Z}_{2}^{3}\). Therefore, \(\mathbb{Z}_{2}^{3}\) does not act by group automorphisms.

## 2. Preliminaries

In this section we recall some general facts about group actions and \(2\)-torus actions on manifolds. The main reference for group actions is [5]; the main reference for equivariantly formal \(2\)-torus actions is [20]. Let a group \(G\) act effectively on a closed smooth manifold \(X\). In this paper we consider only smooth actions. For a point \(x\in X\), let \(Stab(x)\) denote the stabilizer subgroup of \(G\) and \(Gx\) the orbit of \(x\). We define the partition by orbit types \[X=\bigsqcup_{H\subseteq G}X^{(H)}. \tag{2.1}\] Here \(X^{(H)}=\{x\in X:Stab(x)=H\}\). We denote the fixed point set of a subgroup \(H\) by \(X^{H}=\{x\in X:H\subset Stab(x)\}\). Let \(x\in X^{(H)}\) be a point with stabilizer subgroup \(H\). We can define the _tangent representation_ of \(H\) at \(x\): \[H\to GL(T_{x}X/T_{x}Gx).\] Let \(V_{x}\) denote \(T_{x}X/T_{x}Gx\). There is the following theorem about \(G\)-equivariant tubular neighborhoods. **Theorem 2.1** (The Slice Theorem).: _There exists a \(G\)-equivariant diffeomorphism from \(G\times_{Stab(x)}V_{x}\) onto a \(G\)-invariant neighborhood of the orbit \(Gx\) in \(X\), which sends the zero section \(G/Stab(x)\) onto the orbit \(Gx\)._ We now recall the notion of an equivariantly formal action of a \(2\)-torus \(\mathbb{Z}_{2}^{k}\); see [20] for the details. There is a classical result of E. Floyd. **Theorem 2.2** ([14]).: _For any paracompact \(\mathbb{Z}_{2}^{k}\)-space \(X\) with finite cohomology dimension, the fixed point set \(X^{\mathbb{Z}_{2}^{k}}\) always satisfies_ \[\dim_{\mathbb{Z}_{2}}H^{*}(X^{\mathbb{Z}_{2}^{k}};\mathbb{Z}_{2})\leq\dim_{\mathbb{Z}_{2}}H^{*}(X;\mathbb{Z}_{2}). \tag{2.2}\] The next theorem says when the equality in (2.2) holds. **Theorem 2.3** ([16]).: _The equality in (2.2) holds if and only if \(E_{2}=E_{\infty}\) for the Serre spectral sequence of the fibration \(X\to E\mathbb{Z}_{2}^{k}\times_{\mathbb{Z}_{2}^{k}}X\to B\mathbb{Z}_{2}^{k}\)._ **Definition 2.4**.: Let a \(2\)-torus \(\mathbb{Z}_{2}^{k}\) act on a closed smooth manifold \(X\). The action is called _equivariantly formal_ over \(\mathbb{Z}_{2}\) if equality holds in (2.2). Sometimes the degeneration of the Serre spectral sequence at \(E_{2}\) is taken as the definition of equivariant formality. This definition is equivalent to Definition 2.4 by Theorem 2.3. The notion of equivariant formality is similar to the corresponding notion in the theory of torus actions. **Remark 2.5**.: If the fixed point set \(X^{\mathbb{Z}_{2}^{k}}\) is finite, then the \(2\)-torus action is equivariantly formal if and only if \[|X^{\mathbb{Z}_{2}^{k}}|=\dim_{\mathbb{Z}_{2}}H^{*}(X;\mathbb{Z}_{2}). \tag{2.3}\] This remark is useful for characterizing equivariant formality of particular \(2\)-torus actions. **Definition 2.6**.: Consider a non-free effective action of \(\mathbb{Z}_{2}^{n}\) on a closed connected smooth manifold \(X^{n}\). A manifold with such an action is called a \(2\)_-torus manifold_. L. Yu proved the following criterion of equivariant formality for \(2\)-torus manifolds in terms of their orbit spaces. **Theorem 2.7** ([20]).: _Let \(X\) be a \(2\)-torus manifold with orbit space \(Q\)._ 1.
\(X\) _is equivariantly formal if and only if_ \(X\) _is locally standard and_ \(Q\) _is mod \(2\) face-acyclic._ 2. \(X\) _is equivariantly formal and_ \(H^{*}(X;\mathbb{Z}_{2})\) _is generated by its degree-one part if and only if_ \(X\) _is locally standard and_ \(Q\) _is a mod \(2\) homology polytope._

## 3. 2-torus actions of complexity one in general position

In this section we show that the orbit space of a certain class of complexity one \(2\)-torus actions is a topological manifold. This section extends [2] to \(2\)-torus actions. Let a \(2\)-torus \(\mathbb{Z}_{2}^{n-1}\) act effectively on a connected closed smooth manifold \(X=X^{n}\) with a nonempty set of fixed points. For a fixed point \(x\in X^{\mathbb{Z}_{2}^{n-1}}\) of the action we have the tangent representation of \(\mathbb{Z}_{2}^{n-1}\) at \(x\). Consider \(\alpha_{x,1},\ldots,\alpha_{x,n}\in\operatorname{Hom}(\mathbb{Z}_{2}^{n-1},\mathbb{Z}_{2})\cong\mathbb{Z}_{2}^{n-1}\), the weights of the tangent representation at \(x\), i.e. \[T_{x}X\cong V(\alpha_{x,1})\oplus\ldots\oplus V(\alpha_{x,n}),\] where \(V(\alpha_{x,i})\) is the \(1\)-dimensional real representation given by \(t\cdot y=\alpha_{x,i}(t)y\) for \(t\in\mathbb{Z}_{2}^{n-1}\) and \(y\in\mathbb{R}\). **Definition 3.1**.: The action is said to be in _general position_ if for any fixed point \(x\), any \(n-1\) of the weights \(\alpha_{x,1},\ldots,\alpha_{x,n}\) are linearly independent over \(\mathbb{Z}_{2}\). **Remark 3.2**.: From the Slice Theorem it follows that all weights of an action are non-zero if and only if the fixed point set is discrete. Hence, if \(X\) is compact, then the fixed point set is finite. Let an action be in general position. Since any \(n-1\) of the weights \(\alpha_{x,1},\ldots,\alpha_{x,n}\in\mathbb{Z}_{2}^{n-1}\) are linearly independent, we have \(\alpha_{x,1}+\ldots+\alpha_{x,n}=0\). Hence, for any \(t\in\mathbb{Z}_{2}^{n-1}\) we have \[\Pi_{i=1}^{n}\alpha_{x,i}(t)=1. \tag{3.1}\] Moreover, the condition of general position implies that the tangent representation at any fixed point is faithful. This motivates the following construction. Let \(G\) be the subgroup of \(\mathbb{Z}_{2}^{n}\) consisting of elements of the following form: \[G=\{(g_{1},\ldots,g_{n})\in\mathbb{Z}_{2}^{n}:\Pi_{i=1}^{n}g_{i}=1\}, \tag{3.2}\] where \(g_{i}\in\{-1,1\}\). Since \(\mathbb{Z}_{2}^{n}\) acts coordinate-wise on \(\mathbb{R}^{n}\), we have an induced action of \(G\) on \(\mathbb{R}^{n}\), which we call _the standard complexity one action_. We show below that the orbit space of the standard complexity one action is homeomorphic to \(\mathbb{R}^{n}\). Moreover, \(G\) is the unique subgroup of \(\mathbb{Z}_{2}^{n}\) of index \(2\) such that \(\mathbb{R}^{n}/G\cong\mathbb{R}^{n}\). Let \(\chi:\mathbb{Z}_{2}^{n-1}\to GL_{n}(\mathbb{R})\) be an action in general position with weights \(\alpha_{1},\ldots,\alpha_{n}\). Define \(\phi:\mathbb{Z}_{2}^{n-1}\to G\) by the formula \(\phi(t)=(\alpha_{1}(t),\ldots,\alpha_{n}(t))\). Since an action in general position is a faithful representation and (3.1) holds, \(\phi\) is an isomorphism and the following diagram commutes (i.e. \(\chi=\psi\circ\phi\)): (3.3) where \(\chi\) is the action in general position and \(\psi\) is the standard complexity one coordinate-wise action of \(G\) on \(\mathbb{R}^{n}\), i.e. \(g\cdot x=(g_{1}x_{1},\ldots,g_{n}x_{n})\). Hence, we get the following: **Proposition 3.3**.: _Let a \(\mathbb{Z}_{2}^{n-1}\)-action on \(\mathbb{R}^{n}\) be in general position. Consider \(G\) as in (3.2).
Then this action is weakly equivalent to the standard complexity one action of \(G\) on \(\mathbb{R}^{n}\), i.e. diagram (3.3) commutes._ In the following proposition we prove that all stabilizers of a \(2\)-torus action in general position are generated by _rotations_, i.e. by orthogonal transformations whose fixed-point subspace has codimension two. **Proposition 3.4**.: _Consider the standard complexity one action of \(G\) on \(\mathbb{R}^{n}\). Let \(H=Stab(x)\) be the stabilizer subgroup of any \(x\in\mathbb{R}^{n}\). Then the orbit space \(\mathbb{R}^{n}/H\) is homeomorphic to \(\mathbb{R}^{n}\). Moreover, \(G\) is the only subgroup of \(\mathbb{Z}_{2}^{n}\) of index \(2\) such that \(\mathbb{R}^{n}/G\cong\mathbb{R}^{n}\)._ Proof.: Let us describe the stabilizer \(H\) of a point \(x\in\mathbb{R}^{n}\). For \(x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\), let \[I=\{i\in\{1,\ldots,n\}:x_{i}=0\}=\{i_{1},\ldots,i_{k}\}\] be the set of indices of the zero coordinates of \(x\). If \(|I|<2\), then the stabilizer subgroup is trivial, hence we can assume that \(|I|\geq 2\). Consider the following subgroup of \(\mathbb{Z}_{2}^{n}\): \[\mathbb{Z}_{2}^{I}=(\mathbb{Z}_{2},1)^{I}=\{(g_{1},\ldots,g_{n})\in\mathbb{Z}_{2}^{n}:g_{i}\in\mathbb{Z}_{2}\text{ if }i\in I,\text{ otherwise }g_{i}=1\}. \tag{3.4}\] We have \(H=\mathbb{Z}_{2}^{I}\cap G\). Taking \(g_{i_{1}},\ldots,g_{i_{k}}\) as generators of \(\mathbb{Z}_{2}^{I}\), we have \(H=\{(g_{1},\ldots,g_{k})\in\mathbb{Z}_{2}^{I}:\prod_{1}^{k}g_{i}=1\}\). This means that \(H\) is generated by rotations. It follows from [17, Thm. A] that the orbit space of this action is homeomorphic to \(\mathbb{R}^{n}\). From this theorem it also follows that any subgroup \(H\) of \(\mathbb{Z}_{2}^{n}\) such that \(\mathbb{R}^{n}/H\) is homeomorphic to \(\mathbb{R}^{n}\) must be generated by rotations, and \(G=\{(g_{1},\ldots,g_{n}):\Pi_{i=1}^{n}g_{i}=1\}\) is the only subgroup of index \(2\) generated by rotations. **Corollary 3.5**.: _Suppose a \(\mathbb{Z}_{2}^{n-1}\)-action on \(\mathbb{R}^{n}\) is in general position. Then \(\mathbb{R}^{n}/Stab(x)\cong\mathbb{R}^{n}\) for the stabilizer subgroup \(Stab(x)\) of any \(x\in\mathbb{R}^{n}\)._ Proof.: Since the action is in general position, the diagram (3.3) commutes. Therefore, the action is weakly equivalent to the action in Proposition 3.4. **Remark 3.6**.: From the previous corollary it follows that the stabilizer subgroup of an action in general position cannot be arbitrary. For example, it cannot be \(H=\{(1,1,1,1),(-1,-1,-1,-1)\}\); indeed, \(\mathbb{R}^{4}/H\) is homeomorphic to the open cone over \(\mathbb{R}P^{3}\). This example shows the importance of the condition that the subgroup is generated by rotations. A description of all linear representations of finite groups whose orbit spaces are homeomorphic to \(\mathbb{R}^{n}\) can be found in [17]. For the global statement of the previous corollary we need the following condition on an action: (3.5) every connected component of \(X^{H}\) has a global fixed point, where \(X^{H}=\{x\in X:H\subset Stab(x)\}\). The following lemma shows that this condition holds for equivariantly formal actions of a \(2\)-torus. Lemma 3.7 ([20, Lem. 3.2]).: _Suppose a \(\mathbb{Z}_{2}^{k}\)-action on a compact manifold \(X\) is equivariantly formal.
Then for every subgroup \(H\) of \(\mathbb{Z}_{2}^{k}\), the induced action of \(\mathbb{Z}_{2}^{k}\) (or \(\mathbb{Z}_{2}^{k}/H\)) on every connected component \(N\) of \(X^{H}\) is equivariantly formal; hence \(N\) has a \(\mathbb{Z}_{2}^{k}\)-fixed point._ Now we can prove the following theorem about the orbit space of complexity one actions in general position. This is the first part of Theorem 1 from the introduction. Theorem 3.8.: _Let \(\mathbb{Z}_{2}^{n-1}\) act on a connected closed smooth manifold \(X=X^{n}\). Suppose that the action is in general position and that condition (3.5) holds. Then the orbit space \(Q=X/\mathbb{Z}_{2}^{n-1}\) is a topological manifold._ Proof.: Let us denote \(G=\mathbb{Z}_{2}^{n-1}\). Let \(H=Stab(x)\) be the stabilizer subgroup of any \(x\) in \(X\), or equivalently \(x\in X^{(H)}\). Hence, \(x\in N\), where \(N\) is a connected component of \(X^{H}\). It follows from condition (3.5) that \(N\) has a global fixed point \(x^{\prime}\). By the Slice Theorem there exists a \(G\)-equivariant neighborhood \(U(x^{\prime})\) of \(x^{\prime}\) such that \(U(x^{\prime})\) is \(G\)-diffeomorphic to \(T_{x^{\prime}}X\). Since the tangent representation of \(H\) at \(x\) depends only on the connected component of \(X^{H}\), we can assume that \(x\) is near \(x^{\prime}\), i.e. \(x\in U(x^{\prime})\). Therefore, \(H\) is a stabilizer subgroup of a \(\mathbb{Z}_{2}^{n-1}\)-action on \(\mathbb{R}^{n}\) in general position. By the Slice Theorem every orbit has a \(G\)-equivariant neighborhood \(U\) such that \(U\) is \(G\)-equivariantly homeomorphic to \(G\times_{H}V_{x}\), where \(V_{x}=T_{x}X/T_{x}Gx\). Since the orbit \(Gx\) is a discrete set, we have \(V_{x}=T_{x}X\). Therefore, \(U/G\cong T_{x}X/H\cong\mathbb{R}^{n}\) by Corollary 3.5.

## 4. Equivariantly formal actions of complexity one

In this section we show that the orbit space of an equivariantly formal action in general position is a \(\mathbb{Z}_{2}\)-homology sphere. First of all, we introduce the notion of a face. For an action of \(\mathbb{Z}_{2}^{n-1}\) on \(X\), consider the equivariant filtration \[\varnothing=X_{-1}\subset X_{0}\subset X_{1}\subset\cdots\subset X_{n-1}=X,\] where \(X_{i}\) is the union of orbits of size at most \(2^{i}\) and \(X_{-1}=\varnothing\). Notice that \[X_{i}=\{x\in X:rk(Stab(x))\geq n-1-i\}=\bigsqcup_{rkH\geq n-1-i}X^{(H)}\] and each \(X^{(H)}\) is a disjoint union of submanifolds of \(X\). There is an orbit type filtration of the orbit space \(Q=X/\mathbb{Z}_{2}^{n-1}\): \[\varnothing=Q_{-1}\subset Q_{0}\subset Q_{1}\subset\cdots\subset Q_{n-1}=Q,\] where \(Q_{i}=X_{i}/\mathbb{Z}_{2}^{n-1}\) and \(Q_{-1}=\varnothing\). **Definition 4.1**.: The closure of any connected component of \(Q_{i}\backslash Q_{i-1}\) is called _a face of rank \(i\)_. For a face \(F\) of rank \(i\) we define \(F_{-1}=F\cap Q_{i-1}\). Let \(F\) be a face of \(Q\); consider the quotient map \(p:X\to Q\) and denote the preimage of \(F\) by \(X_{F}=p^{-1}(F)\). Let \(G_{F}\subset\mathbb{Z}_{2}^{n-1}\) be the non-effective kernel of the \(\mathbb{Z}_{2}^{n-1}\)-action on \(X_{F}\). Then \(X_{F}\) is a connected component of \(X^{G_{F}}\); therefore \(X_{F}\) is a submanifold of \(X\). **Definition 4.2**.: Let \(F\) be a face of \(Q\). The preimage \(X_{F}=p^{-1}(F)\) of \(F\) is called the _face submanifold_ corresponding to \(F\). For a face submanifold \(X_{F}\) of rank \(i\) we define \(X_{F_{-1}}=X_{F}\cap X_{i-1}\). In the next proposition we show that each \(X_{F}\) is an equivariantly formal \(2\)-torus manifold.
**Proposition 4.3**.: _Suppose the action of \(\mathbb{Z}_{2}^{n-1}\) on \(X\) is equivariantly formal and in general position. Let \(X_{F}\) be the face submanifold corresponding to a face \(F\) of rank \(i<n-1\). Then \(\dim X_{F}=i\); hence \(X_{F}\) is an equivariantly formal \(2\)-torus manifold with the action of \(\mathbb{Z}_{2}^{n-1}/G_{F}\)._ Proof.: Let \(G_{F}\) be the non-effective kernel of the \(\mathbb{Z}_{2}^{n-1}\)-action on \(X_{F}\); we have \[|G_{F}|=2^{n-1-i}, \tag{4.1}\] since \(\operatorname{rk}F=i\). Note that \(p^{-1}(F^{\circ})=\{x\in X_{F}:Stab(x)=G_{F}\}\) is an open subset of \(X_{F}\). Let \(x^{\prime}\in X_{F}\) be a global fixed point of the \(\mathbb{Z}_{2}^{n-1}\)-action on \(X\). Let \(U(x^{\prime})\) be a \(\mathbb{Z}_{2}^{n-1}\)-equivariant neighborhood of \(x^{\prime}\) such that \(U(x^{\prime})\) is \(\mathbb{Z}_{2}^{n-1}\)-equivariantly diffeomorphic to \(T_{x^{\prime}}X\). We have \(p^{-1}(F^{\circ})\cap U(x^{\prime})\cong V\), where \(V=\{x\in T_{x^{\prime}}X:Stab(x)=G_{F}\}\) is a linear subspace such that for any \(x=(x_{1},\dots,x_{n})\in V\) we have \(x_{i}=0\) for \(i\in I\subset\{1,\dots,n\}\), i.e. some coordinates of \(x\) are zero. Let \(|I|=k\) be the number of zero coordinates. \(G_{F}\) is generated by rotations, since it is a stabilizer subgroup of the standard complexity one action. Therefore, \(G_{F}\) consists of all \((g_{1},\dots,g_{n})\in\mathbb{Z}_{2}^{n}\) such that \(g_{j}=-1\) exactly for \(j\in J\), where \(J\subset I\) and \(|J|\) is even. Hence, we have \[|G_{F}|=\binom{k}{0}+\binom{k}{2}+\dots+\binom{k}{2\lfloor\frac{k}{2}\rfloor}=2^{k-1}. \tag{4.2}\] On the other hand, we have \(|G_{F}|=2^{n-1-i}\), therefore \(k=n-i\). Hence, \(\dim X_{F}=n-k=i\) and \(\operatorname{rk}\mathbb{Z}_{2}^{n-1}/G_{F}=i\). This action is equivariantly formal by Lemma 3.7. In this paper all cohomology groups are taken with coefficients in \(\mathbb{Z}_{2}\). **Corollary 4.4**.: _For any face \(F\) of rank \(i<n-1\), we have \(H^{*}(F,F_{-1})=H^{*}(D^{i},\partial D^{i})\)._ Proof.: From Theorem 2.7 it follows that \(F\) is mod \(2\) face-acyclic. Hence, \[H^{*}(F,F_{-1})=H^{*}(D^{i},\partial D^{i})\] by Lefschetz duality. Now we introduce the Atiyah-Bredon-Franz-Puppe sequence for equivariant cohomology: \[0\to H^{*}_{\mathbb{Z}_{2}^{k}}(X)\stackrel{i^{*}}{\to}H^{*}_{\mathbb{Z}_{2}^{k}}(X_{0})\stackrel{\delta_{0}}{\to}H^{*+1}_{\mathbb{Z}_{2}^{k}}(X_{1},X_{0})\stackrel{\delta_{1}}{\to}\cdots\stackrel{\delta_{k-2}}{\to}H^{*+k-1}_{\mathbb{Z}_{2}^{k}}(X_{k-1},X_{k-2})\stackrel{\delta_{k-1}}{\to}H^{*+k}_{\mathbb{Z}_{2}^{k}}(X,X_{k-1})\to 0, \tag{4.3}\] where \(\delta_{i}\) is the connecting homomorphism in the long exact sequence of equivariant cohomology of the triple \((X_{i+1},X_{i},X_{i-1})\) and \(X_{i}\) is the union of orbits of size at most \(2^{i}\). If a \(\mathbb{Z}_{2}^{k}\)-action on \(X\) is equivariantly formal, then this sequence is exact. For the proof see [1], [9]; see also [13] for the torus case. **Theorem 4.5** ([1, Thm. 10.2]).: _Suppose that an action of \(\mathbb{Z}_{2}^{k}\) on \(X\) is equivariantly formal. Then the sequence (4.3) is exact._ From the exactness of this sequence and the previous corollary we immediately get the following result: **Proposition 4.6**.: _Suppose the action of \(\mathbb{Z}_{2}^{n-1}\) on \(X\) is equivariantly formal and in general position. Then for the orbit space \(Q\) we have \(H^{i}(Q,Q_{n-2})=0\) for \(i<n-1\)._ Proof.: For brevity, denote \(G=\mathbb{Z}_{2}^{n-1}\).
Consider the \(i\)-th term in (4.3) with \(i\leq n-2\): \[H^{*+i}_{G}(X_{i},X_{i-1})\cong\bigoplus_{F\colon\dim F=i}H^{*+i}_{G}(X_{F},X_{F}\cap X_{i-1})\cong\bigoplus_{F\colon\dim F=i}H^{i}(F,F_{-1})\otimes H^{*}(BG_{F}).\] The first isomorphism follows from the equivariant version of the Mayer-Vietoris sequence, and the second follows from two facts: the action of the group \(G/G_{F}\) on \(X_{F}\backslash(X_{F}\cap X_{i-1})\) is free, and Corollary 4.4 holds. Therefore, \(H^{*+i}_{G}(X_{i},X_{i-1})=0\) for \(*<0\) and \(i\leq n-2\). Consider \(*<0\). Then from the exactness of the sequence (4.3) we get that \(H^{i}_{G}(X,X_{n-2})=0\) for \(i<n-1\). On the other hand, we have \(H^{i}_{G}(X,X_{n-2})\cong H^{i}(Q,Q_{n-2})\), since the action of \(G\) on \(X\backslash X_{n-2}\) is free. Hence, we have \(H^{i}(Q,Q_{n-2})=0\) for \(i<n-1\). Now we can prove Theorem 1 from the introduction. **Theorem 4.7**.: _Suppose the action of \(\mathbb{Z}_{2}^{n-1}\) on \(X=X^{n}\) is equivariantly formal and in general position. Then the orbit space \(Q=X/\mathbb{Z}_{2}^{n-1}\) is a \(\mathbb{Z}_{2}\)-homology \(n\)-sphere, i.e. \(H^{*}(Q;\mathbb{Z}_{2})\cong H^{*}(S^{n};\mathbb{Z}_{2})\)._ Proof.: Consider the cohomology spectral sequence associated with the filtration \[\varnothing=Q_{-1}\subset Q_{0}\subset Q_{1}\subset\cdots\subset Q_{n-1}=Q,\] where \(Q_{i}=X_{i}/\mathbb{Z}_{2}^{n-1}\) and \(Q_{-1}=\varnothing\): \[E^{p,q}_{1}=H^{p+q}(Q_{p},Q_{p-1})\Rightarrow H^{p+q}(Q). \tag{4.4}\] We get that \[E^{p,q}_{1}=H^{p+q}(Q_{p},Q_{p-1})=\bigoplus_{F:rkF=p}H^{p+q}(F,F_{-1}). \tag{4.5}\] From Corollary 4.4 and Proposition 4.6 it follows that the only non-zero terms of the first page of the spectral sequence are the \(0\)-th row \((E_{1}^{p,0},d^{1})\) and the \((n-1)\)-th column \((E_{1}^{n-1,q},d^{1})\). The first page of the spectral sequence (4.4) is as shown in Figure 1. We claim that the differential complex in the \(0\)-th row \((E_{1}^{p,0},d^{1})\) is isomorphic to the degree \(0\) part of the non-augmented version of the Atiyah-Bredon-Franz-Puppe sequence (4.3): \[0\to H^{0}_{\mathbb{Z}_{2}^{n-1}}(X_{0})\stackrel{\delta_{0}}{\to}H^{1}_{\mathbb{Z}_{2}^{n-1}}(X_{1},X_{0})\stackrel{\delta_{1}}{\to}\cdots\stackrel{\delta_{n-3}}{\to}H^{n-2}_{\mathbb{Z}_{2}^{n-1}}(X_{n-2},X_{n-3})\stackrel{\delta_{n-2}}{\to}H^{n-1}_{\mathbb{Z}_{2}^{n-1}}(X,X_{n-2})\to 0.\] Indeed, we have the natural projection map \(\pi:X_{G}\to X/G\) such that \(\pi((X_{i})_{G})\subset Q_{i}\) for any \(i\). Therefore, \(\pi\) induces a map between the long exact sequences of \((X_{i+1},X_{i},X_{i-1})_{G}\) and \((Q_{i+1},Q_{i},Q_{i-1})\); hence \(\pi^{*}\) commutes with \(\delta_{i}\) for any \(i\). We claim that the induced maps in cohomology \[\pi^{*}:H^{i}(Q_{i},Q_{i-1})\to H^{i}_{G}(X_{i},X_{i-1})\] are isomorphisms. We have \[H^{i}_{G}(X_{i},X_{i-1})=\bigoplus_{F\colon rkF=i}H^{i}_{G}(X_{F},X_{F_{-1}}),\] \[H^{i}(Q_{i},Q_{i-1})=\bigoplus_{F\colon rkF=i}H^{i}(F,F_{-1}).\] Since \((X_{F},X_{F_{-1}})\) is fixed by \(G_{F}\) and \(EG=E(G_{F})\times E(G/G_{F})\), we have \(H^{i}_{G}(X_{F},X_{F_{-1}})=H^{i}_{G/G_{F}}(X_{F},X_{F_{-1}})\otimes_{\mathbb{Z}_{2}}H^{0}(BG_{F})\). Therefore, it is enough to prove that the map \[\pi^{*}:H^{i}(F,F_{-1})\to H^{i}_{G/G_{F}}(X_{F},X_{F_{-1}})\] is an isomorphism. The action of \(G/G_{F}\) is free on \(X_{F}\backslash X_{F_{-1}}\), therefore \(\pi^{-1}(x)=E(G/G_{F})\) for any \(x\in F\backslash F_{-1}\).
Hence, from a relative version of the Vietoris-Begle Theorem (see [19]), it follows that the induced map \[\pi^{*}:H^{i}(F,F_{-1})\to H^{i}_{G/G_{F}}(X_{F},X_{F_{-1}})\] is an isomorphism. Since the action is equivariantly formal, the Atiyah-Bredon-Franz-Puppe sequence (4.3) is exact. Therefore, the differential complex in the \(0\)-th row \((E_{1}^{p,0},d^{1})\) is exact. [Figure 1: the first page of the spectral sequence (4.4).] Hence, the second page of the spectral sequence (4.4) is as shown in Figure 2. We have \(E_{\infty}^{p,q}=0\) for \(0<p+q\leq n-1\), therefore \(H^{i}(Q)=0\) for \(0<i\leq n-1\). On the other hand, \(Q\) is a topological manifold, hence \(H^{*}(Q)\cong H^{*}(S^{n})\).

## 5. Complexity one actions in the case of small covers

In this section we recall the definition of a small cover and prove Theorems 2 and 3 from the introduction. For details about small covers see [11]. **Definition 5.1**.: Let \(P=P^{n}\) be a simple polytope of dimension \(n\). A small cover over \(P\) is a smooth manifold \(X=X^{n}\) with a locally standard \(\mathbb{Z}_{2}^{n}\)-action such that the orbit space \(X/\mathbb{Z}_{2}^{n}\) is diffeomorphic to the simple polytope \(P\) as a manifold with corners. Let \(\pi:X\to P\) be a small cover over \(P\). For every face \(F\) of \(P\) and for all \(x,y\in\pi^{-1}(F^{\circ})\), the stabilizer groups of \(x\) and \(y\) coincide, i.e. \(Stab(x)=Stab(y)\). Denote this stabilizer group by \(G_{F}\). In particular, if \(F\) is a facet, then \(G_{F}\) is a subgroup of rank one, hence \(G_{F}=\langle\lambda(F)\rangle\) for some \(\lambda(F)\in\mathbb{Z}_{2}^{n}\). Hence, we get _the characteristic function_ \[\lambda:\mathcal{F}\rightarrow\mathbb{Z}_{2}^{n}\] from the set \(\mathcal{F}\) of all facets of \(P\). We denote \(\lambda(F_{i})\) by \(\lambda_{i}\). For a codimension \(k\) face \(F\) we have \(F=F_{1}\cap\cdots\cap F_{k}\) for some facets \(F_{1},\ldots,F_{k}\in\mathcal{F}\); then \(G_{F}\) is a subgroup of rank \(k\) generated by \(\lambda_{1},\ldots,\lambda_{k}\). Therefore, we have the following \((*)\)-condition: \((*)\) let \(F=F_{1}\cap\cdots\cap F_{k}\) be any codimension \(k\) face of \(P\); then \(\lambda_{1},\ldots,\lambda_{k}\) are linearly independent in \(\mathbb{Z}_{2}^{n}\). Conversely, a simple polytope \(P\) and a map \(\lambda:\mathcal{F}\rightarrow\mathbb{Z}_{2}^{n}\) satisfying the \((*)\)-condition determine a small cover \(X(P,\lambda)\) over \(P\). For the construction of \(X(P,\lambda)\) and other details see [11]. **Theorem 5.2** ([11, Prop. 1.8]).: _Let \(X\) be a small cover over \(P\) with characteristic function \(\lambda:\mathcal{F}\rightarrow\mathbb{Z}_{2}^{n}\). Then \(X\) and \(X(P,\lambda)\) are equivariantly homeomorphic._ [Figure 2: the second page of the spectral sequence (4.4).] Let \(X\) be a small cover and let \(G\cong\mathbb{Z}_{2}^{n-1}\) be a subgroup of \(\mathbb{Z}_{2}^{n}\) of index \(2\); then we get a restricted \(G\)-action of complexity one on \(X\). The subgroup \(G\) is called a _\(2\)-subtorus in general position_ if this \(G\)-action is in general position. _Remark 5.3_.: Notice that it is possible that a \(2\)-subtorus in general position does not exist. For example, for the small cover \(\mathbb{R}P^{2}\) over \(\Delta^{2}\) there is no \(2\)-subtorus in general position. However, we will see that if \(X\) is an orientable small cover, then such a \(2\)-subtorus exists.
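The existence question in Remark 5.3 is easy to check by brute force. The sketch below (ours, for illustration only) searches for a linear functional \(\xi\) with \(\xi(\lambda_{i})=1\) for every facet, which is exactly the criterion formalized in Proposition 5.4 below; it finds no such \(\xi\) for \(\mathbb{R}P^{2}\) over the triangle and finds one for \(T^{2}\) over the square.

```python
from itertools import product

def find_general_position_subtorus(char_vectors, n):
    # Search for a nonzero Z2-linear functional xi with xi(lambda_i) = 1
    # for every characteristic vector lambda_i; G = Ker(xi) is then the
    # 2-subtorus in general position (criterion of Proposition 5.4).
    for xi in product((0, 1), repeat=n):
        if any(xi) and all(
                sum(x * l for x, l in zip(xi, lam)) % 2 == 1
                for lam in char_vectors):
            return xi
    return None

# RP^2 as a small cover over the triangle: lambda = e1, e2, e1+e2.
print(find_general_position_subtorus([(1, 0), (0, 1), (1, 1)], 2))  # None
# T^2 as a small cover over the square: lambda = e1, e2, e1, e2.
print(find_general_position_subtorus(
    [(1, 0), (0, 1), (1, 0), (0, 1)], 2))  # (1, 1)
```

The two outputs reflect that \(\mathbb{R}P^{2}\) is non-orientable while \(T^{2}\) is orientable, in agreement with Theorem 5.7.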
Every subgroup \(G\subset\mathbb{Z}_{2}^{n}\) of rank \(n-1\) is determined by some non-zero linear functional \(\xi\in(\mathbb{Z}_{2}^{n})^{*}\) via \(G=\operatorname{Ker}(\xi)\). We have the following criterion for when \(G\) is in general position. **Proposition 5.4**.: _Let \(X=X^{n}\) be a small cover, and let \(\lambda_{i}=\lambda(F_{i})\) be the characteristic vectors. The \(2\)-subtorus \(G=\operatorname{Ker}(\xi:\mathbb{Z}_{2}^{n}\to\mathbb{Z}_{2})\) is in general position if and only if \(\xi(\lambda_{i})=1\), in other words \(\lambda_{i}\notin G\), for every facet \(F_{i}\in\mathcal{F}\)._ Proof.: If \(p=F_{i_{1}}\cap\cdots\cap F_{i_{n}}\) is a vertex of \(P\), then the elements of the dual basis \(\lambda_{i_{1}}^{*},\ldots,\lambda_{i_{n}}^{*}\) are the weights of the tangent representation of \(\mathbb{Z}_{2}^{n}\) at the corresponding fixed point. Therefore, \(G=\operatorname{Ker}(\xi)\) is in general position at this fixed point if and only if any \(n-1\) of the weights \(\lambda_{i_{1}}^{*},\ldots,\lambda_{i_{n}}^{*}\) are linearly independent in \((\mathbb{Z}_{2}^{n})^{*}/\langle\xi\rangle\). Since \(\lambda_{i_{1}}^{*},\ldots,\lambda_{i_{n}}^{*}\) is a basis, we have \(\xi=a_{1}\lambda_{i_{1}}^{*}+\cdots+a_{n}\lambda_{i_{n}}^{*}\) in \((\mathbb{Z}_{2}^{n})^{*}\) for some \(a_{i}\in\mathbb{Z}_{2}\). Therefore, \(a_{1}\lambda_{i_{1}}^{*}+\cdots+a_{n}\lambda_{i_{n}}^{*}=0\) in \((\mathbb{Z}_{2}^{n})^{*}/\langle\xi\rangle\). Any \(n-1\) of the weights \(\lambda_{i_{1}}^{*},\ldots,\lambda_{i_{n}}^{*}\) are linearly independent in \((\mathbb{Z}_{2}^{n})^{*}/\langle\xi\rangle\) if and only if all \(a_{i}\) are non-zero, i.e. \(a_{i}=1\) for every \(i\). The last statement is equivalent to \(\xi(\lambda_{i_{k}})=1\) for all \(i_{k}\). **Corollary 5.5**.: _If \(G\subset\mathbb{Z}_{2}^{n}\) is in general position, then \(Stab(x)\not\subset G\) for every \(x\in X\) with non-trivial stabilizer._ Proof.: Indeed, \(Stab(x)\) is generated by the characteristic vectors \(\lambda_{i}\) of the facets containing \(\pi(x)\), and \(\lambda_{i}\notin G\). _Remark 5.6_.: From the proof we see that there exists only one \(2\)-subtorus in general position. If we choose a vertex \(p=F_{i_{1}}\cap\cdots\cap F_{i_{n}}\), then the corresponding characteristic vectors \(\lambda_{i_{1}},\ldots,\lambda_{i_{n}}\) form a basis of \(\mathbb{Z}_{2}^{n}\). Taking these vectors as generators of \(\mathbb{Z}_{2}^{n}\), we get that \[G=\{(g_{1},\ldots,g_{n})\in\mathbb{Z}_{2}^{n}:\Pi_{i=1}^{n}g_{i}=1\}\] in multiplicative notation. The condition from Proposition 5.4 is related to the following result of H. Nakayama and Y. Nishimura. **Theorem 5.7** ([18, Thm. 1.7]).: _A small cover \(X=X^{n}\) is orientable if and only if there exists \(\xi\in(\mathbb{Z}_{2}^{n})^{*}\) such that \(\xi(\lambda_{i})=1\) for every facet \(F_{i}\in\mathcal{F}\)._ From this result we get Theorem 2 from the introduction. **Corollary 5.8**.: _Let \(X\) be a small cover. There exists a \(2\)-subtorus in general position if and only if \(X\) is orientable._ Now we can prove Theorem 3 from the introduction. **Theorem 5.9**.: _Let \(X=X^{n}\) be an orientable small cover and let \(G\) be the \(2\)-subtorus in general position. Then the orbit space \(X/G\) is homeomorphic to the \(n\)-dimensional sphere \(S^{n}\)._ Proof.: Let \(Q=X/G\) be the orbit space of the \(G\)-action and let \(P=X/\mathbb{Z}_{2}^{n}\) be the orbit space of the \(\mathbb{Z}_{2}^{n}\)-action. By the definition of a small cover, \(P\) is a simple polytope and \(\dim P=n\).
Notice that \(P=Q/(\mathbb{Z}_{2}^{n}/G)\), and we have the quotient map \[p:Q\to Q/(\mathbb{Z}_{2}^{n}/G).\] If \(x\in P\) is a free \(\mathbb{Z}_{2}^{n}\)-orbit, i.e. \(x\) is in the interior of \(P\), then \(p^{-1}(x)\cong\mathbb{Z}_{2}\) (two points). Otherwise, by Corollary 5.5 there exists \(t\in Stab(x)\) such that \(t\notin G\). Therefore, if the \(\mathbb{Z}_{2}^{n}\)-stabilizer group of \(x\) is non-trivial, i.e. \(x\) is in the boundary of \(P\), then every point of \(p^{-1}(x)\) is fixed by the \(\mathbb{Z}_{2}^{n}/G\)-action. Hence, for \(x\in\partial P\) we have that \(p^{-1}(x)\) is a single point. Since \(P\) is contractible, the map \(p:Q\to Q/(\mathbb{Z}_{2}^{n}/G)\) admits a section over the interior of \(Q/(\mathbb{Z}_{2}^{n}/G)=P\). Therefore, we have \[Q\cong P\times\mathbb{Z}_{2}/\sim,\] where \((x,1)\sim(x,-1)\) if \(x\in\partial P\). Since \(P\) is homeomorphic to the \(n\)-disc \(D^{n}\), we have \[Q\cong D^{n}\sqcup D^{n}/\sim\cong S^{n},\] which proves the statement.

## Acknowledgements

The author would like to thank his advisor A. A. Ayzenberg for the invitation to toric topology and for his help and guidance during this work, and T. E. Panov for helpful discussions about small covers. The author would also like to thank Eva Vlasova for her support and for drawing the pictures.
2303.01457
AI as mediator between composers, sound designers, and creative media producers
Musical professionals who produce material for non-musical stakeholders often face communication challenges in the early ideation stage. Expressing musical ideas can be difficult, especially when domain-specific vocabulary is lacking. This position paper proposes the use of artificial intelligence to facilitate communication between stakeholders and accelerate the consensus-building process. Rather than fully or partially automating the creative process, the aim is to give more time for creativity by reducing time spent on defining the expected outcome. To demonstrate this point, the paper discusses two application scenarios for interactive music systems that are based on the authors' research into gesture-to-sound mapping.
Sebastian Löbbers, Mathieu Barthet, György Fazekas
2023-03-02T18:15:03Z
http://arxiv.org/abs/2303.01457v1
# AI as mediator between composers, sound designers, and creative media producers

###### Abstract.

Musical professionals who produce material for non-musical stakeholders often face communication challenges in the early ideation stage. Expressing musical ideas can be difficult, especially when domain-specific vocabulary is lacking. This position paper proposes the use of artificial intelligence to facilitate communication between stakeholders and accelerate the consensus-building process. Rather than fully or partially automating the creative process, the aim is to give more time for creativity by reducing the time spent on defining the expected outcome. To demonstrate this point, the paper discusses two application scenarios for interactive music systems that are based on the authors' research into gesture-to-sound mapping.

Keywords: musical composition.

## 1. Introduction

Composers and sound designers often work with, or for, people who do not have expert musical knowledge. For example, a composer may work with a filmmaker or choreographer to create an original score serving a narrative, and a sound designer may be hired by a company to produce audio branding suiting their corporate identity. In both situations, the two sides are likely to propose and share ideas in an early exploratory stage to define more precisely what the sounds or music should express. A potential bottleneck lies in the imbalance in musical knowledge between the two parties. Individuals can have strong opinions and preferences for sound and music, but when questioned about the underlying reasons, semantic descriptions differ between non-musicians and musicians (Bartner et al., 2013). Compared to non-experts, musicians tend to rely on technical terms instead of everyday language, which can be hard to decode for the other party. While technical language (e.g. "a piece in the Lydian mode") can express ideas to domain experts, it is opaque to other stakeholders. In some cases, non-verbal forms of communication like gestures or body movement might be preferred, e.g. when producers describe spatial audio (Bartner et al., 2013). In practice, these difficulties are overcome by sharing sound and music material that represents the intended ideas and can be referenced in the ideation process. However, this introduces two new issues: (1) how is this material found? Stakeholders may use their own knowledge of existing musical compositions (an "internal" catalogue of music), but this becomes less viable when searching for specific sounds; (2) despite having a reference in mind, a composer or sound designer may create material that ultimately does not suit the assignment. How can ideas be quickly adapted for prototyping, e.g. fitting composed music to a film or choreography, to reduce "wasted" work?
Recent advancements in artificial intelligence (AI) present new opportunities for music professionals, for example by generating audio from text prompts (Boward et al., 2016), creating guitar tablatures in a specific genre (Boward et al., 2016), visualising sound intelligently (Boward et al., 2016) or retrieving audio from graphical sketches (Boward et al., 2016) and gestures (Boward et al., 2016). However, AI research often focuses on the optimisation of quantifiable metrics like audio quality or classification accuracy, rather than exploring its role in "real world" applications. Reflecting on the authors' own work on AI for gesture-to-sound mappings, two potential applications are imagined to aid human-human music and sound ideation. The paper briefly describes these applications in the next section and discusses them in the context of the _Integrating AI in Human-Human Collaborative Ideation_ workshop in Section 3.

## 2. Case Studies

This section presents two interactive music systems, _Liquid Dance_ and _SketchSynth_, and explores their capabilities for collaborative music and sound ideation in two hypothetical case studies that are based on the first author's experience as a professional composer and sound designer.

### Liquid Dance

As illustrated in Figure 1, _Liquid Dance_ uses motion capture and a fluid particle simulation to quantify the intensity of dance movements and automatically assemble musical snippets, creating a suitable score in real-time. It was developed in collaboration with a choreographer for the music AI startup MXX1 to showcase their intelligent music editing software with an interactive performance. At a later stage, an unrelated composition project for dance sparked the idea of using _Liquid Dance_ for rapid testing of musical ideas for choreography. This is described in the following scenario: Footnote 1: [https://www.mxx.ai/](https://www.mxx.ai/)

[Figure 1: _Liquid Dance_ overview. Demonstration available at [https://youtu.be/fYInCkaOZO8](https://youtu.be/fYInCkaOZO8).]

_A choreographer and music composer have chosen several reference music pieces and are now interested in testing their compatibility with a choreographed dance performance. To achieve this, they plan on using a system that will enable them to dance alongside the music pieces, automatically selecting segments that best match the intensity of the dance movements. By using this method, they hope to gain a better understanding of how the selected music will work in tandem with the choreography._

### SketchSynth

_SketchSynth_ is based on the authors' research into cross-modal perception of musical timbre (Steintein et al., 2017; Dosov et al., 2018; Dosov et al., 2018). It employs AI and algorithmic methods to predict sound from a graphical sketch input, as illustrated in Figure 2. The following scenario describes a potential application in industrial sound design:

_A sound designer and product manager are looking for suitable sounds to incorporate into their product. To facilitate this process, they plan on using SketchSynth, which enables non-experts to express their ideas and explore the sonic space through an intuitive mode of interaction. Both parties can literally draw out ideas until reaching a consensus from which the sound designer can elaborate in greater detail._

## 3. Discussion and Conclusion

In both case studies AI acts as a translator between non-verbal expression and sound or music creation.
While an expert human actor is likely capable of creating suitable content from these expressions, a machine can be trained to perform this task much faster, in real-time. This gives stakeholders the opportunity to test initial ideas much faster, focus on their interpersonal relationships, and leave more time for the creative process that is arguably done best by a human professional. In this context, AI is seen strictly as a tool that is expected to reliably perform a complex but well-defined task. _Liquid Dance_ uses a robust but very specific setup that might only be useful for a narrow type of project; _SketchSynth_ tries to find universal sketch-to-sound mappings that might be too broad to communicate sound in sufficient detail. As user needs are likely to change between or even during projects, the scope of AI tools might have to change with them. This raises the question of not only how AI is used during human-human ideation, but also how it is trained and set up. One approach is to choose a model that was trained on a diverse dataset to suit a large number of projects. However, this might hand over too much agency, leading to unexpected behaviour not suitable for the task at hand. Therefore, a user might want to adjust a system in an initial "briefing" stage to prepare for the upcoming task.

[Figure 2: SketchSynth overview. Demonstration available at [https://youtu.be/arSF3iBAUM](https://youtu.be/arSF3iBAUM).]

Using the case studies in this paper as an example: for _SketchSynth_ this could mean confining the space of possible sounds, e.g. to mechanical sounds for an electric car project or digital blips for a mobile game; or for _Liquid Dance_, optimising the interface for a specific dance style, e.g. ballet or contemporary, and conditioning the AI-driven music segmentation on a specific genre like classical or electronic music. Ultimately, it remains for the user to define how much agency they want their system to display during the ideation process. In conclusion, recent AI-driven music research can find valuable application in aiding human-human ideation, for example as a translator for non-verbal descriptions of sound and music. However, AI tools may have to be adjustable to user needs and project requirements to avoid unpredictable or inadequate behaviour.

## 4. Acknowledgments

EPSRC and AHRC Centre for Doctoral Training in Media and Arts Technology (EP/L01632X/1).
2310.19320
A Near-Field Treatment of Aperture Synthesis Techniques using the Murchison Widefield Array
Typical radio interferometer observations are performed assuming the source of radiation to be in the far-field of the instrument, resulting in a two-dimensional Fourier relationship between the observed visibilities in the aperture plane and the sky brightness distribution (over a small field of view). When near-field objects are present in an observation, the standard approach applies far-field delays during correlation, resulting in loss of signal coherence for the signal from the near-field object. In this paper, we demonstrate near-field aperture synthesis techniques using a Murchison Widefield Array observation of the International Space Station (ISS), as it appears as a bright near-field object. We perform visibility phase corrections to restore coherence across the array for the near-field object (however not restoring coherence losses due to time and frequency averaging at the correlator). We illustrate the impact of the near-field corrections in the aperture plane and the sky plane. The aperture plane curves to match the curvature of the near-field wavefront, and in the sky plane near-field corrections manifest as fringe rotations at different rates as we bring the focal point of the array from infinity to the desired near-field distance. We also demonstrate the inverse scenario of inferring the line-of-sight range of the ISS by inverting the apparent curvature of the wavefront seen by the aperture. We conclude the paper by briefly discussing the limitations of the methods developed and the near-field science cases where our approach can be exploited.
Steve Prabu, Steven J. Tingay, Andrew Williams
2023-10-30T07:36:50Z
http://arxiv.org/abs/2310.19320v1
# A Near-Field Treatment of Aperture Synthesis Techniques using the Murchison Widefield Array

###### Abstract

Typical radio interferometer observations are performed assuming the source of radiation to be in the far-field of the instrument, resulting in a two-dimensional Fourier relationship between the observed visibilities in the aperture plane and the sky brightness distribution (over a small field of view). When near-field objects are present in an observation, the standard approach applies far-field delays during correlation, resulting in loss of signal coherence for the signal from the near-field object. In this paper, we demonstrate near-field aperture synthesis techniques using a Murchison Widefield Array observation of the International Space Station (ISS), as it appears as a bright near-field object. We perform visibility phase corrections to restore coherence across the array for the near-field object (however not restoring coherence losses due to time and frequency averaging at the correlator). We illustrate the impact of the near-field corrections in the aperture plane and the sky plane. The aperture plane curves to match the curvature of the near-field wavefront, and in the sky plane near-field corrections manifest as fringe rotations at different rates as we bring the focal point of the array from infinity to the desired near-field distance. We also demonstrate the inverse scenario of inferring the line-of-sight range of the ISS by inverting the apparent curvature of the wavefront seen by the aperture. We conclude the paper by briefly discussing the limitations of the methods developed and the near-field science cases where our approach can be exploited.

techniques: interferometric - radio continuum, techniques: image processing, instrumentation: interferometers

Footnote †: journal: Accepted in ApJ

## 1 Introduction

Using three basic assumptions, conventional aperture synthesis theory derives a 2D Fourier relationship (Thompson, Moran, and Swenson, 2017; Marr, Snell, and Kurtz, 2015) between the visibilities sampled by an interferometer in the aperture plane and the sky brightness distribution. The three assumptions are: that a narrow bandwidth is used; that the object being observed is in the far-field; and that a narrow Field of View (FOV) is being imaged. Techniques such as Multi-Frequency Synthesis (Sault and Wieringa, 1994; Conway, Cornwell, and Wilkinson, 1990; Rau and Cornwell, 2011) and W-Stacking (Offringa, McKinley, Hurley-Walker, et al., 2014; Offringa and Smirnov, 2017) have been developed to overcome the bandwidth and FOV limits; in this paper, we develop techniques and tools to observe objects in the near-field of the instrument, building upon previous work. We demonstrate near-field imaging techniques using the Murchison Widefield Array (MWA) (S. J. Tingay et al., 2013; Wayth et al., 2018), whose long (6 km) baselines see objects in Low Earth Orbit (LEO) in the near-field (for the frequency range that the MWA operates in).
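To see why, a minimal sketch (our own check, assuming the conventional Fraunhofer far-field criterion \(R_{ff}=2D^{2}/\lambda\) and the 6 km maximum baseline quoted above; neither the criterion nor the code comes from the paper):

```python
# Far-field (Fraunhofer) distance R_ff = 2 D^2 / lambda for the longest MWA baseline.
C = 299_792_458.0   # speed of light, m/s
D = 6_000.0         # longest baseline, m

for freq_mhz in (80.0, 150.0, 300.0):
    lam = C / (freq_mhz * 1e6)     # wavelength, m
    r_ff = 2.0 * D**2 / lam        # far-field distance, m
    print(f"{freq_mhz:5.0f} MHz: R_ff ~ {r_ff / 1e3:,.0f} km")
```

Across 80-300 MHz this gives \(R_{ff}\) of roughly 19,000-72,000 km, so LEO objects at a few hundred km sit well inside the near field of the long baselines.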
The MWA is a radio interferometer built as a precursor to the low-frequency component of the Square Kilometre Array (SKA) and is located in the radio-quiet region of the Murchison Shire in Western Australia, at _Inyarrimanha Ilgari Bundara_, the CSIRO Murchison Radio-astronomy Observatory. The MWA is capable of observing between \(80-300\) MHz with an instantaneous bandwidth of \(30.72\) MHz (each element in the interferometer is an aperture array composed of dual-polarised bow-tie antennas arranged in a 4x4 format, a so-called "tile").

### Previous work

Aperture synthesis studies in the near field have been performed by many groups with varying objectives. While signals in the near-field are often considered a source of interference in astronomical observations (Wang et al., 2021; S. Tingay et al., 2020), Finlay et al. 2023 have recently developed techniques that use satellite signals to perform calibration of the instrument. Spacecraft enthusiasts among the VLBI community have successfully been able to track the near-field transmissions from satellites within the solar system (Duev et al., 2012; Lanyi, Bagri, and Border, 2007). In the "ultra" near-field, RFI has also been localized to nearby transmission lines2, electric cars being charged3, and new LED lamps being installed in nearby farmhouses4.

[MISSING_PAGE_POST]

... from LEO satellites, we use a single 2 min observation of the International Space Station (ISS) as the near-field target of choice in this work. Using our developed near-field correction tools, we then infer the range of the ISS across multiple time steps by inverting the focal distance that provided the maximum signal to noise on the source.

This paper is structured as follows. We describe the data used and our methods in Section 2, and our results in Section 3. The discussion and conclusions are in Sections 4 and 5, respectively.

## 2 Observations and Methods

### Data Pre-Processing and Calibration

We use a single 2 min Phase 3 MWA observation (obs ID 1333747192, and henceforth referred to as the target observation) of the ISS in the FM band. We downloaded the data from the All-Sky Virtual Observatory1 (ASVO), with 0.25 s time-averaging and 40 kHz frequency averaging. The back-end of ASVO uses Birli2 to convert MWA correlator output files to the CASA defined measurement set format (McMullin et al., 2007).

Footnote 1: [https://asvo.mwatelescope.org/](https://asvo.mwatelescope.org/)

We calibrate the instrument by using an observation (obs ID 1333747784) of the radio galaxy Hercules A, also downloaded from ASVO using the same parameters as the target observation. We perform RFI flagging on the calibration observation using the AOFLAGGER tool (Offringa et al., 2015). Using the calibrate tool (Offringa et al., 2016) we perform a preliminary round of calibration using the source model3 of the Hercules A radio galaxy. As Hercules A is the brightest source in the FOV of the observation, we obtain a reasonably good initial amplitude and phase calibration solution. Using WSClean (Offringa, McKinley, Hurley-Walker, et al., 2014; Offringa and Smirnov, 2017) and the preliminary calibration solution, we perform a round of self-calibration to obtain better calibration solutions. The final set of calibration solutions is then transferred to the target observation.
Footnote 2: [https://github.com/MWATelescope/Birli](https://github.com/MWATelescope/Birli)

Footnote 3: [https://github.com/StevePrabu/MWA-ORBITAL/blob/master/models/model-HerA-27comp.wwithalpha.txt](https://github.com/StevePrabu/MWA-ORBITAL/blob/master/models/model-HerA-27comp.wwithalpha.txt)

### Near-field Correction using LEOLens

The radiation wavefronts originating from near-field objects appear curved when viewed using the long baselines of the MWA. During correlation most interferometers, including the MWA, assume the sources to be in the far field of the instrument, thus resulting in a loss of coherence in the near-field signal due to de-correlation. However, we can recover near-field phase coherence by re-arranging the fringes (phase rotation of visibilities) projected by the interferometer in the plane of the sky. We apply this near-field correction to the target visibilities using a python-casacore tool we call LEOLens, explained in the following paragraphs.

In Figure 1, we show a Topocentric Cartesian Coordinate (TCC) system centered at the geometrical centre of an interferometer, such that \(\hat{x}\) points East, \(\hat{y}\) points towards the zenith, and \(\hat{z}\) points North. Having obtained the coordinates of every MWA tile in this coordinate system, we next obtain the coordinates of the near-field object. The TCC coordinates of the near-field object \(\left(f_{x},f_{y},f_{z}\right)\) can be obtained from its azimuth angle (\(\theta\)), elevation angle (\(\phi\)), and range \(\left(f_{dist}\right)\) from the origin,

\[\begin{array}{l}f_{x}=f_{dist}\times\cos(\phi)\times\sin(\theta),\\ f_{y}=f_{dist}\times\sin(\phi),\\ f_{z}=f_{dist}\times\cos(\phi)\times\cos(\theta).\end{array} \tag{1}\]

For every baseline between antennas \(A_{i}\left(X_{i},Y_{i},Z_{i}\right)\) and \(A_{j}\left(X_{j},Y_{j},Z_{j}\right)\), the delay (or the w-term) for the baseline can be obtained using Equation 2.

\[\begin{array}{l}r_{i}=\sqrt{(f_{x}-X_{i})^{2}+(f_{y}-Y_{i})^{2}+(f_{z}-Z_{i})^{2}},\\ r_{j}=\sqrt{(f_{x}-X_{j})^{2}+(f_{y}-Y_{j})^{2}+(f_{z}-Z_{j})^{2}},\\ w_{near-field,i,j}=r_{j}-r_{i}.\end{array} \tag{2}\]

As the voltage streams from both antennas are already correlated using far-field delays \(\left(w_{far-field,i,j}\right)\) by the correlator, LEOLens updates every visibility measurement using the near-field delay. The capacity to define a delay (or w-term) for a baseline comes from having set a phase reference in the sky plane. Hence, any changes in the w-term (such as due to the near-field corrections done here) will rotate the baseline phase. This phase correction is applied by LEOLens to the visibility using Equation 3.

\[\Delta\phi_{i,j}=\exp\left(2\pi i\,\frac{\Delta w_{i,j}}{\lambda}\right), \tag{3}\]

where \(\lambda\) is the wavelength of radiation and \(\Delta w_{i,j}=w_{near-field,i,j}-w_{far-field,i,j}\). \(\Delta\phi_{i,j}\) is the phase factor applied to the visibility on the baseline formed by antennas \(A_{i}\) and \(A_{j}\). We illustrate the impact of near-field corrections in the aperture plane and the sky plane in Appendix A.

## 3 Results

Having described our near-field correction method, we present our results in three steps. In Section 3.1, we show the MWA near-field images made for a wide range of focal distances, followed by a null test in Section 3.2.
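Before turning to those results, the correction in Equations 1-3 is compact enough to sketch in a few lines. The following is a minimal numpy illustration written for this text, not the released LEOLens code (which applies the same phase rotation to measurement sets through python-casacore); the function name and array conventions are our own:

```python
import numpy as np

def nearfield_phase_factors(ant_xyz, az, el, f_dist, lam):
    """Per-baseline factors exp(2*pi*i*dw/lam) that refocus far-field-correlated
    visibilities onto a near-field point (Equations 1-3).

    ant_xyz : (n_ant, 3) tile positions in the TCC frame (x East, y up, z North), m
    az, el  : azimuth theta and elevation phi of the object, rad
    f_dist  : range of the object from the array centre, m
    lam     : observing wavelength, m
    """
    # Equation 1: TCC coordinates of the near-field point.
    f = f_dist * np.array([np.cos(el) * np.sin(az),
                           np.sin(el),
                           np.cos(el) * np.cos(az)])
    # Equation 2: near-field w-term, w_ij = r_j - r_i.
    r = np.linalg.norm(f - ant_xyz, axis=1)          # true antenna-object distances
    w_near = r[None, :] - r[:, None]                 # (n_ant, n_ant)
    # Far-field w-term: baseline projected onto the unit vector towards the object.
    proj = ant_xyz @ (f / f_dist)
    w_far = proj[:, None] - proj[None, :]
    # Equation 3: phase rotation for each visibility V_ij.
    return np.exp(2j * np.pi * (w_near - w_far) / lam)
```

Multiplying each visibility \(V_{i,j}\) by the corresponding factor moves the focal point of the array from infinity to \(f_{dist}\); a convenient sanity check is that for very large \(f_{dist}\) the factors tend to unity, recovering the far-field data.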
Having developed confidence in the method using the null test, we then proceed to demonstrate results from our range estimation method in Section 3.3.

### Near-Field Images

Using a single time-step during which the ISS was detected through FM reflection, we show near-field images at varying focal distances (Figure 2). We focus the array over a wide range of distances (\(10,000\) km, \(1000\) km, \(500\) km, and \(50\) km) using LEOLens and then create images using WSClean. A distance of \(10,000\) km is in the far-field of the instrument, and hence the image is not noticeably different from the image produced without any near-field corrections. As we bring the focus of the array to much closer distances, at about \(500\) km we see a streak-like signal from the ISS, as coherence is obtained. The ISS signal is again de-correlated as we bring the focal distance to \(50\) km.

In the \(10,\!000\) km image of Figure 2, we also see a point source (background radio galaxy) whose location in the sky is shown using the white arrow. We note that the point source is de-correlated as we bring the focal distance to smaller distances, in accordance with our expectations. Due to the source being unresolved, all baselines respond equally to the point source across the aperture plane. As the Phase 3 MWA extended array has predominantly long baselines, which undergo significant delay correction (or rotation of fringes in the sky) as we change the focal distance, the point source is de-correlated. Conversely, the overall phase structure of any extended source in the observation is expected to remain preserved for a wider range of focal distances, as the structure of an extended source is sampled by the shorter baselines that do not undergo significant delay corrections3.

Footnote 3: an animation of near-field corrections applied to an extended source can be found here [https://www.youtube.com/watch?v=sqiefj](https://www.youtube.com/watch?v=sqiefj)[Y]CAo. As the observation used in the animation is an MWA Phase 1 observation, and was processed differently to the other observations used here, we do not put it in the main body of the manuscript.

### The Near-Field Null Test

We build confidence in our near-field techniques by performing a null test described below. We select a fine-frequency channel (\(\nu_{1}\)) containing the ISS FM reflection signal and difference the visibilities with an adjacent fine-frequency channel (\(\nu_{2}\)) that did not have any FM reflection. Doing so isolates the signal of interest (the ISS FM reflection) from the background astronomical sources, and we use it to perform the null test. Due to the close proximity of the two selected channels, the instrument's response to the background sky (\(S_{sky}\) below) can be considered to be identical, so the difference subtracts the sky's contribution to the measured visibilities. Also, due to the closeness of the two channels, the instrument's response is not noticeably chromatic and does not leave behind artifacts while differencing.

Figure 1: The Topocentric Cartesian Coordinate (TCC) system used in this work to calculate near-field corrections.
Mathematically, the frequency differenced visibilities can be represented as follows

\[\begin{array}{l}V(\nu_{1})=e(\nu_{1})+S_{ISS}+S_{sky}(\nu_{1}),\\ V(\nu_{2})=e(\nu_{2})+S_{sky}(\nu_{2}),\\ \Delta V=V(\nu_{1})-V(\nu_{2}),\\ \Delta V\approx e_{1,2}+S_{ISS}\quad[\because S_{sky}(\nu_{1})\approx S_{sky}(\nu_{2})].\end{array} \tag{4}\]

In the above equations, due to the noise [\(e(\nu_{1})\) and \(e(\nu_{2})\)] in the two channels being uncorrelated, the ISS signal (\(S_{ISS}\)) should be detected above a Gaussian noise distribution. We focus the differenced visibilities to \(500\,\mathrm{km}\) (approx. range of the ISS) and show the corresponding image and phase distribution of the visibilities in the top three panels of Figure 3. Due to the ISS signal being at the phase centre of the image, the phases of the differenced visibilities cluster near \(0^{\circ}\) and \(180^{\circ}\), as expected.

In order to test the reliability of our near-field correction technique, we create a different set of frequency differenced visibilities, but this time neither of the two channels shows significant ISS signal, and we focus the differenced visibilities to \(500\,\mathrm{km}\) as before. As neither of the two channels has ISS signal in them, we expect the differenced visibilities to show noise-like properties. The image and the phase distribution of this new set of differenced visibilities are shown in the bottom three panels of Figure 3. From Figure 3 we see that the phases of the visibilities are randomly distributed (as would be expected for noise) and show no coherent signals in the reconstructed image, thus building confidence in the near-field techniques and software developed in this work.

### Near-Field Object Range Inference

In Section 3.1, we demonstrated being able to focus on a near-field object using prior knowledge of its distance from the geometric centre of the MWA. In this section, we demonstrate the inverse problem of inferring the line of sight range to the object. We focus the array at a wide range of trial focal distances. The distance that provides the maximum coherence (measured as signal-to-noise ratio (SNR) in the images) on the source is assumed to be the range to the object.

We demonstrate our ranging method using the frequency differenced visibilities described in Section 3.2. For every time-step that the ISS was detected, we change the phase-centre of the visibilities to have the ISS signal at the centre of the image. We then focus the differenced visibilities to a wide range of trial focal distances and plot the SNR of the ISS signal in the top-right panel of Figure 4. As we obtain maximum coherence on the ISS signal when the assumed focal distance matches the true distance, we use the peak of the SNR vs focal distance curve as a proxy for the range measurement. In the bottom-left panel of Figure 4, we plot our range measurements for the ISS across multiple time-steps and compare them to the range predicted from the Two Line Element (TLE) data for the ISS published by space-track.org during the epoch of the observation. As we know the azimuth, elevation, and range of the ISS from our observations, we are able to track the ISS trajectory in 3D space with respect to the MWA, shown in the bottom-right panel of Figure 4.

#### 3.3.1 Modelling from First Principles

The top right panel of Figure 4 shows that the signal recovered as a function of the assumed focal distance is quite distinctive.
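The ranging procedure just described is, in effect, a one-dimensional grid search over focal distance. A schematic sketch follows, in which `refocus` and `image_snr` are hypothetical stand-ins for the LEOLens correction and the imaging/SNR measurement, and the sampling values are illustrative only:

```python
import numpy as np

def estimate_range(vis, refocus, image_snr, trial_dists):
    """Refocus the (frequency-differenced) visibilities at each trial focal
    distance, image them, and return the distance that maximises the SNR."""
    snr = np.array([image_snr(refocus(vis, d)) for d in trial_dists])
    return trial_dists[int(np.argmax(snr))], snr

# A coarse sweep followed by ~5 km steps near the peak, echoing Figure 4:
coarse = np.geomspace(5e4, 1e7, 50)    # 50 km to 10,000 km, in metres
fine = np.arange(4.5e5, 5.5e5, 5e3)    # 450-550 km at 5 km steps, in metres
```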
Thus, while identifying the peak signal to noise of this function provides an estimate of the optimal focal distance, it would be better to model the entire function and use all of the measurement data. We briefly consider the plausibility of forming such a model in this section.

A model to describe the data shown in Figure 4 needs to take into account the interferometric response of the MWA, as a function of assumed focal distance to an object. We have tested a model that utilises the known relative positions of the MWA tiles, and calculates the interferometric response of each baseline (pair of tiles) in the array for an object at an arbitrary location with respect to those tiles, for different values of the focal distance. In this case, errors in the delay and delay rate, relative to the true values, will cause a loss of coherence in an observation that is averaged in time and frequency.

Figure 2: MWA images for four different focal distances. In all four panels, the green arrows show the location of the ISS signal and the white arrow shows the location of a background astronomical point source. We note that at large focal distances, the objects in the near-field (e.g. the ISS) appear de-correlated, and at much closer focal distances the background far-field sources appear de-correlated. An animation of this Figure is available at [https://www.youtube.com/watch?v=sqiej](https://www.youtube.com/watch?v=sqiej)[y]JC.Ao.

These errors vary as a function of the length and orientation of the baseline, relative to the direction of the object. As such, across a given time-step, the model requires three parameters to describe the starting position of the object in the array's frame of reference as well as three parameters to describe its finishing position. When described in spherical coordinates, the parameters can be expressed as a range, azimuth, and elevation.

The six parameter model can be fit to observational data, such as in Figure 4. In practice, the parameter space for the fit has many local minima and least squares methods struggle to approach the global minimum. Thus, we have attempted a brute force grid search of the parameter space, which is computationally expensive. For example, implemented in python, a single trial for a single grid point takes \(\sim\)10 seconds on a single CPU. The resolution of the grid needs to be high, as the array will be coherent when all differential delays are correct to within a fraction of a wavelength (\(\sim\)3 m), and the grid range also needs to be large, as the inherent accuracy of our starting point from a TLE is \(\sim\)1 km. Thus, that translates to \(\sim\)10\({}^{18}\) trials, or \(\sim\)10\({}^{19}\) seconds, or roughly 300 billion years on a single CPU, which is clearly not feasible. Optimised code would assist to reduce the compute time. We have implemented trials using a far coarser grid as a test to see if promising regions of parameter space can be identified for further investigation, which are being run on large high performance computing clusters, and produce output that closely resembles the behaviour of the observational data. Further refinements are a work in progress.

## 4 Discussion

### Visibility amplitudes

While we have successfully demonstrated the recovery of phase coherence in the near-field, we have not made any comments about the amplitude of the visibilities. For astronomical sources, the amplitude of the electric field seen by both antennas of a baseline can be assumed to be identical due to the large distance to the radiating source.
However, for near-field objects, the antennas see different amplitudes due to the different path lengths between the object and the antennas, as derived below.

Figure 3: The top three panels show the reconstructed image, visibility phases plotted against baseline length, and histogram distribution of the visibility phases for the frequency differenced visibilities obtained from differencing a channel with ISS FM signal from an adjacent channel with no FM signal. The bottom three panels show the same, but for frequency differenced visibilities obtained for two channels, neither of which had an ISS reflection signal. An animation of the figure for a wide range of focal distances can be obtained at [https://www.youtube.com/watch?v=haPl-iFX6bY](https://www.youtube.com/watch?v=haPl-iFX6bY)

Figure 4: The top-left panel shows the image with visibilities focused in the far field for one of the time steps considered. The image is also phase-centred at the default pointing centre of the observation. The insert panel (top-middle) shows the phase-tracked image of the ISS. In the top-right panel, we show the SNR of the ISS signal when focusing the array to a wide range of focal distances for a single timestep. Having performed this across multiple timesteps, we plot the estimated line of sight range in the bottom-left panel. Close to the actual range of the ISS, we attempt focussing at 5 km intervals, and hence we use 5 km as the error in the bottom-left plot. Using the estimated azimuth, elevation, and range of the ISS, we are able to track its trajectory in 3D, as shown in the bottom-right panel. An animation of the Figure can be obtained from [https://www.youtube.com/watch?v=90vksNIvA](https://www.youtube.com/watch?v=90vksNIvA)

Consider again a baseline between two antennas (\(A_{i}\) and \(A_{j}\)) observing an isotropically radiating source of luminosity \(L\) (J/s) at distances \(r_{i}\) and \(r_{j}\) from the antennas, respectively. If the effective collecting areas of the two antennas are \(A(l_{i},m_{i})\) and \(A(l_{j},m_{j})\), where \(l\) and \(m\) are orthogonal direction cosines of the near-field source with respect to the antenna, the powers measured by the antennas are,

\[\begin{split} P_{i}&=L\times\frac{1}{4\pi r_{i}^{2}}\times A(l_{i},m_{i})\\ P_{j}&=L\times\frac{1}{4\pi r_{j}^{2}}\times A(l_{j},m_{j})\end{split} \tag{5}\]

If the radiating source is observed in a direction that is not orthogonal to the baseline, there is a propagation path length difference (\(\Delta r\)) between the source and the two antennas. However, due to geometrical reasons, the path length difference cannot exceed4 the actual physical distance (\(b\)) between the two antennas (for example, the MWA's longest baseline is 6 km long, and \(\Delta r\) for the baseline is \(\leq 6\) km). The two different propagation path lengths are defined as follows,

Footnote 4: not true for sources very close to the instrument with curved wave-fronts, but this is a good first-order guide for the magnitude of \(\Delta r\).
\[\begin{array}{l}r_{i}=r\\ r_{j}=r+\Delta r_{i,j}\quad\text{(where $\Delta r_{i,j}\leq b$)}\end{array} \tag{6}\]

Using Equations 5 and 6, the ratio of powers measured by the two antennas is given by,

\[\begin{array}{l}P_{ratio}=P_{i}/P_{j}\\ P_{ratio}=\frac{A(l_{i},m_{i})}{A(l_{j},m_{j})}\times\frac{r_{j}^{2}}{r_{i}^{2}}\\ P_{ratio}=\frac{A(l_{i},m_{i})}{A(l_{j},m_{j})}\times\left[\frac{r+\Delta r}{r}\right]^{2}\end{array} \tag{7}\]

For astronomical sources, \(\lim_{r\rightarrow\infty}P_{ratio}\approx\frac{A(l_{i},m_{i})}{A(l_{j},m_{j})}\), and the two antennas only see holographic effects, but for objects in the near-field (where \(\Delta r\) is comparable to \(r\)) the two antennas see significantly different powers from the radiating source. We do not account for this effect in our ISS analysis and find it to be about \(P_{ratio}=0.986\) (r=500 km, \(\Delta r\)=3.35 km) for the longer baselines.

### _Confusion from near-field sources_

In most radio-astronomy observations, objects in the near-field constitute Radio Frequency Interference (RFI). A common practice to mitigate RFI is to flag the data (channels, times, and/or baselines) that respond to the RFI signal (Ford and Buch, 2014). However, with the advent of satellite mega-constellations, the radio sky is getting increasingly polluted, and we may reach a time in the future when flagging is no longer an affordable option due to the constant presence of satellite signals in the data. An alternative RFI mitigation strategy would be to subtract or 'peel' the RFI's signal contribution (Perley and Cornwell, 2003) from the visibility matrix, demonstrated recently by Finlay et al. 2023 using a Bayesian framework to subtract satellite RFI from MeerKAT (Jonas and Team, 2016) simulated data. However, based on the near-field aperture synthesis understanding developed in this work, we comment that the peeling of near-field objects without any near-field corrections could leave behind residual sidelobe confusion noise in the images, explained further below.

We select five different fine channels (one with a strong ISS reflection signal and the others without any noticeable ISS signal), focus the visibilities to a wide range of focal distances, and plot the corresponding residual noise maps in Figure 6. In a perfectly cleaned image, the residual map should just contain contributions from thermal noise (due to the instrument operating at a non-absolute-zero temperature) with a random distribution of phases, much like the bottom panels of Figure 3, and the noise level would not be expected to change with any phase-rotation that may be applied to the visibilities. We see from Figure 6 that for the channel with the ISS signal the noise in the residual map is the lowest when the fringes are rotated to the correct focal distance of the near-field RFI, and for all other fringe positions (or focal distances) the cleaning process leaves behind residual noise due to imperfect subtraction of the near-field source. In the other four channels with no ISS signal, no astronomical sources are detected in 0.25 s time-averaged fine-channel data (due to very large thermal noise), and the residual map noise levels do not noticeably change with focal distance, thus demonstrating the importance of near-field considerations while subtracting near-field RFI objects from the visibility matrix. To properly reach the lowest noise levels in the data when mitigating RFI, the peeling process has to be performed in the near field.
We also note that the lowest noise in the channel with the ISS signal is higher than in the other channels, possibly due to increased system noise in this channel (as the ISS signal is a few thousand Jy).

### _Fringe washing_

While longer baselines have better range resolution using our method (as they see more change in wavefront curvature), as previously discussed in Prabu et al. 2022, the signals from fast-moving objects get blurred into smears (fringe-washed) when they move more than a synthesized beam during the visibility integration time. The phase measurement of a source by a baseline depends on the source's position in the fringe pattern projected by the baseline in the sky plane. However, for fast-moving objects such as satellites, the source moves through many fringes within the time-averaging interval of the correlator, and hence we obtain an averaged phase measurement.

Figure 5: We show the ratio of powers seen by a baseline with (\(A_{1}\neq A_{2}\)) and without (\(A_{1}=A_{2}\)) holographic effects. For this plot, we assume \(\Delta r=2\) km (the approximate delay for a 6 km baseline observing a source 20 degrees from the zenith).

Hence, even though we have developed techniques in this work to account for the near-field curvature perceived by the long baselines, the reduction in SNR due to fringe-washing is not recoverable. For example, for a satellite at \(400\,\mathrm{km}\) moving with an angular speed of \(1^{\circ}/s\) near the zenith, approx. \(75\%\) of the MWA extended configuration baselines are fringe-washed, while even the longest baseline of MWA's compact configuration would not be affected. For a more detailed discussion on fringe-washing, we direct the reader to Prabu et al. 2022.

### Future Work

Near-field tools and techniques can be used in a wide range of science cases. We currently have plans to use LEOLens to search for low-frequency intrinsic radio emission from meteors, previously only detected by Obenberger et al. 2014 using the Long Wavelength Array (Taylor et al., 2012). Zhang et al. 2018 previously attempted to detect the meteor emission using a 322 hour MWA observation campaign, but no candidates were identified. The study discarded all the longer MWA baselines and just used the short baselines to mitigate the near-field effect, and hence was performed at a much-reduced sensitivity. Given that we now have tools to achieve coherence on near-field objects using the long baselines, we aim to perform a more sensitive search for intrinsic meteor emission with the MWA.

A natural extension of the discussion on residual confusion noise from near-field objects in Section 4.2 is to develop a near-field RFI peeling capability in conjunction with LEOLens. Our preliminary peeling test with the ISS observation used here shows promising results, and we aim to develop this peeling method into a tool for the wider community in the future.

## 5 Conclusions

In this paper, we explore near-field aperture synthesis techniques using a single observation of the ISS with the Murchison Widefield Array. For desired focal distances, we calculate the appropriate delays for every baseline which would bring the near-field signal into focus in the reconstructed image. We illustrate the effect of near-field corrections in the aperture plane that results in 'curving' the array through additional delays such that the incoming near-field wavefront falls coherently on the array.
This delay correction in the aperture plane translates to rotating fringes at different rates in the sky plane. As longer baselines see more of the near-field curvature, they undergo larger delay corrections (or fringe rotations) when compared to the short baselines. Having developed a python tool that performs near-field corrections to the input interferometric dataset, we use it to demonstrate the inverse problem of inferring the range of the near-field object from the apparent curvature of the radiation as seen by the array. We do so by trialing many focal distances to make a focused image of the near-field signal. When the assumed focal distance is equal (or approximately equal) to the true distance, the reconstructed image has the highest SNR. We demonstrate this using the ISS observation, and the obtained ranges are in agreement with the distances predicted by its Two Line Elements.

We conclude the paper by discussing the limitations of the near-field methods used in this work. For objects that are very close to the array (distances comparable to its longest baseline length), the near-field signal undergoes different amounts of propagation fading, resulting in different powers seen by the two antennas of a baseline. We also find that the peeling of near-field RFI from interferometric data can leave behind residual confusion noise when not accounting for the near-field effects.

## Acknowledgement

This scientific work uses data obtained from Inyarrimanha Ilgari Bundara / the Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamaji People as the Traditional Owners and native title holders of the Observatory site. Establishment of CSIRO's Murchison Radio-astronomy Observatory is an initiative of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. This work was supported by resources provided by the Pawsey Supercomputing Research Centre with funding from the Australian Government and the Government of Western Australia. Steve Prabu would also like to thank Adrian Sutinjo for the valuable comment on amplitude fading during the CIRA Journal Club talk on near-field aperture synthesis.

Figure 6: Noise levels in residual maps for five different channels focused to a wide range of focal distances. Note that the noise is lowest in the channel with the ISS when the focal distance is approx. \(500\,\mathrm{km}\) (also the line of sight range to the ISS).

#### Software

We acknowledge the work and the support of the developers of the following Python packages: Astropy (The Astropy Collaboration, Robitaille, and Tollerud, 2013; Astropy Collaboration, Price-Whelan, and Sipocz, 2018), Numpy (van der Walt, Colbert, and Varoquaux, 2011), Scipy (Jones, Oliphant, Peterson, et al., 2001), matplotlib (Hunter, 2007), and python-casacore1. The work also used WSClean (Offringa, McKinley, Hurley-Walker, et al., 2014; Offringa and Smirnov, 2017) for making fits images and DS92 for visualization purposes.

Footnote 1: [https://github.com/casacore/python-casacore](https://github.com/casacore/python-casacore)

Footnote 2: [https://ds9.si.edu/site/Home.html](https://ds9.si.edu/site/Home.html)

## Appendix A Near-Field Correction in Aperture Plane vs Sky Plane

We illustrate the effect of near-field correction in the aperture plane (u,v,w) and the sky plane (l,m) using single FM band fine channel data for one of the time steps in which the ISS was detected through reflection. In the aperture plane, the near-field correction can be thought of as 'curving' the array to match the near-field wavefront such that the near-field signal falls coherently on the array. We show this in the four panels of Figure 7. For four arbitrarily chosen focal distances (\(10,000\,\mathrm{km}\), \(1000\,\mathrm{km}\), \(500\,\mathrm{km}\), and \(50\,\mathrm{km}\)) we show the absolute delay correction performed by LEOLens to the approx. \(8000\) instantaneous baselines of the MWA. We note from Figure 7 that, for the wide range of focal distances considered, the short baselines do not go through much delay correction, as they would still see the objects in the far field. On the contrary, the long baselines have larger delay and delay rate (slope of delay vs focal distance) corrections as we bring the focal distance closer to the instrument.

In the sky plane, the near-field corrections result in rotating the fringes at different rates across the sky as we bring the focal distance from a faraway distance to shorter focal distances. We illustrate this using an animation provided at [https://www.youtube.com/watch?v=3HUGVY_nfU](https://www.youtube.com/watch?v=3HUGVY_nfU). In the top-left and top-middle panels of the animation, we show the fringe produced by a long baseline and a short baseline (the baseline lengths were chosen arbitrarily to aid the demonstration). As we bring the focal distance of the array from the far-field to much closer distances, the fringes are rotated at different rates in the sky. The bottom-left and bottom-middle panels show the delay corrections performed to the long and short baseline as we change the focal distance. The MWA has about \(8000\) instantaneous baselines, and in the bottom-right panel we show the combined effect of all the rotated fringes as we change the focal distance. When the assumed focal distance matches the range of the ISS (about \(500\,\mathrm{km}\)), all the fringes coherently produce the image of the ISS with the maximum possible SNR.
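As a companion to Figure 7, the magnitude of the correction \(\Delta w = w_{near-field} - w_{far-field}\) can be reproduced for an idealised single baseline. This is an illustrative toy geometry with one tile placed at the array centre; none of these numbers are taken from the observation:

```python
import numpy as np

def delta_w(b_len, f_dist, az=np.pi / 2, el=np.pi / 4):
    """|w_near - w_far| (m) for a baseline of length b_len (m) along East,
    one tile at the array centre, object at (az, el) and range f_dist (m)."""
    ants = np.array([[0.0, 0.0, 0.0], [b_len, 0.0, 0.0]])
    f = f_dist * np.array([np.cos(el) * np.sin(az),   # Equation 1
                           np.sin(el),
                           np.cos(el) * np.cos(az)])
    r = np.linalg.norm(f - ants, axis=1)
    w_near = r[1] - r[0]                              # Equation 2
    proj = ants @ (f / f_dist)
    w_far = proj[0] - proj[1]                         # far-field w-term
    return abs(w_near - w_far)

for d_km in (10_000, 1_000, 500, 50):
    print(f"focus {d_km:>6} km: 100 m baseline {delta_w(100.0, d_km * 1e3):8.4f} m, "
          f"6 km baseline {delta_w(6_000.0, d_km * 1e3):8.1f} m")
```

The short baseline stays well below a wavelength at all focal distances, while the 6 km baseline accumulates tens to hundreds of metres of \(\Delta w\) at LEO-like distances, mirroring the behaviour of the long baselines in Figure 7.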
2306.02958
Constraining quantum fluctuations of spacetime foam from BBN
A possibility to describe quantum gravitational fluctuations of the spacetime background is provided by virtual $D$-branes. These effects may induce a tiny violation of the Lorentz invariance (as well as a possible violation of the equivalence principle). In this framework, we study the formation of light elements in the early Universe (Big Bang Nucleosynthesis). By using the Big Bang Nucleosynthesis observations, we infer an upper bound on the topological fluctuations in the spacetime foam vacuum $\sigma^2$, given by $\sigma^2 \lesssim 10^{-22}$.
Saurya Das, Gaetano Lambiase, Elias C. Vagenas
2023-06-05T15:25:16Z
http://arxiv.org/abs/2306.02958v1
# Constraining quantum fluctuations of spacetime foam from BBN

###### Abstract

A possibility to describe quantum gravitational fluctuations of the spacetime background is provided by virtual \(D\)-branes. These effects may induce a tiny violation of the Lorentz invariance (as well as a possible violation of the equivalence principle). In this framework, we study the formation of light elements in the early Universe (Big Bang Nucleosynthesis). By using the Big Bang Nucleosynthesis observations, we infer an upper bound on the topological fluctuations in the spacetime foam vacuum \(\sigma^{2}\), given by \(\sigma^{2}\lesssim 10^{-22}\).

pacs: 04.50.-h, 04.60.Bc

## I Introduction

Formulating a quantum theory of gravity is one of the most important challenges of the modern approaches aimed at unifying all fundamental interactions. These studies have clearly shown that spacetime must have a non-trivial topology near the Planck scale. After Wheeler's suggestion [1] that spacetime may have a foam-like structure, the study of quantum fluctuations of the spacetime background has received a lot of interest [2; 3; 4; 5; 6; 7; 8; 9]. The Planck-size topological fluctuations imply that the (quantum gravitational) vacuum behaves as a non-trivial medium. This occurs, for example, in the framework of string theory [10] and of the canonical approach to quantum gravity [11; 12]. The underlying idea of Ref. [10] is that the quantum gravitational fluctuations in the vacuum get modified by the passage of an energetic particle, inducing recoil effects described by back reaction effects on the propagating particle [13]. Although present technologies preclude any possibility of probing quantum gravity effects, it has been suggested in Ref. [14] that Gamma-Ray Bursts (GRBs) might offer the possibility to test the theories at Planck energies. The idea is that the origin of GRBs at a cosmological distance and their high energies may make them sensitive to dispersion scales that are comparable with the Planck scales [14]. In addition, the quantum fluctuations of spacetime may have had relevant consequences during the early Universe. In fact, the CPT-violating aspects of brane Universe models may induce an asymmetry between particles and antiparticles, allowing one to explain the observed Baryon Asymmetry [15].

In this contribution, we investigate the foamy structure of the gravitational background, referring in particular to the Ellis-Mavromatos-Nanopoulos-Volkov (EMNV) model [16], and its role in the formation of light elements during the primordial phase of the Universe's evolution (Big Bang Nucleosynthesis). Big-Bang Nucleosynthesis (BBN) represents an important epoch during the evolution of the Universe. During this period, the primordial light elements formed, leaving imprints on their abundance today. Thanks to the advancements in measurements and theoretical predictions of the abundances of light elements, BBN has become a powerful cosmological probe for testing the early Universe. Hence, BBN has non-trivial consequences for any physics scenario beyond the Standard Models of particle physics and cosmology [17; 18; 19]. The latter may alter the course of the events at that era with respect to the standard theories, and therefore such a probe provides strong constraints.

The rest of the paper is structured as follows. In section 2, we present the EMNV model and provide a formula that connects the baryon density parameter with the quantum fluctuations mass scale.
In section 3, we provide bounds on the quantum fluctuations mass scale using today's primordial abundances of light elements which were produced in the BBN era. In section 4, we conclude and briefly present our results.

## II The EMNV model

The basic idea of the EMNV model is that the recoil effect of a \(D\)-brane struck by particles (bosons [8; 9; 13] or fermions [16]) induces an energy dependence of the off-diagonal terms of the background metric, \(G_{0i}\sim u_{i}\), where \(u_{i}\sim E/M_{s}\ll 1\) (\(u_{i}\) is the average recoil velocity of the generic \(D\)-brane [16], \(E\) is the energy of the particle scattering off the \(D\)-brane, and \(M_{s}\) characterizes the quantum fluctuations scale). The presence of the off-diagonal term in the metric tensor implies the breaking of Lorentz invariance [16]. For a \(D\)-dimensional spacetime, one has \(G_{ij}=\delta_{ij}\), \(G_{00}=-1\), and \(G_{0i}\sim u_{i\parallel}\), \(i,j=1,\ldots,D-1\) (here \(u_{i\parallel}\) is the recoil (parallel) velocity of the \(D\)-particle). Moreover, the metric induces a variation of the light velocity \(\delta c/c\sim-E/M_{s}\). The capture and splitting of the open string and its interaction with the \(D\)-particle, and the recoil of the latter, give rise to a local effective spacetime metric distortion [8; 9; 20]

\[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=(\eta_{\mu\nu}+G_{\mu\nu})dx^{\mu}dx^{\nu}. \tag{1}\]

The dispersion relation of a particle (neutrino) propagating on the deformed isotropic spacetime reads \(g_{\mu\nu}p^{\mu}p^{\nu}=(\eta_{\mu\nu}+G_{\mu\nu})p^{\mu}p^{\nu}=-m^{2}\,\Rightarrow\,E^{2}-2E\vec{p}\cdot u_{\parallel}-\vec{p}^{2}-m^{2}=0\), where \(m\) is the mass of the particle. Taking into consideration this on-shell condition and taking the average \(\ll\cdots\gg\) over \(D\)-particle populations with the stochastic processes (\(\ll u_{i\parallel}\gg=0\), \(\ll u_{i\parallel}u_{j\parallel}\gg=\sigma^{2}\delta_{ij}\)), one gets the average neutrino and anti-neutrino energies in the \(D\)-foam background

\[\ll E_{\nu,\overline{\nu}}\gg=\sqrt{p^{2}+m_{\nu}^{2}}\left(1+\frac{1}{2}\sigma^{2}\right)\mp\frac{1}{2}\frac{M_{s}}{g_{s}}\,\sigma^{2}. \tag{2}\]

Here it is assumed that the recoil-velocity fluctuation strengths are the same for the particle and antiparticle sectors (the asymmetric scenario has been studied in [20]). As we can see, the local violation of Lorentz symmetry (LV) induced by the recoil velocities of the \(D\)-particles induces a CPT violation too, since the dispersion relations of particles and antiparticles are different, generating a matter-antimatter lepton asymmetry

\[\ll n-\overline{n}\gg=g_{d.o.f.}\int\frac{d^{3}p}{(2\pi)^{3}}\ll[f(E)-f(\overline{E})]\gg \tag{3}\]

where \(f(E,\mu)=\frac{1}{\exp[(E-\mu)/T]\pm 1}\), \(E^{2}={\bf p}^{2}+m^{2}\), and \(g_{d.o.f.}\) denotes the number of degrees of freedom of relativistic neutrinos. Assuming that \(\sigma^{2}\) is constant (independent of space and of the (anti)neutrino energy), one can get \(\Delta n_{\nu}\simeq\frac{g_{d.o.f.}}{\pi^{2}}\,T^{3}\left(\frac{M_{s}\sigma^{2}}{g_{s}T}\right)\,>\,0\). Notice that the CPT term, i.e., \(-\frac{1}{2}\frac{M_{s}}{g_{s}}\sigma^{2}\), in the dispersion relation of the neutrino comes with the right sign (_"loss"_), guaranteeing the excess of particles over antiparticles. The resulting lepton asymmetry reads

\[\eta=\frac{\Delta n_{\nu}}{n_{\gamma}}\sim\frac{315}{2\pi^{4}}\frac{\text{GeV}}{T}\left(\frac{M_{s}}{\text{GeV}}\,\frac{\sigma^{2}}{g_{s}}\right).
\tag{4}\]

Observations of the CMB radiation [21] and predictions of BBN [22] (and the absence of intense radiation from matter-antimatter annihilation [23]) imply that the observed baryon number asymmetry today is

\[\eta=(6.04\pm 0.08)\times 10^{-10}\,. \tag{5}\]

Such a value remains constant from early times till today. For later convenience, one introduces the baryon density parameter \(\eta_{10}\) defined as [24; 25]

\[\eta_{10}\equiv 10^{10}\eta\equiv 10^{10}\frac{\Delta n_{\nu}}{n_{\gamma}} \tag{6}\]

with \(\eta_{10}\) to be determined. From (4) and (6) one gets

\[\frac{M_{s}}{\text{GeV}}\frac{\sigma^{2}}{g_{s}}=10^{-13}\frac{2\pi^{4}}{315}\frac{T_{BBN}}{\text{MeV}}\,\eta_{10} \tag{7}\]

where \(T_{BBN}\sim 1\) MeV is the temperature at which the BBN processes are effective.

## III Primordial light elements \(\{^{4}He,D,Li\}\)

We will now derive the bound on the scale \(M_{s}\) by analyzing the effects of the primordial abundances of light elements, i.e., Deuterium \({}^{2}H\), Helium \({}^{4}He\), and Lithium \({}^{7}Li\), using the asymmetry given by (4). In this analysis, the baryon-antibaryon asymmetry, here indicated with \(\eta_{10}\), plays a crucial role [24; 25]. Since we are interested in deviations from the standard cosmological model, hereafter we shall assume three generations of neutrinos, so that we set \(N_{\nu}=3\), which means \(Z=1\) (corresponding to the standard cosmological model) in the equations below. We follow Refs. [26; 27]. The relevant processes are recalled here:

* The production of Helium \({}^{4}He\) starts with the formation of \({}^{2}H\) from a neutron and a proton. The Deuterium thus formed then converts into \({}^{3}He\) and Tritium. The best fit of the primordial \({}^{4}He\) abundance is [28; 29] \[Y_{p}=0.2485\pm 0.0006+0.0016\left[\left(\eta_{10}-6\right)+100\left(Z-1\right)\right]\] (11) The standard result of BBN for the \({}^{4}He\) fraction is recovered for \(Z=1\) and \(\eta_{10}=6\), so in General Relativity (GR) one gets \(\left(Y_{p}\right)\rvert_{GR}=0.2485\pm 0.0006\). However, observations of Helium \({}^{4}He\) give the abundance \(0.2449\pm 0.0040\) [30]. Employing this observational constraint and the Helium abundance given in (11) for \(Z=1\), we obtain \[0.2449\pm 0.0040=0.2485\pm 0.0006+0.0016\left(\eta_{10}-6\right)\;.\] (12) From the above equations, one infers the constraint \[5.65\lesssim\eta_{10}\lesssim 5.9\;.\] (13)
* Deuterium \({}^{2}H\) is produced by the reaction \(n+p\rightarrow{}^{2}H+\gamma\). The best fit gives the Deuterium abundance [24] \[y_{Dp}=2.6(1\pm 0.06)\left(\frac{6}{\eta_{10}-6(Z-1)}\right)^{1.6}\;.\] (14) The values \(Z=1\) and \(\eta_{10}=6\) yield the result in GR, thus the Deuterium abundance will be \(y_{Dp}\rvert_{GR}=2.6\pm 0.16\). Equation (14) and the observational constraint on the deuterium abundance \(y_{Dp}=2.55\pm 0.03\) [30] give (for \(Z=1\)) \[2.88\pm 0.22=2.6(1\pm 0.06)\left(\frac{6}{\eta_{10}}\right)^{1.6}\;.\] (15) One then gets the constraint \[5.88626\lesssim\eta_{10}\lesssim 6.25264\;.\] (16) It is noteworthy that such a constraint partially overlaps with the helium abundance constraint (13).
* The parameter \(\eta_{10}\), defined in (6), although it successfully fits the abundances of \(D\) and \({}^{4}He\), does not fit the observations of \({}^{7}Li\). This is referred to in the literature as _the Lithium problem_ [31].
The ratio of the expected value of the \({}^{7}Li\) abundance in GR and the observed one is in the range [31; 32] \[\frac{Li\rvert_{GR}}{Li\rvert_{obs}}\in[2.4-4.3]\;.\] (17) The numerical best fit for the \({}^{7}Li\) abundance is (for \(Z=1\)) [24] \[y_{Li}=4.82(1\pm 0.1)\left[\frac{\eta_{10}-3(Z-1)}{6}\right]^{2}=4.82(1\pm 0.1)\left[\frac{\eta_{10}}{6}\right]^{2}\;.\] (18) Employing the observational constraint on the Lithium abundance, i.e., \(y_{Li}=1.6\pm 0.3\) [30], one gets the constraint \[3.28457\lesssim\eta_{10}\lesssim 3.59177\;.\] (19) It is evident that such a range of values does not overlap with the constraints on the \({}^{2}H\) abundance, i.e., Eq. (16), or on the \({}^{4}He\) abundance, i.e., Eq. (13).

The constraints derived for the three abundances are reported in Fig. 1. It is obvious that the overlapping ranges of \({}^{2}H\) and \({}^{4}He\) correspond to the value \(\eta_{10}\sim 5.9\) (orange region). This value does not overlap with the \({}^{7}Li\) range, which means that the Lithium problem cannot be solved in the framework of the spacetime foam. However, this is not a conclusive result, since possible modifications of Einstein's equations have not been considered in the present paper, nor has the possibility that the quantum fluctuation parameter \(\sigma^{2}\) is not universal.

Inserting the overlapping value of \(\eta_{10}\) into (7) for \(T_{BBN}\lesssim 1\) MeV and using the spacetime foam parameter \(M_{s}/g_{s}\sim M_{P}\) (\(M_{P}\sim 10^{19}\) GeV is the Planck mass) [33], one obtains

\[\frac{M_{s}}{g_{s}}\sigma^{2}\sim 3.6\times 10^{-13}\text{GeV}\quad\to\quad\sigma^{2}\lesssim 10^{-22}. \tag{20}\]

Therefore, we have inferred the upper bound on the dimensionless stochastic variable \(\sigma^{2}\), which expresses the fluctuations of the recoil velocity of the \(D\)-branes.

## IV Conclusions

The BBN era, which occurred during the hot and expanding early Universe, has left an observable imprint in the abundance of primordial light elements. Precision observations and high-accuracy predictions of these elements provide an important test of the standard cosmological model (based on General Relativity) and allow probing of non-standard cosmological and particle physics scenarios. In this framework, we have used the BBN sensitivity to obtain a bound on the dimensionless stochastic variable \(\sigma^{2}\) expressing the fluctuations of the D-brane recoil velocity. The results give \(\sigma^{2}\lesssim 10^{-22}\) for \(M_{s}/g_{s}\sim M_{P}\). A follow-up of the present work is to investigate physical scenarios related to spacetime foam where the BBN constraints are properly taken into account, or to consider the general case in which the quantum fluctuation parameter \(\sigma^{2}\) is not universal [15].

###### Acknowledgements.

The authors would like to thank N. Mavromatos for useful correspondence and fruitful comments. This work was supported by the Natural Sciences and Engineering Research Council of Canada.
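As a quick numerical cross-check of Eq. (7), evaluated at the overlapping value \(\eta_{10}\simeq 5.9\) and \(T_{BBN}\simeq 1\) MeV used above (a two-line sketch added here for illustration; the final step of dividing by \(M_{s}/g_{s}\) to isolate \(\sigma^{2}\) follows the text):

```python
import numpy as np

# Eq. (7): (M_s/GeV) * (sigma^2/g_s) = 1e-13 * (2 pi^4 / 315) * (T_BBN/MeV) * eta_10
eta_10, T_bbn_mev = 5.9, 1.0
bound = 1e-13 * (2.0 * np.pi**4 / 315.0) * T_bbn_mev * eta_10
print(f"(M_s/GeV) * (sigma^2/g_s) ~ {bound:.2e}")   # ~3.6e-13, as quoted in Eq. (20)
```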
2304.05696
On the representations of Bell's operators in Quantum Mechanics
We point out that, when the dimension of the Hilbert space is greater than two, Bell's operators entering the Bell-CHSH inequality exhibit unitarily inequivalent representations. Although the Bell-CHSH inequality turns out to be violated, the size of the violation is different for different representations, the maximum violation being given by Tsirelson's bound. This feature relies on a pairing mechanism between the modes of the Hilbert space of the system.
Silvio Paolo Sorella
2023-04-12T08:38:01Z
http://arxiv.org/abs/2304.05696v3
###### Abstract

We point out that, when the dimension of the Hilbert space is greater than two, Bell's operators entering the Bell-CHSH inequality exhibit unitarily inequivalent representations. Although the Bell-CHSH inequality turns out to be violated, the size of the violation is different for different representations, the maximum violation being given by Tsirelson's bound. This feature relies on a pairing mechanism between the modes of the Hilbert space of the system.

**On the representations of Bell's operators in Quantum Mechanics**

**S. P. Sorella**

\({}^{1}\)_UERJ - Universidade do Estado do Rio de Janeiro,_ _Instituto de Fisica - Departamento de Fisica Teorica - Rua Sao Francisco Xavier 524,_ _20550-013, Maracana, Rio de Janeiro, Brasil_

## 1 Introduction

The Bell-CHSH inequality [1, 2] is one of the most fundamental topics of Quantum Mechanics, signaling the existence of very strong correlations due to the phenomenon of entanglement. In its usual form, it reads

\[\langle\psi|{\cal C}_{CHSH}|\psi\rangle=\langle\psi|(A_{1}+A_{2})B_{1}+(A_{1}-A_{2})B_{2}|\psi\rangle\;, \tag{1}\]

where \(|\psi\rangle\) is a given entangled state and \((A_{i},B_{k})\), \(i,k=1,2\), are a set of four Hermitian operators fulfilling the requirements [3]:

\[A_{i}^{2}=1\;,\qquad B_{k}^{2}=1\;,\qquad[A_{i},B_{k}]=0\;. \tag{2}\]

One speaks of a violation of the Bell-CHSH inequality whenever

\[2<|\langle\psi|{\cal C}_{CHSH}|\psi\rangle|\leq 2\sqrt{2}\;, \tag{3}\]

where the maximum violation, \(2\sqrt{2}\), is known as Tsirelson's bound [3]. As the notation itself suggests, expression (1) refers to a bipartite system \(AB\) whose Hilbert space is \({\cal H}={\cal H}_{a}\otimes{\cal H}_{b}\), with \({\cal H}_{a}\) and \({\cal H}_{b}\) having the same dimension: \(d_{a}=d_{b}=d\). The operators \(A_{i}\) act only on \({\cal H}_{a}\), while \(B_{k}\) only on \({\cal H}_{b}\). Up to unitary transformations, the construction of the four operators \((A_{i},B_{k})\) is unique when \(d=2\). This is the original example of spin \(1/2\) particles considered by Bell [1], where \((A_{i},B_{k})\) are expressed in terms of Pauli matrices. However, this is no longer true when the Hilbert spaces \({\cal H}_{a}\) and \({\cal H}_{b}\) have dimension greater than two. It is the aim of the present work to show that in such cases there exist unitarily inequivalent representations which lead to different values of the violation of the Bell-CHSH inequality. The number of such inequivalent representations becomes larger and larger as \(d\) increases. This observation is based on the well-known work [4], see Theorem 3. More precisely, the aforementioned inequivalent representations turn out to be linked to the presence of the two-by-two matrix

\[{\cal M}_{i}={\cal M}_{i}^{\dagger}=\begin{pmatrix}0&e^{i\alpha_{i}}\\ e^{-i\alpha_{i}}&0\end{pmatrix}\;, \tag{4}\]

which will enable us to introduce a pairing mechanism between the various modes of the Hilbert space. As we shall see, the inequivalent representations can be labelled according to the number of times the matrix \({\cal M}_{i}\) appears. In particular, the representation fully given by the direct sum of \({\cal M}_{i}\) yields the maximum allowed violation, namely Tsirelson's bound \(2\sqrt{2}\).

## 2 The example of a four dimensional Hilbert space

We start by considering the case in which \({\cal H}_{a}\) and \({\cal H}_{b}\) have dimension four.
Let \((|0\rangle_{a},|1\rangle_{a},|2\rangle_{a},|3\rangle_{a})\) and \((|0\rangle_{b},|1\rangle_{b},|2\rangle_{b},|3\rangle_{b})\) denote orthonormal bases of \({\cal H}_{a}\) and \({\cal H}_{b}\), respectively. Both operators \((A_{i},B_{k})\) can be represented by \(4\times 4\) matrices. As entangled state, we shall consider the maximally entangled state

\[|\psi\rangle=\frac{|0\rangle_{a}|0\rangle_{b}+|1\rangle_{a}|1\rangle_{b}+|2\rangle_{a}|2\rangle_{b}+|3\rangle_{a}|3\rangle_{b}}{\sqrt{4}}\;. \tag{5}\]

To construct the operators \(A_{i}\) and \(B_{k}\), one has to keep in mind that they have to be defined on the whole Hilbert spaces \({\cal H}_{a}\) and \({\cal H}_{b}\), respectively.

### First representation

A first possibility for \((A_{i},B_{k})\) is given by

\[A_{i}|0\rangle_{a}=|0\rangle_{a}\;,\qquad A_{i}|1\rangle_{a}=e^{i\alpha_{i}}|2\rangle_{a}\;,\qquad A_{i}|2\rangle_{a}=e^{-i\alpha_{i}}|1\rangle_{a}\;,\qquad A_{i}|3\rangle_{a}=|3\rangle_{a}\;,\]
\[B_{k}|0\rangle_{b}=|0\rangle_{b}\;,\qquad B_{k}|1\rangle_{b}=e^{i\beta_{k}}|2\rangle_{b}\;,\qquad B_{k}|2\rangle_{b}=e^{-i\beta_{k}}|1\rangle_{b}\;,\qquad B_{k}|3\rangle_{b}=|3\rangle_{b}\;, \tag{6}\]

where \((\alpha_{i},\beta_{k})\) stand for arbitrary real parameters. The operators \((A_{i},B_{k})\) defined in this way are Hermitian and fulfill the requirements (2). A quick computation shows that

\[A_{i}B_{k}|\psi\rangle=\frac{|0\rangle_{a}|0\rangle_{b}+e^{-i(\alpha_{i}+\beta_{k})}|1\rangle_{a}|1\rangle_{b}+e^{i(\alpha_{i}+\beta_{k})}|2\rangle_{a}|2\rangle_{b}+|3\rangle_{a}|3\rangle_{b}}{\sqrt{4}}\,, \tag{7}\]

so that

\[\langle\psi|A_{i}B_{k}|\psi\rangle=\frac{\cos(\alpha_{i}+\beta_{k})+1}{2}. \tag{8}\]

Therefore, for the Bell-CHSH combination, we get

\[\langle\psi|{\cal C}_{CHSH}|\psi\rangle=\frac{\cos(\alpha_{1}+\beta_{1})+\cos(\alpha_{2}+\beta_{1})+\cos(\alpha_{1}+\beta_{2})-\cos(\alpha_{2}+\beta_{2})+2}{2}. \tag{9}\]

To obtain the maximum violation, one sets [3]

\[\cos(\alpha_{1}+\beta_{1})+\cos(\alpha_{2}+\beta_{1})+\cos(\alpha_{1}+\beta_{2})-\cos(\alpha_{2}+\beta_{2})=2\sqrt{2}\;, \tag{10}\]

namely

\[\alpha_{1}=0\;,\qquad\alpha_{2}=\frac{\pi}{2}\;,\qquad\beta_{1}=-\frac{\pi}{4}\;,\qquad\beta_{2}=\frac{\pi}{4}\;, \tag{11}\]

resulting in

\[\langle\psi|{\cal C}_{CHSH}|\psi\rangle=\sqrt{2}+1\approx 2.4\,. \tag{12}\]

We underline that the above value is the maximum violation allowed by the definition (6). Looking at the matrix representation for \((A_{i},B_{k})\), one gets

\[A_{i}^{(1)}=A_{i}^{(1)\dagger}=\begin{pmatrix}1&0&0&0\\ 0&0&e^{i\alpha_{i}}&0\\ 0&e^{-i\alpha_{i}}&0&0\\ 0&0&0&1\end{pmatrix}\;, \tag{13}\]

and similarly for \(B_{k}^{(1)}\), with \(\alpha_{i}\) replaced by \(\beta_{k}\).

### Second representation

A second representation might be obtained by following the setup outlined in [5] in the study of the violation of the Bell-CHSH inequality in relativistic Quantum Field Theory. One defines

\[A_{i}|0\rangle_{a}=e^{i\alpha_{i}}|1\rangle_{a}\;,\qquad A_{i}|1\rangle_{a}=e^{-i\alpha_{i}}|0\rangle_{a}\;,\qquad A_{i}|2\rangle_{a}=e^{i\alpha_{i}}|3\rangle_{a}\;,\qquad A_{i}|3\rangle_{a}=e^{-i\alpha_{i}}|2\rangle_{a}\;,\]
\[B_{k}|0\rangle_{b}=e^{i\beta_{k}}|1\rangle_{b}\;,\qquad B_{k}|1\rangle_{b}=e^{-i\beta_{k}}|0\rangle_{b}\;,\qquad B_{k}|2\rangle_{b}=e^{i\beta_{k}}|3\rangle_{b}\;,\qquad B_{k}|3\rangle_{b}=e^{-i\beta_{k}}|2\rangle_{b}\;. \tag{14}\]

Again, the operators are Hermitian and fulfill conditions (2).
For the correlation function \(\langle\psi|A_{i}B_{k}|\psi\rangle\), one gets \[\langle\psi|A_{i}B_{k}|\psi\rangle=\cos(\alpha_{i}+\beta_{k})\;, \tag{15}\] so that \[\langle\psi|{\cal C}_{CHSH}|\psi\rangle=\left(\cos(\alpha_{1}+\beta_{1})+\cos(\alpha_{2}+\beta_{1})+\cos(\alpha_{1}+\beta_{2})-\cos(\alpha_{2}+\beta_{2})\right)\,. \tag{16}\] Making use of the choice (11), one finds maximum violation, _i.e._ the saturation of Tsirelson's bound: \[\langle\psi|{\cal C}_{CHSH}|\psi\rangle=2\sqrt{2}\;. \tag{17}\] In matrix representation, we have now \[A^{(2)}_{i}=A^{(2)\dagger}_{i}=\begin{pmatrix}0&e^{i\alpha_{i}}&0&0\\ e^{-i\alpha_{i}}&0&0&0\\ 0&0&0&e^{i\alpha_{i}}\\ 0&0&e^{-i\alpha_{i}}&0\end{pmatrix}\;. \tag{18}\] However, from Specht's theorem, it follows that \(A^{(1)}_{i}\) and \(A^{(2)}_{i}\) cannot be unitarily equivalent, since \[{\rm Tr}A^{(1)}_{i}=2\neq{\rm Tr}A^{(2)}_{i}=0\;. \tag{19}\] One observes that the maximum violation, _i.e._ the saturation of Tsirelson's bound, occurs for the representation for which \({\rm Tr}A_{i}=0\), namely eq.(18). This feature persists in all examples which will be analyzed. As we shall see, the condition \({\rm Tr}A_{i}=0\) can be seen as a consequence of the fact that this representation is made up of the direct sum of the \(2\times 2\) Pauli matrices employed in discussing the violation of the Bell-CHSH inequality for the two-dimensional Hilbert space of spin \(1/2\).

### The case of a non-maximally entangled state

So far, we have considered only the case of maximally entangled states. It is instructive to check what happens if the starting state is not maximally entangled, as, for instance: \[|\psi\rangle=\frac{|0\rangle_{a}|0\rangle_{b}+|1\rangle_{a}|1\rangle_{b}+|2\rangle_{a}|2\rangle_{b}+\sqrt{r}|3\rangle_{a}|3\rangle_{b}}{\sqrt{3+r}}\;, \tag{20}\] where \(r\) is a real parameter, \(0<r<1\). Let us discuss the violation for each representation:

* first representation, eq.(13). A simple calculation shows that \[\langle\psi|{\cal C}_{CHSH}|\psi\rangle=\frac{2(1+r)}{3+r}+\frac{4\sqrt{2}}{3+r}\;.\] (21)
* second representation, eq.(14). It turns out that \[\langle\psi|{\cal C}_{CHSH}|\psi\rangle=4\sqrt{2}\left(\frac{\sqrt{r}}{3+r}\right)+\frac{4\sqrt{2}}{3+r}\;.\] (22)

Since the parameter \(r\) is strictly smaller than one, neither expression (21) nor (22) can saturate Tsirelson's bound \(2\sqrt{2}\). In the first case, eq.(21), the maximum violation is attained for \(r\approx 0\), yielding \(\approx 2.55\). In the second case, the violation is bigger, increasing as \(r\) approaches the value \(1\). Tsirelson's bound is attained in the limit \(r\to 1\), _i.e._ when the state (20) becomes maximally entangled. Other kinds of non-maximally entangled states might be discussed. However, we shall proceed by considering examples of maximally entangled states.

## 3 The six dimensional case

For a better understanding, let us discuss briefly the six dimensional case, considering the maximally entangled state \[|\psi\rangle=\frac{|0\rangle_{a}|0\rangle_{b}+|1\rangle_{a}|1\rangle_{b}+|2\rangle_{a}|2\rangle_{b}+|3\rangle_{a}|3\rangle_{b}+|4\rangle_{a}|4\rangle_{b}+|5\rangle_{a}|5\rangle_{b}}{\sqrt{6}}\;. \tag{23}\]
As the dimension of the Hilbert space increases, more inequivalent matrix representations can be constructed, namely

* first option \[A_{i}^{(1)}=A_{i}^{(1)\dagger}=\begin{pmatrix}1&0&0&0&0&0\\ 0&0&e^{i\alpha_{i}}&0&0&0\\ 0&e^{-i\alpha_{i}}&0&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\end{pmatrix} \tag{24}\] \[A_{i}^{(1)}A_{i}^{(1)}=1\;,\qquad\mathrm{Tr}A_{i}^{(1)}=4\;,\qquad\langle\psi|\mathcal{C}_{CHSH}|\psi\rangle^{(1)}=\frac{2(2\sqrt{2})+2\cdot 4}{6}\approx 2.27\;, \tag{25}\]
* second option \[A_{i}^{(2)}=A_{i}^{(2)\dagger}=\begin{pmatrix}1&0&0&0&0&0\\ 0&0&e^{i\alpha_{i}}&0&0&0\\ 0&e^{-i\alpha_{i}}&0&0&0&0\\ 0&0&0&0&e^{i\alpha_{i}}&0\\ 0&0&0&e^{-i\alpha_{i}}&0&0\\ 0&0&0&0&0&1\end{pmatrix} \tag{26}\] \[A_{i}^{(2)}A_{i}^{(2)}=1\;,\qquad\mathrm{Tr}A_{i}^{(2)}=2\;,\qquad\langle\psi|\mathcal{C}_{CHSH}|\psi\rangle^{(2)}=\frac{4(2\sqrt{2})+4}{6}\approx 2.55\;, \tag{27}\]
* third option \[A_{i}^{(3)}=A_{i}^{(3)\dagger}=\begin{pmatrix}0&e^{i\alpha_{i}}&0&0&0&0\\ e^{-i\alpha_{i}}&0&0&0&0&0\\ 0&0&0&e^{i\alpha_{i}}&0&0\\ 0&0&e^{-i\alpha_{i}}&0&0&0\\ 0&0&0&0&0&e^{i\alpha_{i}}\\ 0&0&0&0&e^{-i\alpha_{i}}&0\end{pmatrix} \tag{28}\] \[A_{i}^{(3)}A_{i}^{(3)}=1\;,\qquad\mathrm{Tr}A_{i}^{(3)}=0\;,\qquad\langle\psi|\mathcal{C}_{CHSH}|\psi\rangle^{(3)}=\frac{(2+2+2)(2\sqrt{2})}{6}=2\sqrt{2}\;, \tag{29}\]

All three representations are unitarily inequivalent, yielding different violations of the Bell-CHSH inequality. We notice that, once again, the maximum violation occurs when \(\mathrm{Tr}A_{i}=0\).

## 4 A general pattern: even and odd finite dimensional Hilbert spaces

A general pattern emerges from the previous analysis. Let us consider the case in which both \(\mathcal{H}_{a}\) and \(\mathcal{H}_{b}\) have generic finite dimension \(N\). As entangled state, we shall take the maximally entangled state \[|\psi\rangle=\frac{1}{\sqrt{N}}\sum_{n=0}^{N-1}|n\rangle_{a}|n\rangle_{b}\;. \tag{30}\] The pattern is encoded in the action of the two by two matrix \[\mathcal{M}_{i}=\mathcal{M}_{i}^{\dagger}=\begin{pmatrix}0&e^{i\alpha_{i}}\\ e^{-i\alpha_{i}}&0\end{pmatrix}\;, \tag{31}\] which is nothing but Bell's original operator [1], namely \[\mathcal{M}_{i}=\vec{n}_{i}\cdot\vec{\sigma}\;, \tag{32}\] where \(\vec{\sigma}\) are the Pauli matrices and \(\vec{n}_{i}\) is the unit vector \(\vec{n}_{i}=(\cos(\alpha_{i}),-\sin(\alpha_{i}),0)\). Two orthogonal modes \(|\xi_{a}\rangle\) and \(|\chi_{a}\rangle\), \(\langle\xi_{a}|\chi_{a}\rangle=0\), entering the state (30) are said to form a pair if \[A_{i}\begin{pmatrix}\xi_{a}\\ \chi_{a}\end{pmatrix}=\mathcal{M}_{i}\begin{pmatrix}\xi_{a}\\ \chi_{a}\end{pmatrix}\;, \tag{33}\] _i.e._ \[A_{i}|\xi_{a}\rangle=e^{i\alpha_{i}}|\chi_{a}\rangle\;,\qquad A_{i}|\chi_{a}\rangle=e^{-i\alpha_{i}}|\xi_{a}\rangle\;. \tag{34}\] The matrix \({\cal M}_{i}\) turns out to be the building block of all matrix representations listed above. Its action is that of forming pairs out of the \(N\) modes contributing to the state (30). Each pair gives a contribution \(2(2\sqrt{2})\) to the Bell-CHSH inequality, as is apparent from eqs.(25), (27), (29), depending on how many times the matrix \({\cal M}_{i}\) appears in a given representation. It is thus natural to split the Hilbert spaces in two categories: even and odd dimensional Hilbert spaces. In the case in which \(N\) is even, we can have inequivalent representations in which the appearance of the matrix \({\cal M}_{i}\) ranges from \(1\) to \(N/2\).
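Before turning to the general bounds, the pair counting can be verified numerically for any finite dimension. The following sketch (again assuming numpy; `rep` and `chsh` are illustrative names of ours) builds a representation with \(p\) copies of \({\cal M}_{i}\) acting on the first \(2p\) modes, the identity on the rest, and evaluates the Bell-CHSH combination on the state (30); for \(N=6\) it reproduces the three values (25), (27), (29):

```python
import numpy as np

def rep(N, p, theta):
    # Direct sum of p copies of the 2x2 block M(theta) of eq. (31)
    # on the first 2p modes; identity on the remaining N - 2p modes.
    M = np.eye(N, dtype=complex)
    for k in range(p):
        i, j = 2 * k, 2 * k + 1
        M[i, i] = M[j, j] = 0.0
        M[j, i] = np.exp(1j * theta)
        M[i, j] = np.exp(-1j * theta)
    return M

def chsh(N, p):
    psi = np.eye(N).reshape(N * N) / np.sqrt(N)              # state (30)
    a1, a2, b1, b2 = 0.0, np.pi / 2, -np.pi / 4, np.pi / 4   # angles (11)
    corr = lambda a, b: np.real(psi @ np.kron(rep(N, p, a), rep(N, p, b)) @ psi)
    return corr(a1, b1) + corr(a2, b1) + corr(a1, b2) - corr(a2, b2)

for p in (1, 2, 3):
    # p = 1, 2, 3 reproduce (25), (27), (29): ~2.27, ~2.55, 2 sqrt(2),
    # and agree with the pair-counting formula (2p(2 sqrt 2) + 2(N-2p))/N.
    print(p, chsh(6, p), (2 * p * 2 * np.sqrt(2) + 2 * (6 - 2 * p)) / 6)
```

Note that the value depends only on how many pairs appear, not on which modes are paired, which is why relabeling the modes in options (24)-(28) does not change the violation.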
Since the maximum number of pairs is \(N/2\), we immediately get that the violation of the Bell-CHSH inequality lies in the interval \[\frac{2(2\sqrt{2})+2\cdot(N-2)}{N}\leq\langle\psi|{\cal C}_{CHSH}|\psi\rangle\leq\frac{2\cdot\frac{N}{2}(2\sqrt{2})}{N}=2\sqrt{2}\;,\qquad N\geq 4\;. \tag{35}\] On the other hand, in the odd case, say of dimension \(2N+1\), it is impossible to convert all modes into pairs, so that the size of the Bell-CHSH violation is always strictly lower than Tsirelson's bound, namely \[\frac{2(2\sqrt{2})+2\cdot(2N-1)}{2N+1}\leq\langle\psi|{\cal C}_{CHSH}|\psi\rangle\leq\frac{2N(2\sqrt{2})+2}{2N+1}=2\sqrt{2}-\frac{2(\sqrt{2}-1)}{2N+1}\;,\qquad N=1,2,\ldots \tag{36}\] We thus see that a representation in which \({\cal M}_{i}\) appears only once, _i.e._ only one pair of modes is contemplated, already leads to a violation of the Bell-CHSH inequality which, however, attains its minimum value. In the even case, Tsirelson's bound is recovered when all modes have been converted into pairs, _i.e._ the representation is built out of the direct sum of \(N/2\) matrices \({\cal M}_{i}\). In the odd case, it is impossible to achieve Tsirelson's bound, as a perfect pairing cannot be realized. We recover here the result by Gisin-Peres [6], see also [7, 8].

## 5 The infinite dimensional case: the squeezed state

Let us now move to the infinite dimensional case, by considering the normalized squeezed state \[|\eta\rangle=\sqrt{(1-\eta^{2})}\,e^{\eta a^{\dagger}b^{\dagger}}|0\rangle\;,\qquad\langle\eta|\eta\rangle=1\;, \tag{37}\] where the real parameter \(\eta\) is constrained to belong to the interval \[0<\eta<1\;. \tag{38}\] The operators \((a,b)\) obey the following commutation relations: \[\left[a,a^{\dagger}\right]=1\;,\qquad[a^{\dagger},a^{\dagger}]=0\;,\qquad[a,a]=0\;,\] \[\left[b,b^{\dagger}\right]=1\;,\qquad[b^{\dagger},b^{\dagger}]=0\;,\qquad[b,b]=0\;,\] \[\left[a,b\right]=0\;,\qquad[a,b^{\dagger}]=0\;, \tag{39}\] with \[a|0\rangle=b|0\rangle=0\;. \tag{40}\] As one can easily figure out, we have now an infinite number of inequivalent possibilities, each leading to a violation of the Bell-CHSH inequality. As said before, the minimum violation is obtained by considering only one pair of modes. Let us pick up the modes \(|0\rangle\) and \(|1\rangle\) in expression (37), namely, we write \[|\eta\rangle=\sqrt{(1-\eta^{2})}\left(|0_{a}\rangle|0_{b}\rangle+\eta|1_{a}\rangle|1_{b}\rangle+\sum_{n=2}^{\infty}\eta^{n}|n_{a}\rangle|n_{b}\rangle\right)\;, \tag{41}\] where \[|n_{a}\rangle|n_{b}\rangle=\frac{1}{n!}(a^{\dagger})^{n}(b^{\dagger})^{n}|0\rangle\;,\qquad|0\rangle=|0_{a}\rangle|0_{b}\rangle\;. \tag{42}\] For the operators \((A_{i},B_{k})\), we write \[A_{i}|0_{a}\rangle=e^{i\alpha_{i}}|1_{a}\rangle\;,\qquad A_{i}|1_{a}\rangle=e^{-i\alpha_{i}}|0_{a}\rangle\;,\qquad A_{i}=1\;\;\mbox{on the remaining elements of the basis}\;,\] \[B_{k}|0_{b}\rangle=e^{i\beta_{k}}|1_{b}\rangle\;,\qquad B_{k}|1_{b}\rangle=e^{-i\beta_{k}}|0_{b}\rangle\;,\qquad B_{k}=1\;\;\mbox{on the remaining elements of the basis}\;. \tag{43}\] Therefore, \[\langle\eta|A_{i}B_{k}|\eta\rangle=1+(1-\eta^{2})\left(2\eta\cos(\alpha_{i}+\beta_{k})-1-\eta^{2}\right)\;. \tag{44}\] Setting \[\alpha_{1}=0\;,\qquad\alpha_{2}=\frac{\pi}{2}\;,\qquad\beta_{1}=-\frac{\pi}{4}\;,\qquad\beta_{2}=\frac{\pi}{4}\;, \tag{45}\] for the Bell-CHSH inequality one gets \[\langle\eta|{\cal C}_{CHSH}|\eta\rangle=2+2(1-\eta^{2})\left(2\sqrt{2}\;\eta-1-\eta^{2}\right)\;. \tag{46}\] There is violation whenever \[\sqrt{2}-1<\eta<1\;. \tag{47}\]
The maximum value of the violation occurs for \(\eta\approx 0.7\), yielding \[\langle\eta|{\cal C}_{CHSH}|\eta\rangle\approx 2.5\;. \tag{48}\] On the other hand, the maximum violation is obtained when the pairing mechanism involves all modes, namely: \[A_{i}|(2n)_{a}\rangle=e^{i\alpha_{i}}|(2n+1)_{a}\rangle\;,\qquad A_{i}|(2n+1)_{a}\rangle=e^{-i\alpha_{i}}|(2n)_{a}\rangle\;,\] \[B_{k}|(2n)_{b}\rangle=e^{i\beta_{k}}|(2n+1)_{b}\rangle\;,\qquad B_{k}|(2n+1)_{b}\rangle=e^{-i\beta_{k}}|(2n)_{b}\rangle\;,\qquad n=0,1,2,\ldots\;, \tag{49}\] leading to \[\langle\eta|{\cal C}_{CHSH}|\eta\rangle=\frac{(2\sqrt{2})2\eta}{1+\eta^{2}}\;, \tag{50}\] which attains Tsirelson's bound in the limit \(\eta\to 1\). The representations (49) have only non-diagonal elements, so that \({\rm Tr}(A_{i})={\rm Tr}(B_{k})=0\). This pattern, already encountered in the finite dimensional case, remains valid in the infinite dimensional case as well.

Let us end this section by evaluating the entanglement entropy in order to better understand the limiting case \(\eta\to 1\). Let us look first at the reduced density matrix \(\rho_{a}\), obtained from \[\rho_{ab}=|\eta\rangle\langle\eta|\;, \tag{51}\] _i.e._ \[\rho_{a}={\rm Tr}_{b}(\rho_{ab})=(1-\eta^{2})\sum_{n=0}^{\infty}\eta^{2n}|n_{a}\rangle\langle n_{a}|\;. \tag{52}\] Therefore \[\rho_{a}^{2}=(1-\eta^{2})^{2}\sum_{n=0}^{\infty}\eta^{4n}|n_{a}\rangle\langle n_{a}|\;, \tag{53}\] so that \[{\rm Tr}\rho_{a}^{2}=\frac{1-\eta^{2}}{1+\eta^{2}}\;, \tag{54}\] from which one sees that \(\rho_{a}\) becomes very impure when \(\eta\to 1\). Finally, for the entanglement entropy one gets \[S=-{\rm Tr}\rho_{a}\ln\rho_{a}=-\ln\left(1-\eta^{2}\right)-\frac{\eta^{2}\ln\eta^{2}}{1-\eta^{2}}\;. \tag{55}\] This entropy is a monotonically increasing function of \(\eta\) and diverges for \(\eta\to 1\), confirming that the system is highly entangled in this limit.

### Relation with the pseudospin operators

In order to achieve a better understanding of the Bell's operators introduced previously, eqs.(43), (49), it is helpful to consider here the so-called pseudospin operators [9, 10, 11] defined as \[s_{x}=\sum_{n=0}^{\infty}s_{x}^{(n)}\;,\qquad s_{y}=\sum_{n=0}^{\infty}s_{y}^{(n)}\;,\qquad s_{z}=\sum_{n=0}^{\infty}s_{z}^{(n)}\;, \tag{56}\] where \[s_{x}^{(n)}=|2n+1\rangle\langle 2n|+|2n\rangle\langle 2n+1|\;,\] \[s_{y}^{(n)}=i\left(|2n+1\rangle\langle 2n|-|2n\rangle\langle 2n+1|\right)\;,\] \[s_{z}^{(n)}=|2n+1\rangle\langle 2n+1|-|2n\rangle\langle 2n|\;. \tag{57}\] An easy calculation shows that \[\left[s_{x}^{(n)},s_{y}^{(n)}\right]=2is_{z}^{(n)}\;,\qquad\left[s_{y}^{(n)},s_{z}^{(n)}\right]=2is_{x}^{(n)}\;,\qquad\left[s_{z}^{(n)},s_{x}^{(n)}\right]=2is_{y}^{(n)}\;. \tag{58}\] As a consequence, it follows that the operators (56) obey the same algebraic relations of the spin \(1/2\) Pauli matrices: \[\left[s_{x},s_{y}\right]=2is_{z}\;,\qquad\left[s_{y},s_{z}\right]=2is_{x}\;,\qquad\left[s_{z},s_{x}\right]=2is_{y}\;, \tag{59}\] from which the name _pseudospin_ follows. In particular, from expressions (57) one observes that the introduction of the pseudospin operators can be exactly related to the pairing mechanism in Hilbert space, a pair being given by two modes, namely \((|2n\rangle,|2n+1\rangle)\), with \(n=0,1,2,\ldots\) Each pair of modes gives rise to a set of operators, \((s_{x}^{(n)},s_{y}^{(n)},s_{z}^{(n)})\), which obey the spin \(1/2\) algebra of the Pauli matrices. As already underlined, the observation of the pairing mechanism goes back to [6].
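The squeezed-state formulas above can also be checked with a few lines of code. The sketch below (assuming numpy; the variable names are ours) scans \(\eta\) and evaluates eqs. (46), (50), and (55), reproducing the maximum \(\approx 2.5\) near \(\eta\approx 0.7\) of eq. (48), the approach of eq. (50) to Tsirelson's bound, and the divergence of the entanglement entropy as \(\eta\to 1\):

```python
import numpy as np

eta = np.linspace(0.02, 0.999, 2000)

# One pair of modes, eq. (46); violation for sqrt(2) - 1 < eta < 1, eq. (47).
chsh_pair = 2 + 2 * (1 - eta**2) * (2 * np.sqrt(2) * eta - 1 - eta**2)
k = np.argmax(chsh_pair)
print(eta[k], chsh_pair[k])   # ~0.70 and ~2.5, eq. (48)

# All modes paired, eq. (50): approaches 2 sqrt(2) as eta -> 1.
chsh_full = 2 * np.sqrt(2) * 2 * eta / (1 + eta**2)
print(chsh_full[-1])          # ~2.83

# Entanglement entropy, eq. (55): grows without bound as eta -> 1.
S = -np.log(1 - eta**2) - eta**2 * np.log(eta**2) / (1 - eta**2)
print(S[-1])
```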
More recently, the applications of this pairing mechanism to the study of the Bell-CHSH inequality have been discussed in [8, 12], where it has been shown that each single pair might be employed for a test of the Bell-CHSH inequality. Considering now the first choice, given by equations (43), and writing \((A,A^{\prime})\) for \((A_{1},A_{2})\) and \((B,B^{\prime})\) for \((B_{1},B_{2})\), it turns out that the operator \(A\) can be expressed in terms of the pseudospin operators as \[A=\left(\vec{u}\cdot\vec{s}^{(0)}+{\cal R}\right)\otimes I\;, \tag{60}\] where \(\vec{u}\) denotes the unit vector \[\vec{u}=\left(\cos(\alpha),\sin(\alpha),0\right)\;,\qquad\vec{u}\cdot\vec{u}=1\;, \tag{61}\] and \({\cal R}\) is the identity operator for \(n\geq 2\): \[{\cal R}=\sum_{n=2}^{\infty}|n\rangle\langle n|\;. \tag{62}\] Analogous expressions can be written down for \(B\) as well as for \((A^{\prime},B^{\prime})\). For the primed operators, the parameters \(\alpha\) and \(\beta\) are simply replaced by \(\alpha^{\prime}\) and \(\beta^{\prime}\). In the same vein, for the second Bell setup, eq.(49), in terms of pseudospin operators, one has \[A=\vec{u}\cdot\vec{s}\otimes I\;, \tag{63}\] where \(\vec{u}\) is the unit vector of expression (61), with similar expressions for the primed operators. It is immediate to check that in both setups the required properties (2) for the Bell-type operators are satisfied. The advantage of employing the pseudospin operators is that they enable one to establish a simple and clear relationship between the various inequivalent representations of the Bell operators and the algebra of the Pauli matrices [13]. Remark that, from equation (63), it follows that \[{\rm Tr}A=0\;, \tag{64}\] a property already mentioned and to which we shall devote the next considerations.

## 6 The traceless representation

In all examples treated so far, the maximum violation of the Bell-CHSH inequality occurs for the representation for which the Bell operators are traceless, namely \({\rm Tr}A=0\), see equations (18), (28), (63). Such a representation is obtained by the direct sum of the \(2\times 2\) matrix \({\cal M}_{i}\) of equations (31), (32). In fact, equations (18), (28), (63) can be written, respectively, as \[{\cal M}_{i}\oplus{\cal M}_{i}\;,\] \[{\cal M}_{i}\oplus{\cal M}_{i}\oplus{\cal M}_{i}\;,\] \[{\cal M}_{i}\oplus{\cal M}_{i}\oplus{\cal M}_{i}\oplus{\cal M}_{i}\oplus\ldots \tag{65}\] In practice, in the case of the maximum violation of the Bell-CHSH inequality, we have essentially many replicas of the elementary spin \(1/2\) result, in agreement with [13].

## 7 Conclusion

In the present work we have pointed out that, when the dimension of the Hilbert space is greater than two, the Bell operators, eq.(2), entering the Bell-CHSH inequality exhibit unitarily inequivalent representations, leading to different values of the violation. This feature can be traced back to the two by two traceless matrix \({\cal M}_{i}\) \[{\cal M}_{i}={\cal M}_{i}^{\dagger}=\begin{pmatrix}0&e^{i\alpha_{i}}\\ e^{-i\alpha_{i}}&0\end{pmatrix}\;, \tag{66}\] which turns out to be precisely Bell's operator for spin \(1/2\). As discussed in the previous sections, the role of \({\cal M}_{i}\), which is the building block of the various representations, is that of organizing the modes entering a given entangled state \(|\psi\rangle\) into pairs, eqs.(33), (34). When the matrix \({\cal M}_{i}\) appears only once, the violation of the Bell-CHSH inequality is the lowest possible.
However, when all modes are grouped into pairs, something which can happen only when the dimension of the Hilbert space is even, the violation is the biggest one, attaining Tsirelson's bound \(2\sqrt{2}\). For odd dimensional Hilbert spaces, Tsirelson's bound is never achieved. Everything generalizes to infinite dimensional Hilbert spaces.

## Acknowledgements

The author would like to thank the Brazilian agencies CNPq and FAPERJ for financial support. S.P. Sorella is a level 1 CNPq researcher under the contract 301030/2019-7.
2310.09133
Ejected Particles after Impact Splash on Mars: Aggregates and Aerodynamics
Our earlier laboratory measurements showed that low-velocity sand impacts release fine <5 {\mu}m dust from a Martian simulant soil. This dust will become airborne in the Martian atmosphere. Here, we extend this study by measuring aerodynamic properties of ejecta and characterizing deviations from the behavior of spherical, monolithic grains. We observe the settling of particles emitted as part of an impact splash. The sizes (20 to 280 {\mu}m) and sedimentation velocities (0.1 to 0.8 ms^{-1} ) of the particles are deduced from high-speed videos while the particles sediment under low ambient pressure of about 1 mbar. The particles regularly settle slower than expected, down to a factor of about 0.3. Using optical microscopy, the shape of the captured particles is characterized by simple axis ratios (longest/smallest), which show that the vast majority of particles are irregular but typically not too elongated, with axis ratios below 2 on average. Electron microscopy further reveals that the particles are typically porous aggregates, which is the most likely reason for the reduction of the sedimentation velocity. Due to the reduced bulk density, aggregates up to 10 {\mu}m in diameter should regularly be a part of the dust in the Martian atmosphere.
Tim Becker, Jens Teiser, Teresa Jardiel, Marco Peiteado, Olga Muñoz, Julia Martikainen, Juan Carlos Gomez Martin, Jonathan Merrison, Gerhard Wurm
2023-10-13T14:18:52Z
http://arxiv.org/abs/2310.09133v1
# Ejected Particles after Impact Splash on Mars: Aggregates and Aerodynamics

###### Abstract

Our earlier laboratory measurements showed that low-velocity sand impacts release fine \(<\)5 \(\mu\)m dust from a Martian simulant soil. This dust will become airborne in the Martian atmosphere. Here, we extend this study by measuring aerodynamic properties of ejecta and characterizing deviations from the behavior of spherical, monolithic grains. We observe the settling of particles emitted as part of an impact splash. The sizes (20 to 280 \(\mu\)m) and sedimentation velocities (0.1 to 0.8 m s\({}^{-1}\)) of the particles are deduced from high-speed videos while the particles sediment under low ambient pressure of about 1 mbar. The particles regularly settle slower than expected, down to a factor of about 0.3. Using optical microscopy, the shape of the captured particles is characterized by simple axis ratios (longest/smallest), which show that the vast majority of particles are irregular but typically not too elongated, with axis ratios below 2 on average. Electron microscopy further reveals that the particles are typically porous aggregates, which is the most likely reason for the reduction of the sedimentation velocity. Due to the reduced bulk density, aggregates up to 10 \(\mu\)m in diameter should regularly be a part of the dust in the Martian atmosphere.

Subject headings: Unified Astronomy Thesaurus concepts: Mars (1007); Planetary atmospheres (1244)

## 1 Introduction

From satellite and rover images, we know that there is a steady movement of soil occurring on many parts of the Martian surface (Metzger et al., 1999; Greeley et al., 2006; Silvestro et al., 2010; Chojnacki et al., 2011, 2019; Bridges et al., 2012a, 2012b; Reiss and Lorenz, 2016), as well as dust movement in the Martian atmosphere (Cantor et al., 1999; Montabone et al., 2005). Even though it was long believed that the wind conditions on the Martian surface should only rarely be able to move soil (Pollack et al., 1976; Greeley and Iversen, 1985; Sullivan et al., 2000; Haberle et al., 2003; Chojnacki et al., 2011), recent studies have shown that wind-induced shear stress might regularly be capable of moving sand-sized grains (Sullivan and Kok, 2017; Andreotti et al., 2020). Wind tunnel experiments and theoretical studies based on wind tunnel data show that the particle size most susceptible to wind drag is about 100 \(\mu\)m or slightly larger under Martian conditions (Iversen et al., 1976; Greeley et al., 1980; Shao and Lu, 2000; Swann et al., 2020). While for smaller grains, sticking (surface) forces are dominant (Greeley and Iversen, 1985; Alfaro et al., 1997; Shao, 2008; Kok et al., 2012; Rasmussen et al., 2015; Rondeau et al., 2015; Waza et al., 2023), larger grains are grounded by gravity. In the transition zone between these regions, particles are most susceptible to wind-induced shear stress. These 100 \(\mu\)m particles, which are set in motion at the minimum-threshold wind speed, however, are far too heavy to become entrained into the atmosphere (Kok et al., 2012). Instead, they start saltating, i.e., hopping along the surface (Shao, 2008). At each reimpact upon the surface, they can eject smaller particles by breaking cohesive bonds (Bagnold, 1941; Leach et al., 1989), enabling the wind to carry them along (Alfaro et al., 1997).
So while micron-sized dust grains can hardly be liberated from the surface by wind-induced shear stress directly, impacts during saltation can liberate such dust particles. Particles of this size are thought to be suspended by turbulent eddies and carried farther upward within the atmosphere by means of convective flows (Edgett and Christensen, 1991; Daerden et al., 2015; Haberle et al., 2017; Musiolik et al., 2018; Neary and Daerden, 2018). Our previous study analyzed this process for individual impacts--i.e., that bonds are broken down to the (sub-)micrometer scale, as we could detect particles even below 1 \(\mu\)m in diameter being ejected from slow saltating impacts (Becker et al., 2022). In that study, the final estimate of how much dust per saltating impact could go into long-term suspension in the atmosphere assumed a cutoff diameter of 3 \(\mu\)m as the upper limit. The cutoff diameter was chosen based on the results of studies using spectral data, performed by, e.g., Pollack et al. (1995), Tomasko et al. (1999), Wolff and Clancy (2003), Clancy et al. (2003), Lemmon et al. (2004), and Wolff et al. (2006). However, those studies only considered monolithic, spherical, or cylindrical grains for the estimation of the particle size. Natural particles are by no means perfectly spherical to begin with, though (Dietrich, 1982; Ming and Morris, 2017). Apart from monolithic particles not being spherical, the presence of aggregates is another factor (Sullivan et al., 2008; Waza et al., 2023). For these kinds of particles, this upper limit might not hold true. If the particles were to be cluster-cluster aggregates (CCAs) with low fractal dimension, sedimentation speeds would be much lower than for nonfluffy, spherical particles of the same size (Meakin, 1987; Nakamura et al., 1994; Wurm and Blum, 2000). Thus, even large aggregates may enter into long-term suspension. Another possibility is aggregates being spherical, but with high porosity. Such porous material has already been identified on the Martian surface by, e.g., Moore & Jakosky (1989) and Sullivan et al. (2008). The effectively reduced density of a particle compared to a monolithic grain would increase the maximum size for airborne particles as well (Dietrich, 1982), yet this would not be quite as extreme as for CCAs.

Morphology is not only important for the lifting process. It also plays a major role for grains entering into saltation. After all, if saltating particles are aggregates, it changes the nature of the reimpact into the soil (Shao, 2008). Upon impact the aggregate can be damaged or even break down into a large number of fragments that are small compared to the original aggregate (Kun & Herrmann, 1999; Shao, 2001, 2008; Kok et al., 2012) and can be carried away by the wind more easily. Kok (2011) even gives an analytical model, describing the particle size distribution (PSD) of particles generated by such fragmentation processes. Furthermore, size and morphology also impact the retrieval of particle properties from remote sensing (Muñoz et al., 2021; Lin et al., 2023). Thus, having a clearer picture on what kinds of physical properties are expected for particles that are ejected by certain dust-generation mechanisms can help in further increasing the accuracy of the information gained from remote-sensing data. For example, a recent work by Martikainen et al.
(2023) updates the optical properties of Martian dust by assuming more realistic particle shapes to simulate the Martian regolith, compared to the common assumption of particles being spheres or cylinders. The question remains: what kinds of properties for nonspherical dust particles would be realistic for specific dust-generation mechanisms? Recently, for example, Waza et al. (2023) found large aggregates to be removed by wind on a thick dust layer. For the mechanism of impact-generated dust in a mix of large grains and small dust, we connect to our earlier studies on particle ejection during impacts and extend these experiments in this study. We now measure the sedimentation speed and determine the aerodynamic properties of the particles that would regularly be considered too large to become entrained. We connect these measurements to particles' aggregate structure and shape.

## 2 Experiments

In order to get insight into the structure and aerodynamics of the particles liberated in saltating impacts, we approached the problem from two sides. First, we measured the sedimentation velocity and the geometrical size for ejected particles. Second, we caught sedimenting particles and analyzed their shape under an optical microscope and used a scanning electron microscope (SEM) for high-resolution images. To obtain the desired data, we used an extended version of the setup from our previous work (Becker et al., 2022).

### Impact Setup

The experimental setup consists of a plunger that accelerates large saltators into a dust bed containing Martian soil simulant (see Figure 1(b)). The saltators have sizes between 180 and 250 \(\mu\)m and are accelerated to speeds of \(1.04\,\mathrm{m\,s}^{-1}\pm 0.2\,\mathrm{m\,s}^{-1}\). These speeds are in agreement with wind shear velocities at the saltation threshold (Swann et al., 2020). Impact angles were also kept unchanged compared to our previous work, with \(18.8^{\circ}\pm 2.5^{\circ}\), a value well within the range proposed by several studies (Bagnold, 1941; Chepil, 1945; White & Schulz, 1977; Jensen & Sorensen, 1986). The whole setup was placed within a vacuum chamber. The experiments were carried out at a pressure of 1 mbar. The pressure is at the lower end of the surface pressures on Mars, but was mostly chosen for technical reasons. For larger pressures, small particles settle too slowly and are easily picked up by convective flows. For smaller pressures, particles are mostly free falling in the laboratory and not reacting to the gas flow on reasonable timescales or distances. To each side of the dust bed a recess was cut. Grains ejected during impacts fall through and sediment farther downward, as indicated in Figure 1(a). The particles were observed at 2000 fps using a NAC MEMRECAM HX-3 high-speed camera at a vertical distance of 3.6 cm below the top of the dust bed. The field of view was \(4544\times 2556\) \(\mu\)m, which means that falling particles could be observed over a maximum distance of 2.56 mm. The resolution of the camera was 3.55 \(\mu\)m/px and the error was determined to be 2 pixels to each side. Farther down the vacuum chamber, a microscope slide was placed to capture the sedimenting particles. They were analyzed using an AxioCam ICc 1 mounted on a ZEISS Axio Imager.M2 microscope, with a resolution of 0.37 \(\mu\)m/px and an error of 1 pixel to each side. A total number of \(\approx\)30 impact experiments were carried out and analyzed. Additionally, we used an SEM to gain information on the morphological structure of the individual sedimented particles.

Figure 1: Sketch of the impact setup and acceleration unit (not to scale).
(a) Experiment setup within the vacuum chamber in front view. The launcher is located behind the dust bed (not shown in the sketch). The brown lines schematically illustrate the sedimentation paths of the ejected particles. (b) Cross section of the launcher that launches saltators onto the dust bed.

### Soil: Martian Simulant

For the experiment, we reused the Martian Global Simulant (MGS) sample (Cannon et al., 2019) we used in our previous work (Becker et al., 2022) as well. That is, we produced a bimodal size distribution of fine clay-sized particles and larger sand-sized saltators. Using this method, we tried to imitate the steady mixing of fine particles sedimenting from the Martian atmosphere and larger particles moved by wind through saltation. If we assume a regular mixing in that manner, the surface layer of the soil would be comprised of such a bimodal PSD. We set up a 1:1 mixture of small (\(<\)20 \(\mu\)m) particles and a larger fraction centered around roughly 100 \(\mu\)m. The conditioned sample was then placed in the dust bed. As impactors, particles of 180-250 \(\mu\)m from the MGS reservoir were used.

## 3 Methods

### Sedimentation Speed and Aerodynamic Size

From the high-speed videos, particle tracks have been analyzed using the ImageJ plugin Trackmate (Ershov et al., 2022). Positions and timestamps were then used to fit a linear model over the trajectory and extract the sedimentation velocity from it. The videos also provided us with the time after launch at which each particle came into the field of view, as well as the particle diameters. The diameters were measured by hand for each particle individually at two different times to get an average. From the measured particle sizes and sedimentation velocities, we determined the aerodynamic size, i.e., the size of a spherical grain of the same material density that corresponds to a given sedimentation speed \(v_{Sed}\). The aerodynamical size can be determined from the particles' equation of motion. In the case of our experiments, the force \(F_{Sed}\) that acts on a particle is \[F_{Sed}=F_{G}-F_{S}\;, \tag{1}\] where \(F_{G}\) is gravity and \(F_{S}\) is the Stokes drag. At high pressure, the Stokes drag is given as \[F_{S}=6\pi\mu Rv_{Sed}(t)\;, \tag{2}\] with the particle radius \(R\) and dynamic viscosity \(\mu=1.8\cdot 10^{-5}\) Pa s for air. However, as we carry out the experiments at low pressure, a correction factor (Cunningham, 1910) has to be applied, due to the transition from continuum to free molecular flow. This correction can be characterized by the Knudsen number \(Kn=\frac{\lambda}{R_{g}}\), where \(\lambda\) is the mean free path of the gas molecules and \(R_{g}\) is the geometrical size. One parameterization of the correction factor is (Hutchins et al., 1995) \[C=1+Kn\cdot(\alpha+\beta\cdot e^{-\frac{\omega}{Kn}})\;, \tag{3}\] with \(\alpha=1.231\pm 0.0022\), \(\beta=0.47\pm 0.0037\), and \(\omega=1.178\pm 0.0091\) as empirical constants. The corrected Stokes drag \(F_{\mathrm{Sc}}\) is then \[F_{\mathrm{Sc}}=\frac{F_{S}}{C}\;. \tag{4}\] Using this and the gravitational force \[F_{G}=mg=\rho_{p}\frac{4}{3}\pi R^{3}g\;, \tag{5}\] with the material density \(\rho_{p}\), then gives \[v_{Sed}(R,\,t)=-\frac{1}{\gamma}\cdot(1-e^{-\gamma g\,t})\;, \tag{6}\] with \[\gamma=\frac{9\ \mu}{2\rho_{p}R^{2}gC}\;. \tag{7}\] We measured the average material density \(\rho_{p}=2460\,\mathrm{kg/m^{3}}\) with a pycnometer and a scale.
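The settling model condenses into a few lines of code. The following is a minimal sketch (in Python with numpy and scipy; the variable names, the example numbers, and the mean free path value at 1 mbar are our own illustrative assumptions, not quantities from the paper) of Equations (3)-(7), including the numerical inversion for the aerodynamic radius that is used below:

```python
import numpy as np
from scipy.optimize import brentq

MU    = 1.8e-5            # dynamic viscosity of air [Pa s], as in the text
RHO_P = 2460.0            # measured material density [kg m^-3]
G     = 9.81              # laboratory gravity [m s^-2]
LAM   = 66e-9 * 1013.0    # mean free path at 1 mbar [m], scaled from ~66 nm
                          # at 1013 mbar (an assumed, illustrative value)

def slip(R):
    # Cunningham slip correction, Equation (3), Hutchins et al. constants.
    Kn = LAM / R
    return 1.0 + Kn * (1.231 + 0.47 * np.exp(-1.178 / Kn))

def v_sed(R, t):
    # Magnitude of Equation (6): v(t) = (1/gamma) (1 - exp(-gamma g t)),
    # with gamma from Equation (7); 1/gamma is the terminal velocity.
    gamma = 9.0 * MU / (2.0 * RHO_P * R**2 * G * slip(R))
    return (1.0 - np.exp(-gamma * G * t)) / gamma

def aero_radius(v_meas, t):
    # Invert Equation (6) numerically for the aerodynamic radius.
    return brentq(lambda R: v_sed(R, t) - v_meas, 1e-7, 1e-3)

# Example with illustrative numbers: 0.3 m/s observed 50 ms after launch.
print(2e6 * aero_radius(0.3, 0.05), "um aerodynamic diameter")
```

As a side note on the porosity interpretation used later: neglecting the size dependence of the slip correction, the terminal velocity of a porous sphere scales with its bulk density, so the ratio of aerodynamic to geometric size is roughly \(\sqrt{1-\mathrm{porosity}}\); a ratio of 0.71 thus corresponds to about 50% porosity.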
Using the measured velocity and time from the camera images for each particle, we then numerically solved Equation (6) for \(R\), which in this case is the corresponding aerodynamical radius \(R_{\alpha}\), and formed the ratio of aerodynamical to geometrical size for each particle. ### Shape Distribution From the optical microscope images, we retrieved data regarding the shape of the particles. We identified the longest and smallest axis in the cross sections of captured particles and formed the corresponding axis ratio. Additionally, we determined the diameter of a circular particle with the same cross section (mean diameter) for all 5318 individual particles. ## 4 Results ### Sedimentation Speed and Aerodynamic Size The data for the sedimentation velocity are shown in Figure 2. A total of 92 data points are presented, representing particles from 20 \(\mu\)m to 300 \(\mu\)m in diameter. The dashed horizontal line shows the maximum velocity that a frictionless free-falling particle can reach, if it starts at rest at the height of the dust bed. Besides the expected trend that larger particles sediment faster, Figure 2 shows a considerable spread of sedimentation velocities for similar particle sizes. The sedimentation velocities for same-sized particles can differ by a factor of up to 5 for small particles \(<\)50 \(\mu\)m. While this factor goes down for increasing particle sizes (e.g., to about 3 for particles around 100 \(\mu\)m), this deviation still implies that the geometrical size is not the only aspect determining the sedimentation velocity. To better evaluate the geometrical particle sizes and their corresponding sedimentation velocities, we determined the ratio of aerodynamical size and geometrical size. The ratio in dependence from the geometrical size is plotted in Figure 3. Here, a value smaller than 1 means that the particle's drag corresponds to a particle smaller than the observed geometrical particle size (slow regime). It can be seen that, apart from three exceptions, all data points are within the slow regime, reaching down to ratios below 0.3. For reference, the blue dotted line within the plot at a ratio of 0.71 corresponds to a spherical aggregate with a porosity of 50% (50% of the aggregate volume consists of solid material). The majority of particles being below that line implies that most observed particles are highly porous in nature. Figure 4 shows the distribution of porosities over the whole data set, apart from the three exceptions that have an aerodynamical-to-geometrical size ratio greater than 1 and consequently would have a porosity smaller than 0% as well, which obviously does not make sense. Those three particles are most likely ellipsoidal, monolithic grains, well aligned to the airflow. ### Scanning Microscopy This interpretation of highly porous aggregates coincides with SEM images of captured sedimented dust. Due to limited resources, we could only take eight images in total, of which three are presented in Figure 5. The full sample set is shown in the Appendix (see Figure 7). Based on our observations, we analyzed the morphological structure of the particles and categorized them into three different types: small aggregates, large aggregates shattered upon impact, and monolithic particles. An example of each type is shown in Figure 5. A small aggregate of 20-30 \(\mu\)m in diameter can be seen in Figure 5(a). The visible part mainly consists of single particles on the order of 1 \(\mu\)m. 
Some small grains and aggregates of around 5 \(\mu\)m next to it suggest that they have been shed off upon impact on the sample holder. Figure 5(b) shows a large area covered in small aggregates and monolithic particles of \(<\)5 \(\mu\)m up to \(>\)20 \(\mu\)m. The overall thin coverage of the sample holder compared to the high particle density and locality of this particular patch of material leads us to believe that these are the fragments of a large aggregate that shattered upon impact on the sample holder. We estimate the original aggregate to have been on the order of 100 \(\mu\)m. Last, Figure 5(c) shows an example of a monolithic particle about 80 \(\mu\)m in size. Only a small number of micron-sized grains adhere to its surface. Additionally, it can be seen that the particle has a slightly prolonged shape. These three cases represent the different types of particles observed. Having small and large aggregates as well as monolithic particles as ejecta matches the data presented in Figure 3.

### Shape Distribution

All particles--even if they are highly porous--are overall rather round in shape. They are not perfectly spherical, though. Since a particle's shape also impacts the sedimentation speed, strongly elongated particles could either fall slightly faster or slower than a sphere with analogous average geometrical diameter, depending on their alignment to the gas flow. Cluster-cluster aggregates (CCAs), for example, tend to align with the longest axis downward (Wurm & Blum, 2000). If compact aggregates do so as well, they would fall slightly faster than their spherical counterparts, due to the reduction of the cross section exposed to the headwind. To investigate this topic, we plotted the axis ratio of the longest and smallest axes of each particle over the diameter of a circular particle with the same cross section in Figure 6. Out of all data points, 96.3% lie below an axis ratio of 3 and 86.6% lie below an axis ratio of 2. Furthermore, there is a tendency that the \(y\)-axis spread of the data points decreases toward larger grain sizes, keeping in mind that the number of data points also decreases with increasing particle size. As the microscope footage does not distinguish between aggregates and monolithic particles, we cannot give specified information on the shape distribution for either population, but, in general, the particles are not very elongated. Thus, even if the alignment is not perfect, we only expect a small effect on the sedimentation speed due to irregular shapes.

Figure 4: Histogram of the ratio of aerodynamical size to geometrical size for each particle mapped to the corresponding porosity.

Figure 3: Ratio between aerodynamical and geometrical size. Particles with values smaller than 1 are slower than expected for their geometrical size. The dashed blue line represents aggregates with 50% porosity (50% of the aggregate volume is devoid of solid material). The black error bars represent the error caused by measurements of the geometrical size and the red error bars show the error for a \(+5\%\) shift in time.

Figure 2: Sedimentation velocity over geometrical size (measured particle diameter) of the 92 observed particles. The dashed blue line is the freefall velocity from the dust bed height. For small geometrical sizes, the measured sedimentation velocity varies by up to a factor of 5 between nearly same-sized particles, while it decreases for increasing geometrical sizes.

## 5 Discussion

The measured times between lift and observation could be slightly larger.
As indicated by the red error bars in Figure 3, which show the effect of a \(+5\%\) shift in time on the measurements, such an adjustment would only considerably affect particles with a large geometrical size and a high ratio of aerodynamic to geometric size. Small particles reach terminal velocity quickly. Thus, a small change in timing does not affect the results noticeably. The same holds for very slow particles, such as particles with a large geometrical size, but a low ratio of aerodynamic to geometric size. Consequently, as can be seen in Figure 3, a small shift in time does not noticeably change the majority of our measurements and therefore does not change our conclusions either.

## 6 Conclusions

We analyzed sedimentation velocities for particles ejected from slow saltating impacts. We find a considerable spread of sedimentation velocities for similar particle sizes. The spread can be as large as a factor of 5 between the slowest and fastest sedimentation velocity for same-sized particles. The shape analysis of the sedimented particles in Figure 6 shows that the overwhelming majority of particles are rather round in shape, having a long axis-to-small axis ratio of less than 2. Based on this finding, we would expect particles of the same geometrical size to have very similar sedimentation velocities, if they were monolithic. Thus, we see the discrepancy in the sedimentation velocities of same-sized particles in Figure 2 as an indication of particles being aggregates rather than having complex shapes. This argumentation is further supported by our findings from Figure 3. It shows that most particles have aerodynamical sizes that are smaller than their geometrical sizes, i.e., the particles sediment slower than they would be expected to. For some particles, the ratio of aerodynamical size to geometrical size even reaches down to values below 0.5, while for most particles it still is below 0.7. Keeping Figure 6 in mind, particle shape is a very unlikely cause of such low values. Taking a monolithic particle with an axis ratio of 2:1, the extreme values for its ratio of aerodynamical to geometric size would be 1.41 and 0.71, which are still greater than the ratios for most particles in Figure 3. Consequently, the findings from Figure 3 strongly point to particles being porous aggregates.

Figure 5: SEM images of captured particles. The black lines in the background are cracks in the adhesive coating. (a) A mostly intact aggregate of about 20-30 \(\mu\)m in diameter. Individual grains have sizes down to 1 \(\mu\)m and less. (b) An aggregate of about 100 \(\mu\)m that fragmented upon impact. (c) A monolithic particle of about 80 \(\mu\)m with a little micron-sized dust attached.

Figure 6: Ratios of the longest to smallest axes of particles from optical microscope images over surface equivalent diameter. The area between long axis-to-small axis ratios of 1 and 2 (highlighted in green) contains 86.6% of all particles, and the area between long axis-to-small axis ratios of 2 and 3 (highlighted in blue) contains 9.7% of all particles.

The SEM images align with that reasoning as well, since they show aggregates as well as monolithic particles, with all of them generally having rather round shapes. Though the results from SEM data are only of a qualitative nature, they match the results from the shape and sedimentation speed analyses well.
Thus, taking all our findings into account, we argue that the main reason for the discrepancy in the sedimentation velocities of the same-sized particles in Figure 2 is a difference in the porosity of the aggregates and monolithic particles, rather than an increase in the complexity of the shapes. Ejecta from saltating impacts contain monolithic particles as well as aggregates, and they are not expected to have extreme shapes or morphologies. We know that for monolithic particles, 3 \(\mu\)m (effective diameter) is widely considered to be the upper size limit for suspension on Mars. As for the aggregates, we can estimate an upper size limit for long-term suspension using Figure 3. The lowest ratios of aerodynamical to geometric size are around 0.3. If we apply this ratio to a monolithic 3 \(\mu\)m grain to find the aggregate with the same aerodynamical size, we get an aggregate with a geometrical size of about 10 \(\mu\)m. Thus, the size limit for aggregates would be 10 \(\mu\)m, based on our estimation. Applied to Mars or other planets with a sufficiently thick atmosphere, this shift in the geometric size of suspendable particles influences the total mass budgets of transported dust and optical properties. Since we do not know what fraction of the overall ejecta is constituted by aggregates, we cannot give a specific value for the impact on mass budgets. However, given our findings, an upward shift for the estimated mass is to be expected. Thus, saltation on Mars could bring a wider range of particle sizes into long-term suspension in the atmosphere than previously thought. The shift would have an effect on the optical properties of airborne dust as well, since the size and scattering behavior of aggregates vary from those of monolithic grains. What that means in detail would have to be investigated by further research, though. For future Mars missions, in situ measurements close to the Martian surface and in the atmosphere would be of great interest for us, to determine if mass budgets and circulation models have to be updated, as our study implies.

## Acknowledgments

This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 101004052. We thank the two anonymous referees for the thorough review of our manuscript.

## Appendix: Supplemental Material

Here we add a figure with the complete set of SEM pictures taken, sorted by case. The first column of Figure 7 represents Figure 5.

## ORCID iDs

O. Muñoz https://orcid.org/0000-0002-5138-3932
J. Martikainen https://orcid.org/0000-0003-2211-4001
G. Wurm https://orcid.org/0000-0002-7962-4961
2308.13321
The Arrangement of Marks Impacts Afforded Messages: Ordering, Partitioning, Spacing, and Coloring in Bar Charts
Data visualizations present a massive number of potential messages to an observer. One might notice that one group's average is larger than another's, or that a difference in values is smaller than a difference between two others, or any of a combinatorial explosion of other possibilities. The message that a viewer tends to notice--the message that a visualization 'affords'--is strongly affected by how values are arranged in a chart, e.g., how the values are colored or positioned. Although understanding the mapping between a chart's arrangement and what viewers tend to notice is critical for creating guidelines and recommendation systems, current empirical work is insufficient to lay out clear rules. We present a set of empirical evaluations of how different messages--including ranking, grouping, and part-to-whole relationships--are afforded by variations in ordering, partitioning, spacing, and coloring of values, within the ubiquitous case study of bar graphs. In doing so, we introduce a quantitative method that is easily scalable, reviewable, and replicable, laying groundwork for further investigation of the effects of arrangement on message affordances across other visualizations and tasks. Pre-registration and all supplemental materials are available at https://osf.io/np3q7 and https://osf.io/bvy95 .
Racquel Fygenson, Steven Franconeri, Enrico Bertini
2023-08-25T11:51:42Z
http://arxiv.org/abs/2308.13321v1
# The Arrangement of Marks Impacts Afforded Messages: Ordering, Partitioning, Spacing, and Coloring in Bar Charts

###### Abstract

Data visualizations present a massive number of potential messages to an observer. One might notice that one group's average is larger than another's, or that a difference in values is smaller than a difference between two others, or any of a combinatorial explosion of other possibilities. The message that a viewer tends to notice - the message that a visualization affords - is strongly affected by how values are arranged in a chart, e.g., how the values are colored or positioned. Although understanding the mapping between a chart's arrangement and what viewers tend to notice is critical for creating guidelines and recommendation systems, current empirical work is insufficient to lay out clear rules. We present a set of empirical evaluations of how different messages-including ranking, grouping, and part-to-whole relationships-are afforded by variations in ordering, partitioning, spacing, and coloring of values, within the ubiquitous case study of bar graphs. In doing so, we introduce a quantitative method that is easily scalable, reviewable, and replicable, laying groundwork for further investigation of the effects of arrangement on message affordances across other visualizations and tasks. Pre-registration and all supplemental materials are available at https://osf.io/np3q7 and https://osf.io/bvy95, respectively.

Perception & cognition, Methodologies, Human-subjects qualitative studies, Human-subjects quantitative studies, Charts, diagrams and plots, General public

## 1 Introduction

Visualization evaluation and design are often guided by a ranking of visual variables developed on precision-based criteria (e.g., response time, exactness of read values) [19, 22, 26, 27, 51]. Other visualization guidance is based on intuition [43], or extrapolated from cognitive psychology experiments that use far simpler stimuli (e.g., sets of shapes) and different participant tasks [28, 29, 33, 2, 2, 18]. Precision-based evaluation provides limited guidance in designing an effective visualization. While precision can ensure quantitative data is estimated accurately from graphical depictions, it is not sufficient to guarantee efficacy. Visualization designs can convey data in precise ways, yet not make an intended message obvious, and imprecise designs can still make intended messages obvious and intuitive [3]. Similarly, two visualizations can show the same data with equal precision, but communicate significantly different messages. Consider the pair of graphs in Figure 2, taken from Alberto Cairo's popular blog "The Functional Art." Both graphs encode number of COVID-19 cases using the height of aligned bars, but differences in ordering and spatial proximity of their bars convey markedly different trends in case numbers. The top graph sorts bars in descending order regardless of time, implying a consistently decreasing trend, while the bottom communicates that the numbers of cases by county increase before they decline. Thus, it is possible for simple changes in the arrangement of parts of a chart to impact the message that a viewer is likely to grasp. More generally, as existing research posits, data visualizations' design can afford potential takeaways [37, 53, 54, 60].
In practice, past research has investigated afforded messages by examining how differences in visualizations can compel viewers to reason differently [53], alter the type of comparisons they make [54], and most commonly vary their description of underlying information [60, 37, 54]. In this paper, we explore a novel metric for evaluating afforded takeaways: Do some arrangements of marks (i.e., visual objects in a graph) make messages more obvious than others? To answer this question, we need to 1) enumerate a possible set of mark arrangements and a possible set of subsequently afforded messages and 2) investigate if these arrangements impact the obviousness of the identified messages. We present four experiments that investigate how four arrangements of data marks (ordering, partitioning, spacing, coloring) affect the subjective match of visualization designs to a set of messages (ranking judgments, group comparisons, and part-to-whole relationship judgments).

## 2 Related Work

While precision-centric evaluation methods remain extremely popular [19], alternate methodologies have been advocated in a collection of evaluation-focused papers [3, 4, 27, 32, 48], and motivated the long-running IEEE VIS workshop _BEyond time and errors: novel evaLuation methods for Information Visualization_ (BELIV). In alignment with growing consideration for new metrics of evaluation, studies have explored visual metaphors [61], memorability [1, 5, 6], deeper insights [27], implicit takeaways [54], and afforded reasoning [53].

### Current Methods for Exploring Visualization Affordances

In the late 1990s, Zacks and Tversky, and Shah et al. explored differences in reader takeaways incurred by bar and line charts, finding that bars imbue a sense of discreteness, while line charts imply continuous relationships [37, 60]. In both of these seminal works, researchers employed qualitative methodologies, showing graphs to participants and then hand-coding their open-ended descriptions. This method provides strong benefits by allowing findings to arise organically, without the need for pre-declared hypotheses. The inverse of this method-in which researchers describe a relationship between data points and ask participants to draw a corresponding graph-offers similarly beneficial evidence [16, 60, 49]. Hand-coding qualitative survey responses continues to be used to study visualization affordances, by asking participants to type out or voice their takeaways [8, 23, 54, 53]. But this methodology is time-consuming and labor-intensive. Even more problematic, as diligently reported by Xiong et al., this hand-coding is often unable to resolve the natural syntactic and semantic ambiguities in sentences that people type [54]. As a simple example, imagine a bar graph showing the sizes of two birds and two squirrels. If a viewer says, "the birds are bigger than the squirrels", they could mean that any bird is bigger than any squirrel, or that the birds are bigger than squirrels on average. Open-ended methods are powerful exploratory tools, but require the (sometimes impossible) resolution of ambiguities to effectively study visualization affordances. Thus, we present a complementary methodology to such approaches. We employ a confirmatory design that restricts the space of tested stimuli and messages, but provides efficient, replicable data to verify the impact of visual arrangements on afforded messages.
Similar approaches include asking participants to report their opinion, often using Likert scales [40], on how much different visualizations support semantic variables (e.g., "stable", "rigid", "complete") [62], on the trustworthiness or bias of visualizations [30, 21], on their agreement with provided statements [20, 21], or on amount of risk to themselves or others before and after seeing visualizations showing pandemic information [31]. We present a comparably empirical approach, but focus on general reader takeaways, a subject matter that-to the best of our knowledge-has only been quantitatively evaluated once before, in Xiong et al.'s Experiment 2, and with a much smaller (n=45 experts vs our n \(\geq\) 130 general public) sample [54]. For a more detailed comparison of our work to Xiong et al., see SM7 in Supplemental Materials.

### Bar Chart Research

Bar charts, one of the most prevalent types of visualization [24], are a common subject of visualization evaluations and produce study results that have been generalised to other graph types [26]. Foundational research in visualization, including the widely cited Cleveland & McGill, Zacks & Tversky, Shah et al., Bateman et al., and Heer & Bostock papers, seeks to evaluate fundamental paradigms of visualizations and their communication by studying bar charts [1, 11, 17, 41, 59, 60]. Takeaways from these papers establish core tenets of bar chart interpretation, including how accurately one can discern bar chart lengths given different placement and heights of the bars [11, 17, 41, 59], and that arranging bars in groups using irregular spacing leads to readers "visual[ly] chunking" their takeaways accordingly [37]. For the most part, best practices using bar charts can be informed via the effectiveness ranking of channels [26]. In this work, we explore the effect of bar chart arrangements beyond classic manipulated variables (e.g., x-axis alignment, height differences) and the classic dependent variable of precision (e.g., response time, read accuracy). To further the understanding of visualization design, we include conditions that have been tested (e.g., bar alignments vs misalignment), and those that have yet to be investigated (e.g., spacing vs coloring to convey grouping of bars), and focus on the impact of these conditions on message obviousness. Thus we present novel results on how bar chart arrangements' obviousness of messages can align, or fail to align, with precision-based design decisions. See Figures 4, 5, 6, and 7 for tested messages, bar charts, and results.

**Spacing**, in this context, addresses the use of spatial proximity to organize visual objects into groups. Spacing is used in grouped bar charts, exploded pie charts, and grouping within Sankey diagrams or alluvial plots.

**Coloring**, in this context, describes the use of different hues or levels of saturation to group elements and/or distinguish between elements of different types.
Examples of coloring used in this way include colored dots in scatter plots and colored bars in bar charts.

#### 4.1.3 Experiment 3 - Spacing

Experiments 3 and 4 investigate the affordance of grouping messages, given bars with varied ordering, spatial proximity, and coloring. Experiment 3 tests uniform vs irregular spacing to determine if, as is currently maintained in visualization [26, 37], visual perception [47], and psychology [7, 52] literature, increasing the space between bars, and therefore the proximity of some bars to others, affords grouping. We hypothesize that **(H3A)** bar charts with uniform spacing (Fig.
5, A & B) will make messages about overall extrema more obvious than bar charts with grouping implied via irregular spacing (Fig. 5, C & D). This hypothesis stems from pilot study results and, while logical given gestalt principles [47], was not immediately obvious to us before collecting pilot data. We also hypothesize that **(H3B)** bar charts with irregular spacing defining elements in groups (Fig. 5, C & D) will make messages that discuss those groups more obvious than bar charts with uniform spacing (Fig. 5, A & B).

#### 4.1.4 Experiment 4 - Color vs Spacing

Experiment 4 compares the strength of spatial grouping to color grouping in bar charts (see Figure 6). Research on the hierarchy of visual grouping mechanisms has found that proximity conveys grouping more strongly than similar coloring [2, 7, 15, 26, 33]. Interestingly, the majority of studies that investigate visual grouping do not examine bar charts, focusing instead on dot lattices [2, 7, 18, 28, 29, 33]. We hypothesize that **(H4A)** bar charts with color groups and regular spacing (Fig. 6, A & B) will make messages about overall extrema more obvious than those with groups defined by spacing (Fig. 6, C & D), because the communication of grouping will be less strong in the color-grouped charts and thus easier to ignore when evaluating extrema over multiple groups. Informed by the same known hierarchy, we also hypothesize that **(H4B)** bar charts with groups defined by spacing (Fig. 6, C & D) will make messages that discuss those groups more obvious than charts with groups defined by color (Fig. 6, A & B).

#### 4.1.5 Methodological Checks

Lastly, to check the methodological rigor of our survey, we include the following message-graph questions and their pre-registered hypotheses. To confirm that participants are not swayed by familiarity bias, thus always reporting that height-ordered bar charts (Fig. 3, C & D) make all messages most obvious, we hypothesize that **(H1C)** charts with bars sorted in a specific order (Fig. 3, A) will make messages comparing bars grouped in that order (Fig. 3, Order.7) most obvious due to the proximity of the bars in question. In Experiment 2, we also test some pairs of messages with alternate wordings and sentence structures to examine the effect of the style of messages on our results. We hypothesize that **(H2E)** messages communicating the same concept, despite rewording (Fig. 4, Partitions.3 and .4, .7 and .8, .9 and .10), will not produce different results.

### 4.2 Stimuli Design

Experiments 1-4 investigate variations in bar chart arrangements (see Figs. 3 to 6). Unlike much prior affordance work in visualization, we focus on varying arrangements within bar charts as opposed to visualization encoding types (e.g., bar charts, line charts, pie charts) in the interest of evaluating design decisions that are less commonly investigated in visualization literature, education, and recommendation systems. This decision also reinforces the validity of our survey design; asking participants to select between multiple types of visualizations increases the risk of familiarity bias (i.e., participants always selecting visualizations that they have seen more often) clouding participant judgment. Due to the lack of visual variance, we believe the familiarity differential of bar charts with varying arrangements (e.g., ordered ascending vs descending) is much smaller than that of different visualization types (e.g., pie chart vs tree-map).
#### 4.2.1 Experiment 1 - Order

Experiment 1 investigates ordering of bars and is motivated by a lack of research into the effects of such design decisions. The majority of research on ordering of bars generally fails to investigate higher-level, participant-reported takeaways in favor of response time, precision, eye-tracking, and cognitive effort models [12, 14, 25, 34]. In all four experiments we test four different visualization conditions for the sake of methodological consistency. Experiment 1 consists of the same bar chart in ascending, descending, and alphabetical order, as well as a fourth, "wildcard" ordering in which the tallest bars are centered, forming a \(\wedge\) shape (see Fig. 3). This last arrangement was motivated by the desire to test four conditions, and by an interest in how the Gestalt Law of Symmetry, which states that people tend to perceive symmetrical shapes and prefer visual symmetry [47, 52], might have an unexpected effect on message obviousness (spoiler alert: it didn't).

#### 4.2.2 Experiment 2 - Partitions

Experiment 2 investigates the representation of part-to-whole bar charts. The primary motivation behind the development of its visualization conditions was to explore the obviousness of part-to-whole relationships given different partitioning. The hierarchy of visual encoding channels (as discussed in the Related Work) is universally used to inform effective visualization design [26, 51], and can be used to justify the replacement of all part-to-whole visualizations (i.e., pie charts, stacked bars) with side-by-side bar charts. This replacement prioritizes precision but has not been shown to better facilitate the communication of relationships or other non-precision messages. In fact, previous work investigating the efficacy of pie and bar charts challenges the effectiveness hierarchy when completing certain tasks [38]. Experiment 2 seeks to investigate how the hierarchy of effectiveness compares to the affordance of part-to-whole messages in side-by-side (aligned) and stacked (unaligned) bar charts. Thus, Experiment 2's visualization space consists of one dataset split into two groups of three bars, both side-by-side and stacked (Fig. 4, B & D), and the same data split into three groups of two bars, both side-by-side and stacked (Fig. 4, A & C). To determine the color scheme, we selected three colors from a widely used categorical color palette from Tableau1, a popular software for making visualizations. We selected these colors by avoiding hues that are strongly associated with warning (i.e., red, orange, yellow). Next we used Color Oracle2, free software that simulates common forms of Color Vision Deficiency (CVD) [46], to evaluate and slightly alter the luminance of our chosen colors so as to increase their distinction for viewers with CVD.

Footnote 1: [https://help.tableau.com/current/pro/desktop/en-us/viewparts_marks_marketproperties_color.htm](https://help.tableau.com/current/pro/desktop/en-us/viewparts_marks_marketproperties_color.htm)

Footnote 2: [https://colororacle.org/index.html](https://colororacle.org/index.html)

#### 4.2.3 Experiment 3 - Spacing

Experiments 3 and 4 seek to investigate how color and spatial arrangements of marks afford grouping. Experiment 3 is designed to replicate previous findings that irregular spacing strongly implies groups among bar charts [9]. Thus Experiment 3's stimuli design consists of two different orderings of bars, each regularly spaced (Fig.
5, A & B), and then irregularly spaced into groups: a condition with two groups of three bars, and a condition with three groups of two bars (Fig. 5, C & D).

#### 4.2.4 Experiment 4 - Color vs Spacing

Experiment 4 shares much of the same motivation as Experiment 3, but investigates a less strongly supported theory on grouping in bar charts. While proximity is generally agreed to imply grouping more strongly than similar coloring [2, 7, 15, 26, 33], this has not been directly measured in bar charts. Thus, Experiment 4 presents a novel investigation into the hierarchy of afforded grouping in bar charts. To do so, Experiment 4 replicates proximity-grouped conditions from Experiment 3 (Fig. 6, C & D), and compares them to equivalent bar charts that use color grouping instead (Fig. 6, A & B). We reuse the CVD-friendly color scheme from Experiment 2 in this experiment as well.

### Procedure

All four of our within-subjects experiments were implemented through a Qualtrics survey. After reading and approving a consent form, participants were given the option to self-report their education level and whether they had CVD (mentioned by name and colloquialized as "color blindness" in our survey). Participants were then instructed to make their browser window as large as possible and primed on the types of graphs they would see (see SM1 in Supplementary Materials for the language used). They were then shown a page comprised of four charts with varied arrangements, a short sentence describing the content of the charts, and, as an attention check, a message with a fill-in-the-blank drop-down consisting of two possible answers, one of which correctly described the data depicted in all four chart conditions. Participants were instructed to _1. Use the charts below to fill in the blank_ and _2. Then select the chart that makes the statement below most obvious to you_. See SM2 in Supplemental Materials for an example survey question. Participants were shown between 6 and 10 of these questions, depending on the number of tested messages in the experiment. This methodology, which produces quantitative and easily replicable evidence of subjective takeaways, stands in contrast to many similarly motivated investigations, which show participants visualization stimuli and ask them to describe it [37, 54, 56, 60], or show participants a description and ask them to represent the information with a visual creation [60, 16, 44]. As mentioned in Section 2.1, to the best of our knowledge, the only experiment with similar methodology to ours is Xiong et al.'s Experiment 2 (see SM7 in Supplemental Materials for a more detailed comparison) [54]. Both the order in which participants viewed questions and the order in which charts were presented in the quadrant of every question were randomized using the Qualtrics "randomization" functionality. The order of the drop-down answers for the fill-in-the-blank was not randomized, but held consistent, with terms like "smaller," "less," and "least" appearing above terms like "larger," "more," and "most," so as not to confuse participants or lead to incorrect selection despite correct comprehension. We added the fill-in-the-blank question both as an attention check, and to compel participants to actually read and consider the message when reporting the graph that made it the most obvious.
Without this experimental design detail, we would have little way of knowing if participants actually read and reported their opinions on the message, because all four conditions show the same data and are therefore technically "correct" answers. While our pre-registered analysis plan4 dictates excluding a participant's chart selection if they incorrectly answer the corresponding drop-down question, we find very little inconsistency between the reported obviousness of charts from participants who correctly and incorrectly answer the drop-down. See SM4 in Supplemental Materials for a comparison of results with and without this exclusion criterion.

Footnote 4: [https://osf.io/np3q7](https://osf.io/np3q7)

### Participants

Participants were recruited via the online platform Prolific5. Prolific connects scientific researchers with eligible human studies participants, and offers a number of services to facilitate high-quality, ethical human-subjects research, including enacting specified inclusion and exclusion criteria, encouraging fair pay rates for participants, and facilitating compensation directly. Using Prolific, we recruited participants who were over the age of 18, fluent in English, current residents of the United States, and had high (\(\geq\) 98%) approval rates on the platform, and constructed a study population that was roughly balanced on reported sex, as stated in our pre-registered study plan6. Also via Prolific, we compensated all participants 1.60USD for their participation, given an anticipated participation time of 8 minutes, for an estimated rate of 12.00USD/hour.

Footnote 5: prolific.co

Footnote 6: [https://osf.io/np3q7](https://osf.io/np3q7)

## 5 Results

### Participants

A total of 610 participants were recruited via Prolific. Of these, 591 (Exp. 1 n = 147, Exp. 2 n = 166, Exp. 3 n = 140, Exp. 4 n = 138) completed the full survey with no higher than a 30% error rate, passing the universal exclusion criteria, and were included in our final data analysis. For a breakdown of participants' reported sex, education, and color vision deficiency for each experiment, see Table 1. Exact sample size per message varies based on the number of participants who selected the corresponding drop-down correctly, although all sample sizes are equal to or greater than our minimum pre-registered sample size of 128. For a breakdown of sample size per tested message see SM3 in Supplementary Materials. Initially, Experiment 1 included multiple un-piloted messages that resulted in very high (\(>25\%\)) error rates. We hypothesized that these errors were most likely due to ambiguous or overly convoluted messages. We re-wrote these messages to be more straightforward7, and re-ran the entire experiment, drastically decreasing error rates to \(\leq\) 12%. We report the results from the final Experiment 1 in this paper.

Footnote 7: for differences in the preliminary and final run of Experiment 1, compare the pre-registered design ([https://osf.io/np3q7](https://osf.io/np3q7)) with the design reported in this paper

### Analysis

We exclude all participants who answered \(>30\%\) of all drop-down answers incorrectly. Participant responses are analyzed using the Sison-Glaz procedure for estimating multinomial proportion confidence intervals [39], as implemented by the Python library statsmodels.stats [36]. Due to the multinomial nature of this procedure, no correction for family-wise error rate is necessary. Using the worst-case multinomial proportion table (Table 1) from Steven K.
Thompson's "Sample Size for Estimating Multinomial Proportions" [42], we determine the minimum sample size for a 95% confidence interval within a maximum specified distance from the true proportion, \(d\), of 0.1 to be 128 participants per experiment. We elect to only conduct a visual analysis of the confidence intervals, avoiding null hypothesis significance testing and its common pitfalls (e.g., type II statistical errors) [13]. We present and discuss results of all four experiments using the language and best practices of statistical analysis for Human Computer Interaction [13]. In Figures 3 to 6, we visualize the actual proportions and 95% confidence intervals for all four conditions given each message tested. We highly encourage all readers to view and determine the strength of results for themselves, but will summarize visual findings using hedged language as advised by [13].

### Experiment 1 Results

Experiment 1 results are visualized in Figure 3. The data in Figure 3, Maximum-Centric support **H1A** with a series of visually distinct signals that bar charts formatted in descending order (condition C) make messages concerning the largest, second-, and third-largest bars more obvious than those formatted in ascending, alphabetical, or centrally-peaked order (D, A, B), noting that this signal is much weaker for message Order.1, which concerns the largest value in the bar chart. Observed alone, there is not a sizeable difference in CIs to support condition C affording Order.1 more than condition D. Yet, when taken into context with messages Order.2 and Order.3, a more consistent signal of interest emerges. The data in Figure 3, Minimum-Centric provide a similar series of visually distinct signals that bar charts formatted in ascending order (condition D) make messages concerning the smallest, second-, and third-smallest bars more obvious than those formatted in descending, alphabetical, or centrally-peaked order (C, A, B). This signal can be seen to grow stronger (i.e., the distance between CIs for ascending and descending conditions increases) as the messages concern more convoluted (e.g., second- and third-order) rankings. This increase in signal suggests that readers do not simply report increased obviousness due to marks of interest being immediately proximate to the left side of a chart, and that ascending and descending conditions still have an impact on the obviousness of messages concerning bars that are more centrally located (e.g., third-largest and -smallest bars). Finally, the data shown in Figure 3, Method Check support **H1C** with a strong signal that bar charts formatted in a particular order (condition A) make messages comparing companies grouped in that order (Order.7) more obvious than any other tested bar charts. This result is supported by cognitive psychology research that credits proximity with the ability to suggest grouping [7, 26, 28, 47, 52, 50].

### Experiment 2 Results

Experiment 2 results are visualized in Figure 4. Supporting **H2A**, the data in Figure 4, Whole Comparisons present a consistent, visually distinct signal that, in part-to-whole charts, stacking bars (conditions C, D) makes messages concerning comparison of the whole more obvious than arranging them side-by-side (conditions A, B). Inversely, the data in Figure 4, Part Comparisons support **H2C** by presenting a consistent, visually distinct signal that bars arranged side-by-side (conditions A, B) make messages regarding the comparison of single parts more obvious than their stacked equivalents (conditions C, D). The data in Figure 4, Proportions provide fairly strong evidence to support **H2B**: stacking bars (condition D) makes messages regarding a single part as a percentage of a three-part whole (messages Partitions.7, 8) more obvious than a side-by-side arrangement (condition B).
At the same time, the visualized CIs in Figure 4, Proportions provide no evidence to support **H2D**, the same difference in signal when messages regard a single part as a percentage of a two-part whole (messages Partitions.9, 10). This difference could be explained by a visual processing capacity limit of two colors at once [55, 35]. For further discussion, see Section 6.1. Finally, Experiment 2 renders very similar CI results when testing re-wordings of the same messages (see red annotations in Fig. 4). This similarity supports **H2E** and the methodological validity of the survey by addressing concerns of potential confounding due to phrasing variations.

### Experiment 3 Results

Experiment 3 results are visualized in Figure 5. The data in Figure 5, Ranking support **H3A** by depicting a consistent, visually distinct signal that bar charts without irregular spacing (conditions A, B) make messages concerning overall extrema more obvious than bar charts with irregular spacing (conditions C, D). The data in Figure 5, 3 Groups and Figure 5, 2 Groups support **H3B** by displaying a consistent, visually distinct signal that bar charts grouped via irregular spacing (conditions C, D) make messages concerning those groups more obvious than bar charts with identical ordering but uniform spacing (conditions A, B).

### Experiment 4 Results

Experiment 4 results are visualized in Figure 6. The data in Figure 6, Ranking slightly support **H4A** with a consistent signal that bar charts with color grouping (conditions A, B) make messages concerning overall extrema more obvious than bar charts with proximity grouping (conditions C, D). The differences in signal between conditions A-B and C-D appear reliable, but are not as wide as hypothesis **H4A** posited. The data in Figure 6, 2 Groups do not support **H4B**. Instead, they show a consistent, visually distinct signal that bar charts with color grouping (condition A) make messages concerning two groups of three bars more obvious than bar charts with spatial grouping (condition C), which is the inverse of our hypothesized hierarchy. The data in Figure 6, 3 Groups display visually approximate confidence intervals for conditions B and D, which, while different from the results in Figure 6, 2 Groups, still do not support **H4B**. We speculate the reason for this difference could be a visual processing capacity limit of two colors [55, 35], as discussed in Experiment 2. For further discussion, see Section 6.1.

## 6 Discussion

In this paper, we present four experiments investigating differences in visual arrangements' afforded messages. We do so through an empirical methodology that evaluates which arrangements of marks increase the obviousness of potential takeaways.

Fig. 3: Experiment 1 Results. Tested conditions are shown across the top of the figure. Below, lines encode 95% CIs for the proportion of respondents that report each condition makes the given message most obvious. Circles encode the actual proportion observed from the experiment.

### 6.1 Main Takeaways

We summarize the following outcomes from the Results section:

1. Messages concerning the largest, second-, and third-largest bars are made the most obvious by bars sorted in descending order from left to right (Fig. 3, Maximum-Centric).
2. Messages concerning the smallest, second-, and third-smallest bars are made the most obvious by bars sorted in ascending order from left to right (Fig. 3, Minimum-Centric).
Takeaways 1 and 2 advise researchers and designers alike that the ordering of marks in a bar chart affects the affordance of messages about ranking. These takeaways exist within the context of the tested charts in Experiment 1 and the English-speaking nature of our participants. Still, these findings help bolster empirical evidence surrounding the impact of sorting bars, much of which has been conflicting. For example, Tversky et al. found similar evidence in their cognitive psychology study in 1991, discovering that children who speak directionally-ordered languages (e.g., left-to-right for English) associate ordering schema accordingly [44]. At the same time, newer perceptual effort models, supported by eye-tracking experiments, suggest the opposite: ascending bars require more effort to extract the minimum value than descending bars [34]. Regardless, this paper presents actionable recommendations for visualization designers who aim to draw attention to ranking-related messages.

Fig. 4: Experiment 2 Results. Tested conditions are shown across the top of the figure. Below, lines encode 95% CIs for the proportion of respondents that report each condition makes the given message most obvious. Circles encode the actual proportion observed from the experiment.

Fig. 5: Experiment 3 Results. Tested conditions are shown across the top of the figure. Below, lines encode 95% CIs for the proportion of respondents that report each condition makes the given message most obvious. Circles encode the actual proportion observed from the experiment.

3. In bar charts depicting part-to-whole data, messages concerning the whole(s) are made more obvious by stacking than by side-by-side arrangements (Fig. 4, Whole Comparisons).
4. In bar charts depicting part-to-whole data, messages concerning the part(s) are made more obvious by side-by-side arrangements than by stacking (Fig. 4, Part Comparisons).
5. In bar charts depicting part-to-whole data, messages concerning parts as percentages of the whole are sometimes made more obvious by stacking (Fig. 4, Proportions).

Takeaway 3 is hardly surprising, since comparing visualized sums is easier than trying to mentally sum parts and then compare. The same, but inverse, logic holds for Takeaway 4. More interestingly, Takeaway 5 finds messages about parts as a percentage of a whole are made equally, if not more, obvious by stacked bars over side-by-side bars. This holds true even when a side-by-side arrangement would, from a precision standpoint, more effectively facilitate said comparison over its stacked counterpart [17, 41]. Thus, we present initial evidence that precision and affordance (at least in the way it is operationalized as "obviousness" in our experiments) can diverge. In other words, a graph may lead to more precise comparisons and still be worse from an arrangement-message matching standpoint. This explanation is speculative, and requires additional empirical support. Currently our studies confound the number of colors, the number of objects to be compared, and the number of total groups. Additionally, some conditions use saturation differences (e.g., light purple and dark purple) while others use hue differences. While we doubt that these factors drive asymmetries in our results, our understanding could be better supported by experiments that are specifically designed to isolate the effect of the number of color hues.
Still, we remain excited about this speculative account because, while surprising, it is consistent with new models of color grouping and processing capacity [35, 55]. If this speculative reasoning holds, it would produce a clear design guideline: use color to distinguish between two groups, use either color or space for three categories, and use space to distinguish among four or more.

### Limitations

While the method with which we study visualization affordances presents many positive features (see Section 3.1), its confirmatory nature also restricts the scope of possible findings. As noted in the Material & Methods section, our findings must be digested with their restricted scope in mind. For example, Takeaway 2 (_Messages concerning smallest... bars are made the most obvious by bars sorted in ascending order from left to right._) holds in comparison to the other three orders tested, but it may not do so when compared to other bar chart arrangements. Fortunately, this limitation can be mitigated in part by pairing our experimental design with an exploratory method, as detailed in the Related Work section. Similar contextual restrictions surround our study population. We recruit participants who are fluent in English, over the age of 18, and currently reside in the US. Ordered language conventions could very likely influence findings [44], and the replication of our work with other populations is prudent before generalizing results on a global scale. Fortunately, due to the easily replicable and modifiable nature of our method, such experiments could be run affordably. Additionally, the results we present only consist of responses from participants who correctly filled in the drop-down of a given message. If a participant incorrectly filled in message A, their response to which chart made message A the most obvious was discarded, though all of their other responses were included. This exclusion has the potential to bias results towards an audience with high graphic literacy. But to maintain a high quality of data, such removal is necessary to ensure that analyzed participants are actually answering survey questions with care. Due to both of these considerations, we provide a comparison of all results with and without this exclusion in SM4 in the Supplemental Materials; no large differences are apparent between the two. Lastly, while we posit that affordance is an important metric in evaluating visualizations, the line between affordance and effectiveness is blurry. Can a graph make a desired message obvious but be ineffective? Or can a graph be effective but not make a desired message obvious? These are questions that need to be clarified, but doing so is difficult due to the lack of an agreed-upon definition of effectiveness in visualization (see Related Work for a summary of metrics). Our current intuition, advised by our presented findings, is that affordance should correlate with increased graph comprehension, reduced reading effort, and general viewer preference. Future work is warranted to investigate if strong arrangement-message matches lead to increased efficacy of a graph, perhaps through the use of response time and precision as Vessey's cognitive fit model suggests [45], or via other metrics like cognitive effort and memorability.

### Implications & Future Work

The studies we present firmly suggest that visual arrangements can directly impact the messages people perceive from a graph. That is, the various arrangements of identical marks in a graph can alter the strength of perceived messages.
While our experiments cover a limited set of visual arrangements and messages, they point to a number of implications, and compel the expansion of this work. To continue to build out academic and practical understanding of the effect of arrangements on afforded messages, our work can be extended as follows:

* _Study different arrangement variations_. Future works may maximize their impact by investigating properties that are generic enough to apply to a wide variety of visual representations. Candidate arrangements include: orientation, rotation, styling of negative marks, and visual linking through outlines or edges.
* _Study different message types_. We cover a small subset of potential messages afforded by visualizations. Further exploration of other messages could drastically expand our understanding of visualizations and what they communicate.
* _Study different visualization types_. Future work could also examine the arrangements studied in this paper (or an extension of them) with new types of graphs. Candidates include: spacing or ordering in pie charts or tree maps, color grouping in scatter plots or choropleths, and ordering in Sankey diagrams.
* _Study the relative strength of arrangements' affordance of messages_. Our methodology provides continuous, as opposed to binary, output, allowing researchers to investigate both whether arrangements afford a message, and also possible hierarchies of arrangements' affordance (e.g., spacing affords grouping \(>\) color affords grouping \(>\) shape affords grouping), as demonstrated in Experiment 4.

Lastly, the work presented in this paper has relevant implications for practitioners. This work provides infrastructure to build a "library" of visual arrangements and their afforded messages, which designers could use to inform and evaluate their visualizations. Practically, a designer could begin either with a desired message to communicate, or with a set of visualizations they want to narrow down, and use our framework, or a repository of results from our framework, to better understand the implications behind their designs. The same affordance library could be used as an evaluation tool to review existing visualizations. Existing designs could be evaluated so as to confirm that intended messages are conveyed strongly and, equally paramount, that unintended messages are not strongly communicated.

## 7 Conclusion

In this paper, we investigate how four different arrangements of marks - ordering, partitioning, spacing, and coloring - in bar charts afford messages on ranking, part-to-whole relationships, and grouping. We present a replicable, scalable, modifiable, confirmatory methodology for investigating arrangements of marks within visualizations and their relative impact on afforded messages. In our Related Work, we establish current methods of investigating visualization affordances and current understanding of bar charts to provide context for our findings. In our Discussion, we summarize our findings into nine key takeaways which provide insight for visualization designers, researchers, and educators on the affordance of messages when considering spatial and color arrangements of marks. We then contextualize said findings, comparing them to the closest existing research.
In summary, we provide two useful contributions: 1) four experiments resulting in nine takeaways on how bar chart arrangements afford various messages, and 2) the tools to continue this work through an easily scalable and modifiable method for evaluating visualization arrangements' impact on their afforded messages.

## Supplemental Materials

All supplemental materials are available on OSF at [https://osf.io/bvy95/files/osfstorage](https://osf.io/bvy95/files/osfstorage). In particular, they include (1-2) screenshots of the Qualtrics survey for posterity, (3) a table showing sample size for each tested message, (4) a side-by-side visual comparison of results excluding and including participants who answered a specific question incorrectly (does not apply to participants fully excluded from studies), (5) raw data files and a runnable Jupyter notebook with all analysis, (6) .csv files used in the visualized CIs for Figures 3 to 6, and (7) a comparison of our work to [54].

## Figure Credits

Figure 2 is a partial recreation of figures that appear in [10].

## Acknowledgments

The authors wish to thank Laura South, Myrl Marmarelis, and Sydney Purdue for their advice on statistical analysis. This work was supported in part by a grant from the National Science Foundation (Award #2236644).
2303.11644
The Cut Method on Hypergraphs for the Wiener Index
The cut method has been proved to be extremely useful in chemical graph theory. In this paper the cut method is extended to hypergraphs. More precisely, the method is developed for the Wiener index of $k$-uniform partial cube-hypergraphs. The method is applied to cube-hypergraphs and hypertrees. Extensions of the method to hypergraphs arising in chemistry which are not necessarily $k$-uniform and/or not necessarily linear are also developed.
Sandi Klavžar, Gašper Domen Romih
2023-03-21T07:37:35Z
http://arxiv.org/abs/2303.11644v1
# The Cut Method on Hypergraphs for the Wiener Index

###### Abstract

The cut method has been proved to be extremely useful in chemical graph theory. In this paper the cut method is extended to hypergraphs. More precisely, the method is developed for the Wiener index of \(k\)-uniform partial cube-hypergraphs. The method is applied to cube-hypergraphs and hypertrees. Extensions of the method to hypergraphs arising in chemistry which are not necessarily \(k\)-uniform and/or not necessarily linear are also developed.

\({}^{a}\) Faculty of Mathematics and Physics, University of Ljubljana, Slovenia [email protected] [email protected] \({}^{b}\) Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia \({}^{c}\) Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia

**Keywords:** hypergraph; Wiener index; cut method; partial cube-hypergraph; hypertree; phenylene; Clar structure

## 1 Introduction

The cut method, whose standard form was introduced in 1995 in [17], has had a remarkable response in chemical graph theory. The method, originally designed for the Wiener index of partial cubes, was later developed for many other topological indices and has undergone many generalizations to more general situations than partial cubes. This has led to many applications in mathematical chemistry, where topological indices play an important role. The basic idea is to first find a partition of the edges of a (molecular) graph and, by removing parts of this partition, construct smaller (weighted) graphs, called quotient graphs. After that, we infer back to the original graph from the quotient graphs. The state of research on the cut method up to 2015 is summarized in the survey article [18]. The method is still the subject of ongoing research, see [1, 3, 4, 6, 11, 31, 32] as well as references therein. Hypergraphs form a structure that greatly generalizes the concept of a graph. In chemical graph theory, the standard method of representing molecules is by means of associated (chemical) graphs. However, some molecules are more complicated than others and sometimes it is more convenient and more adequate to represent them as hypergraphs, see [14, 20] for some chemical problems dealing with hypergraph theory. As a result, various problems of importance in mathematical chemistry have been investigated on hypergraphs, including spectral aspects [2, 23, 29] and different topological indices [33, 34]. Very recently, while investigating molecular representations in drug design, a hypergraph-based topological framework was designed to characterize molecular structures and interactions at the atomic level [25]. Interestingly, in the very same year when the cut method was introduced, Burosch and Ceccherini published the paper [7] on isometric embeddings into hypergraphs, which is the second main source for the present paper. The Wiener index is one of the most researched topics in the whole field of chemical graph theory. As already mentioned, the cut method was first designed for the Wiener index of graphs. In the last few years, the Wiener index has received a lot of attention also on hypergraphs. In [30] the authors investigate, among other topics, 3-uniform paths and lower bounds on the Wiener index of \(k\)-uniform hypergraphs. In [12, 22] hypergraphs are constructed from trees and their Wiener index investigated. The effect of some transformations on the Wiener index of a hypergraph and extremal hypertrees with respect to the Wiener index are studied in [24].
The \(k\)-uniform unicyclic hypergraphs with maximum/minimum and second maximum/minimum Wiener index are determined in [35], while the Wiener index of some composite hypergraphs and sunflower hypergraphs is the topic of [5]. Finally, in [8] the concept of the \(k\)-Wiener index is introduced and studied on the so-called \(k\)-plex hypergraphs. We proceed as follows. In the next section we introduce the mathematical machinery on hypergraphs needed later on. In particular, partial cube-hypergraphs are defined and their characterizations recalled. In Section 3 we develop the cut method for the Wiener index of a hypergraph. In the last section we provide applications and extensions of the cut method including cube-hypergraphs, hypertrees, and the so-called linear phenylene hypergraphs.

## 2 Preliminaries

In this section, we set the scene for the hypergraph cut method. In the first part, we introduce the necessary concepts about hypergraphs, focusing on distance and their Cartesian products. We then introduce partial cube-hypergraphs on which the cut method will operate and recall two of their characterizations.

### 2.1 Hypergraphs

A hypergraph \(H=(V(H),E(H))\) has the vertex set \(V(H)\) and the edge set \(E(H)\), where each edge \(e\in E(H)\) is a non-empty subset of \(V(H)\). \(H\) is \(k\)_-uniform_ if the size of every edge \(e\in E(H)\) is \(k\), and is _linear_ if \(|e\cap e^{\prime}|\leq 1\) for every \(e,e^{\prime}\in E(H)\), \(e\neq e^{\prime}\). Let \(H\) and \(H^{\prime}\) be hypergraphs. If \(V(H^{\prime})\subseteq V(H)\) and \(E(H^{\prime})\subseteq E(H)\), we say that \(H^{\prime}\) is a _subhypergraph_ of \(H\) and write \(H^{\prime}\subseteq H\). Clearly, if \(H\) is \(k\)-uniform, then \(H^{\prime}\) is also \(k\)-uniform. If \(F\subseteq E(H)\), then \(H-F\) denotes the subhypergraph of \(H\) obtained from \(H\) by removing all the edges from \(F\). Let \(u\) and \(v\) be different vertices of \(H\). A \(u,v\)_-path of length \(s\geq 1\)_ in \(H\) is a sequence \(u_{0}=u,e_{1},u_{1},\ldots,e_{s},u_{s}=v\), where the \(u_{i}\) are pairwise different vertices, the \(e_{i}\) are pairwise different edges, and \(\{u_{i-1},u_{i}\}\subseteq e_{i}\) for \(i\in[s]=\{1,\ldots,s\}\). The _distance_ \(d_{H}(u,v)\) between vertices \(u\) and \(v\) is the length of a shortest \(u,v\)-path. We also set \(d_{H}(u,u)=0\). A subhypergraph \(H^{\prime}\subseteq H\) is _isometric_ if \(d_{H^{\prime}}(u,v)=d_{H}(u,v)\) holds for all \(u,v\in V(H^{\prime})\). We further say that a set of vertices \(X\subseteq V(H)\) is _convex_ in \(H\) if for every \(x,y\in X\) and every \(z\in V(H)\), the equality \(d_{H}(x,z)+d_{H}(z,y)=d_{H}(x,y)\) implies \(z\in X\). The _Wiener index_ of a hypergraph \(H\) is defined as the sum of the distances between all unordered pairs of vertices of \(H\), that is,

\[W(H)=\sum_{\{u,v\}\in\binom{V(H)}{2}}d_{H}(u,v).\]

The _Cartesian product_ \(H\,\square\,H^{\prime}\) of hypergraphs \(H\) and \(H^{\prime}\) is a hypergraph with the vertex set \(V(H)\times V(H^{\prime})\) and the edge set

\[\{\{u\}\times e^{\prime}:\ u\in V(H),\ e^{\prime}\in E(H^{\prime})\}\cup\{e\times\{u^{\prime}\}:\ e\in E(H),\ u^{\prime}\in V(H^{\prime})\}.\]

Just as Cartesian products of graphs, Cartesian products of hypergraphs have several nice properties, cf. [15, 16]. In particular, if \(H\) and \(H^{\prime}\) are \(k\)-uniform hypergraphs, then \(H\,\square\,H^{\prime}\) is also \(k\)-uniform, and the Cartesian product operation is associative.
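To make these definitions concrete, the following is a minimal Python sketch (our own illustration; all function names are ours and nothing here comes from [7] or an existing library) of hypergraph distances via breadth-first search, the Wiener index, and the Cartesian product, assuming vertices are hashable and edges are given as frozensets.

```python
from collections import deque
from itertools import combinations

def distance(edges, u, v):
    """Length of a shortest u,v-path: BFS in which the neighbours of x are
    all vertices sharing a hyperedge with x (each hyperedge costs one step)."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return dist[x]
        for e in edges:
            if x in e:
                for y in e:
                    if y not in dist:
                        dist[y] = dist[x] + 1
                        queue.append(y)
    raise ValueError("u and v lie in different components")

def wiener_index(vertices, edges):
    """W(H): the sum of distances over all unordered pairs of vertices."""
    return sum(distance(edges, u, v)
               for u, v in combinations(list(vertices), 2))

def cartesian_product(V1, E1, V2, E2):
    """The Cartesian product of two hypergraphs, per the definition above."""
    V = {(u, w) for u in V1 for w in V2}
    E = [frozenset((u, w) for w in e2) for u in V1 for e2 in E2]
    E += [frozenset((w, u) for w in e1) for e1 in E1 for u in V2]
    return V, E

# Quick check: Q_3 is a single 3-vertex edge, so W(Q_3) = C(3,2) = 3, and
# the 3-uniform 2-cube Q_3 x Q_3 has Wiener index 2*C(3,2)*3^2 = 54,
# in agreement with Proposition 4.1 below.
V3, E3 = {0, 1, 2}, [frozenset({0, 1, 2})]
assert wiener_index(V3, E3) == 3
assert wiener_index(*cartesian_product(V3, E3, V3, E3)) == 54
```

This brute-force evaluation is meant only to mirror the definitions, not to be efficient; each distance query rescans the whole edge list.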
For \(k\geq 2\), let \(\mathcal{Q}_{k}\) denote the hypergraph with \(k\) vertices and a single edge containing all the vertices. For \(n\geq 1\), the \(k\)_-uniform \(n\)-cube_ \(\mathcal{Q}_{k}^{n}\) is the Cartesian product of \(n\) copies of \(\mathcal{Q}_{k}\). See Fig. 1, where \(\mathcal{Q}_{3}^{1}\), \(\mathcal{Q}_{3}^{2}\), and \(\mathcal{Q}_{3}^{3}\) are presented. The \(k\)-uniform \(n\)-cube \({\cal Q}_{k}^{n}\) can be equivalently described as follows. Its vertex set is \(\{0,1,\ldots,k-1\}^{n}\) and an edge consists of all \(n\)-tuples which coincide on \(n-1\) coordinates while the remaining coordinate ranges over \(\{0,1,\ldots,k-1\}\). It follows that \(|V({\cal Q}_{k}^{n})|=k^{n}\) and \(|E({\cal Q}_{k}^{n})|=n\cdot k^{n-1}\). Note that \({\cal Q}_{2}^{n}\) is a 2-uniform hypergraph which, as a graph, is known as the \(n\)-cube.

### 2.2 Partial cube-hypergraphs

A \(k\)-uniform hypergraph \(H\) is a _partial cube-hypergraph_ if \(H\) is an isometric subhypergraph of some \({\cal Q}_{k}^{n}\). A hypergraph \(H\) is _edge-gated_ if for any edge \(e=\{a_{1},\ldots,a_{k}\}\in E(H)\) and any vertex \(x\in V(H)\) there exists \(j\in[k]\) such that \(d_{H}(x,a_{i})=d_{H}(x,a_{j})+1\) for \(i\in[k]\), \(i\neq j\). We also say that \(a_{j}\) is the _gate_ of \(x\) in \(e\). Note that if \(x\in e\), then \(x\) is its own gate in \(e\). It is easy to see that a 2-uniform hypergraph (alias graph) \(H\) is edge-gated if and only if \(H\) is bipartite. For this reason, edge-gated hypergraphs were named bipartite hypergraphs in [7], where this concept was originally introduced. However, since there are numerous ways in which bipartite graphs can be extended to hypergraphs, we decided to change the terminology. The present terminology also mimics the established graph terminology, cf. [10]. It is easy to see that if hypergraphs \(H\) and \(H^{\prime}\) are both edge-gated, then so is \(H\,\square\,H^{\prime}\). Also, if \(H^{\prime}\) is a connected isometric subhypergraph of an edge-gated hypergraph \(H\), then \(H^{\prime}\) is edge-gated as well. It follows that partial cube-hypergraphs, and hence in particular \(k\)-uniform \(n\)-cubes, are edge-gated.

Figure 1: Cube-hypergraphs

If \(x\) and \(y\) are two (adjacent) vertices of a hypergraph \(H\), then let \(H(x,y)\) denote the set of vertices that are closer to \(x\) than to \(y\), that is,

\[H(x,y)=\{z\in V(H):\ d_{H}(z,x)<d_{H}(z,y)\}.\]

Further, if \(e=\{a_{1},\ldots,a_{k}\}\in E(H)\), then let

\[H(a_{i},e)=\{z\in V(H):\ d_{H}(z,a_{i})<d_{H}(z,a_{j}),\ j\neq i\}.\]

In addition, set

\[H_{e}=\{H(a_{1},e),\ldots,H(a_{k},e)\}.\]

Let now \(H\) be an edge-gated hypergraph and \(e=\{a_{1},\ldots,a_{k}\}\in E(H)\). Since \(a_{i}\in H(a_{i},e)\), we have the following important facts.

**Lemma 2.1**.: [7, Lemma 1(ii), Lemma 2] _If \(H\) is an edge-gated hypergraph and \(e=\{a_{1},\ldots,a_{k}\}\in E(H)\), then the following statements hold._

1. \(H_{e}\) _is a partition of_ \(V(H)\)_._
2. _If_ \(e^{\prime}\in E(H)\)_, then either_ \(|e^{\prime}\cap H(a_{i},e)|=1\) _for all_ \(i\in[k]\) _or_ \(e^{\prime}\subseteq H(a_{i},e)\) _for some_ \(i\in[k]\)_._

We next recall the following key definition from [7]. If \(H\) is a hypergraph, then the binary relation \(\Theta\) is defined on \(E(H)\) as follows:

\[e\,\Theta\,e^{\prime}\ \equiv\ \forall A\in H_{e}:\ e^{\prime}\cap A\neq\emptyset.\]

Note first that for any edge \(e\in E(H)\) we have \(e\,\Theta\,e\). If \(H\) is edge-gated, then \(\Theta\) is also symmetric by Lemma 2.1(ii).
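Continuing the illustrative sketch from Section 2.1 (and reusing its `distance` and `cartesian_product` helpers together with `V3` and `E3`), the sets \(H(a_i,e)\) and the relation \(\Theta\) translate directly into code; the names below are again our own.

```python
def closer_sets(vertices, edges, e):
    """H_e: for each a in e, the set H(a, e) of vertices strictly closer
    to a than to every other vertex of e."""
    return {a: {z for z in vertices
                if all(distance(edges, z, a) < distance(edges, z, b)
                       for b in e if b != a)}
            for a in e}

def theta(vertices, edges, e, f):
    """e Theta f iff f intersects every member of H_e."""
    return all(f & block for block in closer_sets(vertices, edges, e).values())

# On the 3-uniform 2-cube built earlier, two edges are Theta-related exactly
# when they vary the same coordinate:
V, E = cartesian_product(V3, E3, V3, E3)
e_col = frozenset((i, 0) for i in range(3))  # varies the first coordinate
e_par = frozenset((i, 1) for i in range(3))  # parallel to e_col
e_ort = frozenset((0, j) for j in range(3))  # varies the second coordinate
assert theta(V, E, e_col, e_par) and not theta(V, E, e_col, e_ort)
```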
Moreover, we recall the following important fact.

**Lemma 2.2**.: [7, Lemma 3] _If \(H\) is an edge-gated hypergraph and for every \(e\in E(H)\), every \(A\in H_{e}\) is convex, then \(f\,\Theta\,f^{\prime}\) if and only if \(H_{f}=H_{f^{\prime}}\)._

For hypergraphs which fulfil the conditions of Lemma 2.2, the relation \(\Theta\) is an equivalence relation, where the transitivity is guaranteed by Lemma 2.2. Partial cube-hypergraphs which are \(k\)-uniform can now be characterized as follows.

**Theorem 2.3**.: [7, Theorem 1] _A \(k\)-uniform hypergraph \(H\) is a partial cube-hypergraph if and only if \(H\) is edge-gated and for every \(e\in E(H)\), every \(A\in H_{e}\) is convex._

**Theorem 2.4**.: [7, Theorem 2] _A \(k\)-uniform hypergraph \(H\) is a partial cube-hypergraph if and only if \(H\) is edge-gated and \(\Theta\) is transitive._

## 3 Cut method for hypergraphs

We now have all the tools needed for the main theorem of this article. But before we can formulate it, we need two additional auxiliary results and the following concepts. If \(H\) is a connected hypergraph, then \(F\subseteq E(H)\) is a _cut_ if the edges from \(F\) are pairwise disjoint and \(H-F\) consists of at least two components. We further say that the cut \(F\) is a _convex cut_ if the vertex set of each component of \(H-F\) is a convex set. Let \(H\) be a \(k\)-uniform partial cube-hypergraph. Theorems 2.3 and 2.4 imply that the \(\Theta\) relation is an equivalence relation on \(E(H)\). We will denote its equivalence classes by \(F_{1},\ldots,F_{m}\). In addition, if \(e\in E(H)\), then the equivalence class with the representative \(e\) will also be denoted by \(F_{e}\), that is, \(F_{e}=\{f\in E(H)\ :\ e\,\Theta\,f\}\). From Lemma 2.2 we infer that the hypergraph \(H-F_{e}\) consists of components whose vertex sets are precisely the sets from \(H_{e}\). This yields the following important fact.

**Proposition 3.1**.: _Let \(H\) be a \(k\)-uniform partial cube-hypergraph and let \(e\in E(H)\). Then \(H-F_{e}\) has exactly \(k\) components._

We also need the following auxiliary result.

**Proposition 3.2**.: _Let \(H\) be a \(k\)-uniform partial cube-hypergraph and let \(e\in E(H)\). If \(u\) and \(v\) are vertices from different components of \(H-F_{e}\), then every shortest \(u,v\)-path contains exactly one edge from \(F_{e}\)._

Proof.: By Proposition 3.1, \(H-F_{e}\) contains \(k\) components which we denote by \(H_{1},\ldots,H_{k}\). We may without loss of generality assume that \(u\in H_{1}\) and \(v\in H_{k}\). Furthermore, let \(F_{e}=\{e_{1},\ldots,e_{\ell}\}\). By Lemma 2.1(ii) the vertices \(u_{i}\) and \(v_{i}\) defined as

\[\{u_{i}\}=V(H_{1})\cap e_{i}\quad\text{and}\quad\{v_{i}\}=V(H_{k})\cap e_{i}\]

are well-defined for every \(i\in[\ell]\). From the edge-gated property of \(H\) it follows that \(d_{H}(u_{i},v)=d_{H}(v_{i},v)+1\). Since every \(u,v\)-path contains at least one of the vertices \(u_{i}\), every shortest \(u,v\)-path contains exactly one of the edges \(e_{i}\), \(i\in[\ell]\).

Let \(H\) be a \(k\)-uniform partial cube-hypergraph and let \(F_{1},\ldots,F_{m}\) be its \(\Theta\)-classes. By Proposition 3.1, \(H-F_{i}\) has \(k\) components; we denote them in the sequel by \(H_{1}(F_{i}),\ldots,H_{k}(F_{i})\). Set in addition

\[n_{j}(F_{i})=|V(H_{j}(F_{i}))|,\ j\in[k],\ i\in[m]. \tag{1}\]

The cut method for hypergraphs now reads as follows.
**Theorem 3.3**.: _If \(H\) is a \(k\)-uniform partial cube-hypergraph, \(F_{1},\ldots,F_{m}\) are its \(\Theta\)-classes, and the integers \(n_{j}(F_{i})\) are defined as in (1), then_ \[W(H)=\sum_{i=1}^{m}\sum_{\{j,j^{\prime}\}\in\binom{[k]}{2}}n_{j}(F_{i})\cdot n_{j^{\prime}}(F_{i})\,.\]

Proof.: Since \(F_{1},\ldots,F_{m}\) form a partition of \(E(H)\), the idea is to consider the contribution of each \(\Theta\)-class to \(W(H)\). Consider arbitrary vertices \(u\) and \(v\) of \(H\) and an arbitrary shortest \(u,v\)-path \(P\). By Proposition 3.2, the edges of \(P\) pairwise lie in different \(\Theta\)-classes of \(E(H)\). If \(e\) is an edge of \(P\), then the contribution of \(F_{e}\) to the distance \(d_{H}(u,v)\) is exactly \(1\). Consequently, the contribution of \(F_{e}\) to \(W(H)\) is exactly \[\sum_{\{j,j^{\prime}\}\in\binom{[k]}{2}}n_{j}(F_{e})\cdot n_{j^{\prime}}(F_{e}).\] Summing over all \(\Theta\)-classes, the result follows.

## 4 Some applications

In this section we give some examples and applications of Theorem 3.3.

### Cube-hypergraphs

Cube-hypergraphs are partial cube-hypergraphs by definition. Hence Theorem 3.3 applies to them and leads to the following result.

**Proposition 4.1**.: _If \(n\geq 1\) and \(k\geq 2\), then_ \[W(\mathcal{Q}_{k}^{n})=n\binom{k}{2}k^{2(n-1)}.\]

Proof.: To apply Theorem 3.3, we first determine the \(\Theta\)-classes of \(\mathcal{Q}_{k}^{n}\). Let an edge \(e\in E(\mathcal{Q}_{k}^{n})\) be of the form \(\{a_{i}=(i,0,\ldots,0)\ :\ i\in\{0,1,\ldots,k-1\}\}\). Then \(H(a_{i},e)\) contains the vertices \((i,v_{2},\ldots,v_{n})\), where \((v_{2},\ldots,v_{n})\in\{0,1,\ldots,k-1\}^{n-1}\). By Theorem 2.3, the sets \(H(a_{i},e)\) are convex, and the subhypergraphs induced by them are isomorphic to \(\mathcal{Q}_{k}^{n-1}\). The \(\Theta\)-class \(F_{e}=F_{1}\) then contains all the edges whose last \(n-1\) coordinates are fixed and whose first coordinate ranges from \(0\) to \(k-1\). Using the same reasoning we get that every \(\Theta\)-class is of the above form. Therefore \(\mathcal{Q}_{k}^{n}\) has \(\Theta\)-classes \(F_{1},\ldots,F_{n}\), where \(\mathcal{Q}_{k}^{n}-F_{i}\) has components isomorphic to \(\mathcal{Q}_{k}^{n-1}\) for \(i\in[n]\). It then follows that \(n_{j}(F_{i})=k^{n-1}\) for every \(j\in[k]\) and \(i\in[n]\). From Theorem 3.3 it follows that \[W(\mathcal{Q}_{k}^{n})=\sum_{i=1}^{n}\sum_{\{j,j^{\prime}\}\in\binom{[k]}{2}}k^{n-1}\cdot k^{n-1}=n\binom{k}{2}k^{2(n-1)},\] which we wanted to show.

Setting \(k=2\), the hypergraph \(\mathcal{Q}_{2}^{n}\) is the \(n\)-cube graph \(Q_{n}\), and Proposition 4.1 implies the well-known result \(W(Q_{n})=n4^{n-1}\), which can in particular be deduced from the formula for the Wiener index of Cartesian products [13].
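Theorem 3.3 and Proposition 4.1 are easy to confirm numerically on small cubes. Reusing the helpers from the two sketches above, the following lines compare the cut-method sum with a brute-force Wiener index:

```python
from math import comb
from itertools import combinations

def wiener(vertices, edges):
    """Brute-force Wiener index: sum of BFS distances over all vertex pairs."""
    total = 0
    for i, u in enumerate(vertices):
        d = distances_from(u, vertices, edges)
        total += sum(d[v] for v in vertices[i + 1:])
    return total

def wiener_by_cuts(vertices, edges):
    """Theorem 3.3: sum n_j(F_i) * n_j'(F_i) over Theta-classes and part pairs."""
    classes, part_of = theta_classes(vertices, edges)
    total = 0
    for F in classes:
        sizes = [len(A) for A in part_of[next(iter(F))]]  # the n_j(F_i) of Eq. (1)
        total += sum(a * b for a, b in combinations(sizes, 2))
    return total

for k, n in [(2, 3), (3, 2), (4, 2)]:
    V, E = cube_hypergraph(k, n)
    assert wiener(V, E) == wiener_by_cuts(V, E) == n * comb(k, 2) * k**(2 * (n - 1))
```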
### Hypertrees

A hypergraph \(T\) is a _hypertree_ if it is connected, linear, and has no cycles. Here a _cycle_ in a hypergraph is defined just as we defined a path, except that the first and the last vertex from the corresponding sequence coincide. A hypertree which is linear and \(k\)-uniform is a partial cube-hypergraph in which every edge \(e\) is its own \(\Theta\)-class. Hence Theorem 3.3 as a special case yields the following result.

**Corollary 4.2**.: _If \(T\) is a \(k\)-uniform hypertree, then_ \[W(T)=\sum_{e\in E(T)}\sum_{\{j,j^{\prime}\}\in\binom{[k]}{2}}n_{j}(e)\cdot n_{j^{\prime}}(e),\] _where \(n_{j}(e)=n_{j}(F_{e})\)._

Actually, Corollary 4.2 also holds if we do not require that the hypertree is uniform. To this end, one just needs to reformulate Proposition 3.1 such that its conclusion asserts that for any edge \(e\in E(T)\), the hypergraph \(T-e\) has exactly \(|e|\) components. Moreover, the second key auxiliary result, Proposition 3.2, also holds because in a hypertree there is a unique shortest path between any two vertices. In this way Corollary 4.2 extends to

**Theorem 4.3**.: [28, Theorem 3] _If \(T\) is a hypertree, then_ \[W(T)=\sum_{e\in E(T)}\sum_{\{j,j^{\prime}\}\in\binom{[|e|]}{2}}n_{j}(e)\cdot n_{j^{\prime}}(e).\]

As an example, consider the hypertree \(T_{1}\) from Figure 2. The hypertree \(T_{1}\) has seven vertices and four edges. We now apply Theorem 4.3. For instance, consider the edge \(e=\{a_{1},a_{2},a_{3}\}\) as shown in the figure. Then \(n_{1}(e)=2\), \(n_{2}(e)=1\), and \(n_{3}(e)=4\). Therefore the contribution of \(e\) to the formula of Theorem 4.3 is \(2\cdot 1+1\cdot 4+2\cdot 4\). Doing similar computations for the other three edges (see the bottom line of Fig. 2) we get \[W(T_{1})=1\cdot 6+(2\cdot 1+1\cdot 4+2\cdot 4)+(5\cdot 1+5\cdot 1+1\cdot 1)+6\cdot 1=37.\]

Figure 2: Hypertree \(T_{1}\).

A limitation of Theorem 4.3 is that it only works for linear hypertrees. On the other hand, there exist many different definitions of _acyclicity_ in hypergraphs, some of which also allow for non-linear hypergraphs. See for example [27]. We next show with an example that the main idea of Theorem 4.3 can sometimes be generalized to such cases as well.

Define the _linear phenylene_ hypergraphs \(LP_{n}\), \(n\geq 2\), as follows. (For some recent studies of phenylenes in mathematical chemistry see [9, 19, 21, 26].) \(LP_{n}\) has vertex set \([6n]\). It has \(2n-1\) hyperedges. The first \(n\) of them are of the form \(\{6i+1,6i+2,\ldots,6i+6\}\), where \(i\in\{0,1,\ldots,n-1\}\), and the remaining \(n-1\) hyperedges are of the form \(\{6i+5,6i+6,6i+7,6i+8\}\), where \(i\in\{0,1,\ldots,n-2\}\). In Figure 3 the hypergraph \(LP_{4}\) is drawn.

Figure 3: Hypergraph \(LP_{4}\).

It is easy to see that every edge \(e\in E(LP_{n})\) is a convex cut with the following property. Taking any two vertices \(u,v\) from different components of \(LP_{n}-e\), every shortest \(u,v\)-path contains \(e\) (exactly once). Note, however, that the two vertices which lie in the intersection of two hyperedges are not separated by any of the cuts. But it is clear that the distance between two such vertices is \(1\). Together there are \(2(n-1)\) such pairs, and this number therefore needs to be added to the Wiener index afterwards.

The contributions of the cuts are computed as follows. Removing an edge of the form \(\{6i+1,6i+2,\ldots,6i+6\}\), where \(i\in[n-2]\), produces four components, where two of them contain a single vertex and the remaining two have \(6i+2\) and \(6(n-i-1)+2\) vertices, respectively. The cases when \(i=0\) or \(i=n-1\) give five components each, four of them containing a single vertex, while the remaining one contains \(6n-4\) vertices. On the other hand, removing an edge of the form \(\{6i+5,6i+6,6i+7,6i+8\}\) produces two components with \(6(i+1)\) and \(6(n-i-1)\) vertices, respectively. Therefore, the contribution of all these cuts to the Wiener index of \(LP_{n}\) for \(n>1\) is \[\begin{split}&\sum_{i=1}^{n-2}\left[2\bigl(6i+2+6(n-i-1)+2\bigr)+(6i+2)(6n-6i-4)+1\right]\\ &\quad+2\left(\binom{4}{2}+4(6n-4)\right)\\ &\quad+\sum_{i=0}^{n-2}6(i+1)\cdot 6(n-i-1)=12n^{3}+6n^{2}-5n+2,\end{split}\] where the second line above comes from the contribution of the first and the last hyperedges containing six vertices.

Adding to this expression the contribution \(2(n-1)\) from the previous paragraph and performing a straightforward computation, we arrive at the following result.

**Proposition 4.4**.: _If \(n\geq 2\), then \(W(LP_{n})=12n^{3}+6n^{2}-3n\)._
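The above computation can be cross-checked by brute force. The sketch below (again reusing the earlier `wiener` and `distances_from` helpers) builds \(LP_{n}\) from its definition and confirms both the displayed cut contribution and the closed form of Proposition 4.4:

```python
def linear_phenylene(n):
    """LP_n on vertex set [6n]: n hexagonal 6-edges and n-1 overlapping 4-edges."""
    vertices = list(range(1, 6 * n + 1))
    hexes = [frozenset(range(6 * i + 1, 6 * i + 7)) for i in range(n)]
    squares = [frozenset(range(6 * i + 5, 6 * i + 9)) for i in range(n - 1)]
    return vertices, set(hexes + squares)

for n in range(2, 6):
    V, E = linear_phenylene(n)
    w = wiener(V, E)
    assert w - 2 * (n - 1) == 12 * n**3 + 6 * n**2 - 5 * n + 2  # cut contribution
    assert w == 12 * n**3 + 6 * n**2 - 3 * n                    # Proposition 4.4
```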
### A more elaborate example

The cut method as developed in Section 3 assumes that the hypergraph is a \(k\)-uniform partial cube-hypergraph, which is in general a strong assumption. We have just demonstrated in Section 4.2 that the method can be extended also when the hypergraph is not a \(k\)-uniform partial cube-hypergraph, provided that Propositions 3.1 and 3.2 remain valid. In the subsequent example we further elaborate this idea on a molecular hypergraph \(H\) of a Clar structure, which is shown in Figure 4(a) and in [14, Fig. 3].

There are two different types of cuts in \(H\). The cut of type I consists of the central 6-edge and three 2-edges that do not intersect it, as can be seen in Figure 4(b). A cut of type II consists of a non-central 6-edge and its opposite 2-edge, as can be seen in Figure 4(c). Both cuts are convex, and the conclusion of Proposition 3.2 also holds. This, together with the fact that \(E(H)\) partitions into one cut of type I and six cuts of type II, allows us to use the cut method to calculate the Wiener index of \(H\) as \[W(H)=\binom{6}{2}7\cdot 7+6\left(\binom{4}{2}+4\cdot 7+4\cdot 31+7\cdot 31\right)=2985.\]

Figure 4: Hypergraph \(H\) and its convex cuts.

## Acknowledgements

This work has been financially supported by the Slovenian Research Agency (research core funding P1-0297 and projects J1-2452 and N1-0285).

## Declaration of interests

The authors declare that they have no conflict of interest.

## Data availability

Our manuscript has no associated data.
2302.10673
A Framework for UAV-based Distributed Sensing Under Half-Duplex Operation
This paper proposes an unmanned aerial vehicle (UAV)-based distributed sensing framework that uses orthogonal frequency-division multiplexing (OFDM) waveforms to detect the position of a ground target under half-duplex operation. The area of interest, where the target is located, is sectioned into a grid of cells, where the radar cross-section (RCS) of every cell is jointly estimated by the UAVs, and a central node acts as a fusion center by receiving all the estimations and performing information-level fusion. For local estimation at each UAV, the periodogram approach is utilised, and a digital receive beamformer is assumed. The fused RCS estimates of the grid are used to estimate the cell containing the target. Monte Carlo simulations are performed to obtain the detection probability of the proposed framework, and our results show that the proposed framework attains improved accuracy for the detection of a target compared with other OFDM bi-static radar approaches proposed in the literature.
Xavier A. Flores Cabezas, Diana P. Moya Osorio, Markku Juntti
2023-02-21T13:45:27Z
http://arxiv.org/abs/2302.10673v1
# A Framework for UAV-based Distributed Sensing Under Half-Duplex Operation

###### Abstract

This paper proposes an unmanned aerial vehicle (UAV)-based distributed sensing framework that uses orthogonal frequency-division multiplexing (OFDM) waveforms to detect the position of a ground target under half-duplex operation. The area of interest, where the target is located, is sectioned into a grid of cells, where the radar cross-section (RCS) of every cell is jointly estimated by the UAVs, and a central node acts as a fusion center by receiving all the estimations and performing information-level fusion. For local estimation at each UAV, the periodogram approach is utilised, and a digital receive beamformer is assumed. The fused RCS estimates of the grid are used to estimate the cell containing the target. Monte Carlo simulations are performed to obtain the detection probability of the proposed framework, and our results show that the proposed framework attains improved accuracy for the detection of a target compared with other OFDM bi-static radar approaches proposed in the literature.

distributed sensing, unmanned aerial vehicle network, integrated sensing and communications

## I Introduction

Toward the sixth generation of wireless networks (6G), a number of exciting applications will benefit from sensing services provided by future perceptive networks, where sensing capabilities are integrated in the communication network. Since the communication network infrastructure is already deployed with multiple interconnected nodes, a multi-static sensory mesh can be enabled and exploited to improve the performance of the network itself [1]. Therefore, the joint communications and sensing (JCAS) concept has emerged as an enabler for an efficient use of radio resources for both communications and sensing purposes, where the high frequency bands that are expected to be available in 6G can favor very accurate sensing based on radar-like technology [2]. Relying on the coordination of the network and on distributed processing, sensing signals can be transmitted from one node, and the reflections on the environment can be received at multiple nodes in a coordinated manner [2]. Thus, distributed multi-static sensing approaches can improve sensing accuracy while alleviating the need for full-duplex operation at sensing nodes. In this context, the high-gain directional beams provided by beamforming in multiple-input multiple-output (MIMO) and massive MIMO systems, which are essential for the operation of communication systems at higher frequencies, will also be exploited for improving sensing by considering distributed implementations [3, 4].

In multi-static MIMO radar settings, the synchronization among sensing nodes is crucial; this issue has thus motivated the study of the feasibility of synchronization. For instance, a synchronization loop using in-band full duplex (IBFD) was demonstrated for a system with two MIMO satellites sensing two ground targets in [4]. Additionally, multicarrier signals such as orthogonal frequency-division multiplexing (OFDM) waveforms have proven to provide several advantages for use in JCAS systems, including independence from the transmitted user data, high dynamic range, the possibility to estimate the relative velocity, and efficient implementation based on fast Fourier transforms [5]. For instance, uplink OFDM 5G New Radio (NR) waveforms have been effectively used for indoor environment mapping in [6].
Therein, a prototype full-duplex transceiver was used to perform range-angle chart estimation and dynamic tracking via an extended Kalman filter. Moreover, the capabilities of distributed sensing systems can be further extended by relying on the advantages of flexible nodes such as unmanned aerial vehicles (UAVs), which have already raised significant attention for their applicability in numerous scenarios and even in harsh environments [7]. Therefore, UAVs have already been considered for sensing purposes [8, 9, 10]. For instance, in [8], UAVs are explored to perform simultaneous jamming and sensing of UAVs acting as eavesdroppers by exploiting the jamming signals for sensing purposes. Therein, sensing information is used to perform optimal online resource allocation to maximise the number of securely served users, constrained by the requirements on the information leakage to the eavesdropper and the data rate to the legitimate users. Besides, in [9], a UAV-based distributed radar is proposed to perform distributed sensing to locate and track malicious UAVs using frequency modulated continuous wave (FMCW) waveforms. It was shown that the mobility and distributed nature of the UAV-based radar benefit the accuracy for tracking mobile nodes compared with a fixed radar. However, it does not make complete use of its distributed nature, as each UAV performs local sensing accounting for only the sensing information of its neighbouring UAVs, and there is no consideration of communication tasks. In the context of JCAS, in [10], a general framework for a full-duplex JCAS UAV network is proposed, where area-based metrics are developed considering sensing and communication parameters of the system and sensing requirements. This work uses full-duplex operation for local sensing at each UAV while considering reflections from other UAVs as interference.

Different from previous works, and considering the complexity of full-duplex systems, this work focuses on half-duplex operation and proposes a framework for performing grid-based distributed sensing relying on the coordination of multiple UAVs to sense a ground target located in an area of interest. It is considered that MIMO UAVs employ OFDM waveforms and that digital beamforming is implemented at the receiver side. A periodogram is used for the estimation of the radar cross-section (RCS) of each cell in the grid, leveraging the knowledge of the geometry of the system. The RCS estimation is performed by all of the non-transmitting UAVs simultaneously, while one UAV is illuminating a certain sub-area of the grid. This process is performed until all UAVs have illuminated their respective sub-areas; then all UAVs report the measured RCS per cell on the grid to a UAV acting as a fusion center (FC), which performs information-level fusion. This process allows half-duplex operation in a distributed sensing setting.

## II System Model

Consider the system depicted in Fig. 1, where a single point-like target of RCS \(\sigma_{\mathrm{T}}\) is positioned on a square area \(S\) of \(\ell\) meters of side length. \(U\) UAVs are deployed (for simplicity and insightfulness) at a common altitude \(h\) and are coordinated to perform distributed sensing to locate the ground target. Each UAV \(u\) in the set of all UAVs \(\mathcal{U}\), with \(u\in\mathcal{U}\), is positioned at coordinates \(\mathbf{r}_{u}=(x_{u},y_{u},h)\), with \(|\mathcal{U}|=U\). Also, the RCS of a ground cell is denoted by \(\sigma_{\mathrm{G}}\).
Similar to [10], it is assumed that each UAV has two arrays of antennas, namely a square uniform planar array (UPA) (mounted facing downward) for sensing and a uniform linear array (ULA) (mounted horizontally) to communicate with the FC for information fusion and coordination tasks. The square UPA consists of \(n\times n\) isotropic antenna elements spaced \(\lambda/2\) from each other, where \(\lambda=c_{0}/f_{0}\) is the wavelength of the signal, \(f_{0}\) is the frequency of the signal, and \(c_{0}\) is the speed of light.

To perform sensing, the UAV \(u\in\mathcal{U}\) estimates the RCS of a certain point on the ground, denoted as \(p\), located at the coordinates \(\mathbf{r}_{p}=(x_{p},y_{p},0)\). For this purpose, \(u\) utilizes a digital receive beamformer \(\mathbf{w}_{p}\in\mathbb{C}^{n^{2}\times 1}\). The reflection from point \(p\) arriving at UAV \(u\) has an angle-of-arrival (AoA) of \(\varphi_{p,u}=(\theta_{p,u},\phi_{p,u})\), where \(\theta_{p,u}\) corresponds to the elevation angle and \(\phi_{p,u}\) to the azimuth. The corresponding beam-steering vector \(\mathbf{g}(\varphi_{p,u})\) has elements \(g_{ij}(\varphi_{p,u})\), for all \(i,j=1,\ldots,n\), where \(i\) is the index corresponding to the antenna element in the \(x\) axis and \(j\) in the \(y\) axis of the UPA, defined as [11] \[g_{ij}(\varphi_{p,u})=e^{-j\pi(i-1)\sin(\theta_{p,u})\sin(\phi_{p,u})}\times e^{-j\pi(j-1)\sin(\theta_{p,u})\cos(\phi_{p,u})}. \tag{1}\] The steering matrix \(\mathbf{G}_{u}\in\mathbb{C}^{n^{2}\times H}\) contains the steering vectors corresponding to the \(H\) reflections captured at UAV \(u\) as \[\mathbf{G}_{u}=[\mathbf{g}(\varphi_{1,u}),\ldots,\mathbf{g}(\varphi_{H,u})]_{n^{2}\times H}, \tag{2}\] where \(n^{2}\) is the total number of antennas at UAV \(u\). After applying the receive beamformer \(\mathbf{w}_{p}\) at reception, the beam pattern from the reflections captured at \(u\) is given by \[\boldsymbol{\chi}=\mathbf{G}_{u}^{T}\mathbf{w}_{p}=[\chi(\varphi_{1,u}),\ldots,\chi(\varphi_{H,u})]^{T}, \tag{3}\] where \(\chi(\varphi_{p,u})\) is the gain of the reflection coming from \(p\), and \(\boldsymbol{\chi}\) is the beam pattern vector of size \(H\times 1\) containing the gains at every AoA obtained by applying beamformer \(\mathbf{w}_{p}\) at UAV \(u\).
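As a quick numerical companion to Eqs. (1)-(3), the following NumPy sketch (function names are ours, purely illustrative) assembles the UPA steering vector and evaluates the beam pattern obtained with a given receive beamformer:

```python
import numpy as np

def upa_steering_vector(theta, phi, n):
    """Steering vector of an n x n half-wavelength UPA for AoA (theta, phi), Eq. (1)."""
    ix = np.arange(n)  # plays the role of (i - 1) for i = 1, ..., n
    gx = np.exp(-1j * np.pi * ix * np.sin(theta) * np.sin(phi))
    gy = np.exp(-1j * np.pi * ix * np.sin(theta) * np.cos(phi))
    return np.kron(gx, gy)  # length n^2; entry (i, j) is the product in Eq. (1)

def beam_pattern(aoas, w):
    """Gains chi at each AoA after applying beamformer w, Eq. (3)."""
    n = int(round(np.sqrt(w.size)))
    G = np.stack([upa_steering_vector(t, p, n) for t, p in aoas], axis=1)
    return G.T @ w
```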
## III Distributed Sensing Protocol

For the sensing process, it is considered that the total area \(S\) is sectioned into a grid composed of \(L\times L\) square cells with dimensions \(d\times d\), with \(d=\ell/L\). Each cell is characterised by its middle point \(p\) of position \(\mathbf{r}_{p}=(x_{p},y_{p},0)\), such that \(p\in\mathcal{P}\), where \(\mathcal{P}\) is the set of all cells. For notational simplicity, we will refer to a certain cell by its middle point \(p\). The point \(p^{*}\) represents the target, which is located at the position \(\mathbf{r}_{p^{*}}=(x_{p^{*}},y_{p^{*}},0)\). It is also considered that, at a certain time, a UAV \(u\in\mathcal{U}\) illuminates straight down with its UPA working as a phased array; thus, the half power beam width (HPBW) projection forms a circle on the ground. In this sense, it is assumed that the cells that are completely inside the largest inscribed square of the HPBW projection are the intended ones to be sensed by the reflections produced from the illumination of UAV \(u\), and are characterised as the cell set \(\mathcal{P}_{u}\), while the set of non-intended illuminated cells \(\mathcal{P}_{u}^{\prime}\) contains the cells that are not inside the largest inscribed square, which are treated as clutter, as illustrated in Fig. 2. In total, the set of illuminated cells is given as \(\mathcal{Q}_{u}=\mathcal{P}_{u}\cup\mathcal{P}_{u}^{\prime}\).

Fig. 1: System model.

Fig. 2: Illumination grid.

The distributed sensing framework is summarized as follows.

**Step 1:** The UAVs coordinate and assume their positions to cover the whole area of interest \(S\), such that every cell in \(\mathcal{P}\) is contained in a single \(\mathcal{P}_{u}\), \(u\in\mathcal{U}\).

**Step 2:** UAV \(u\in\mathcal{U}\) illuminates the ground directly below acting as a phased array, illuminating the elements of \(\mathcal{Q}_{u}\) and, potentially, the target \(p^{*}\).

**Step 3:** Every other UAV \(u^{\prime}\in\mathcal{U}\backslash\{u\}\) processes the incoming reflections by choosing a cell \(p\in\mathcal{P}_{u}\) and, for that cell,
* computes and applies a digital receive beamformer as described in Section IV, and
* computes the periodogram corresponding to \(p\) and estimates its RCS as described in Section V.

**Step 4:** Repeat Step 3 for all cells \(p\in\mathcal{P}_{u}\).

**Step 5:** Repeat Steps 2-4 for all UAVs \(u\in\mathcal{U}\). After this, each UAV \(u\) has an estimated RCS map of the grid, \(\hat{\boldsymbol{\Gamma}}_{u}\), which is a matrix of RCS estimates of all cells in \(\mathcal{P}\setminus\mathcal{P}_{u}\). This occurs because the UAV \(u\) does not estimate the RCS of the cells in \(\mathcal{P}_{u}\), thus avoiding the need for a full-duplex system at the UAVs.

**Step 6:** All UAVs \(u\in\mathcal{U}\) send their RCS estimation maps \(\hat{\boldsymbol{\Gamma}}_{u}\) to the FC for information-level fusion.

**Step 7:** The FC fuses the estimates together into the fused RCS map \(\hat{\boldsymbol{\Gamma}}\), and, by assuming a non-reflective ground such that the RCS of the ground is smaller than that of the target (\(\sigma_{\mathrm{G}}<\sigma_{\mathrm{T}}\)), the target is estimated to be located in the cell of highest estimated RCS, i.e. in \(\operatorname*{argmax}\hat{\boldsymbol{\Gamma}}\), as described in Section VI.

## IV Beamformer Design

The receive beamformer is designed to have the main lobe of the resulting beam pattern steered towards the intended cell \(p\) in order to estimate its RCS. To this end, two different approaches are considered for the design of the receive beamformer, namely a least-squares (LS) heuristic formulation and the minimum variance beam pattern based on the Capon method. These approaches are described in the following.

### _Least-Squares heuristic approach_

For this approach, the beamformer is obtained by solving the following constrained LS optimisation problem [12, 13] \[\textbf{P1:}\quad\operatorname*{minimise}\quad||\textbf{G}^{T}\textbf{w}_{p}-\textbf{v}||_{2}^{2} \tag{4a}\] \[\operatorname*{subject\ to}\quad||\textbf{w}_{p}||_{2}^{2}=1, \tag{4b}\] where **v** is the desired response vector over the \(H^{\prime}\) AoAs in the beam-steering matrix \(\textbf{G}\in\mathbb{C}^{n^{2}\times H^{\prime}}\).
In this approach, a heuristic is employed in which the AoAs in **G** are chosen such that they span evenly over the elevation and azimuth ranges, centred around the intended AoA \(\varphi_{p,u}\). The AoAs are taken as a mesh of \(n\) elevation angles and \(4n\) azimuth angles, respectively given by \[\theta_{i}=\mod\left(\theta_{p,u}+\frac{i\pi}{2(n-1)},\frac{\pi}{2}\right),\quad i=0,\ldots,n-1 \tag{5}\] \[\phi_{j}=\mod\left(\phi_{p,u}+\frac{j2\pi}{4n-1},2\pi\right),\ \ j=0,\ldots,4n-1, \tag{6}\] such that \(H^{\prime}=4n^{2}\). The solution of **P1** is well known to be \(\textbf{w}_{p}=(\textbf{G}^{T})^{\dagger}\textbf{v}\) [13], where \((\cdot)^{\dagger}\) is the pseudo-inverse; since \(\textbf{G}^{T}\) is a matrix with more rows than columns, it can be efficiently solved by applying Cholesky factorization. Therefore, the iterative LS algorithm proposed in [12] can be employed to solve **P1**.

### _Capon method_

The Capon method provides minimum-variance distortionless response beamforming and can be formulated as a quadratic program (QP) convex optimisation problem [14] \[\textbf{P2:}\quad\operatorname*{minimise}\quad\textbf{w}_{p}^{H}\textbf{R}\textbf{w}_{p} \tag{7a}\] \[\operatorname*{subject\ to}\quad\textbf{w}_{p}^{H}\textbf{g}(\varphi_{p,u^{\prime}})=1, \tag{7b}\] where \(\textbf{R}\in\mathbb{C}^{n^{2}\times n^{2}}\) is the covariance matrix of the received signal over the desired direction, which can be defined as \(\textbf{R}=\textbf{g}(\varphi_{p,u^{\prime}})\textbf{g}(\varphi_{p,u^{\prime}})^{H}+\alpha\textbf{I}\) [12], where \(\textbf{I}\in\mathbb{R}^{n^{2}\times n^{2}}\) is the identity matrix and \(\alpha\) is a small real number. The solution for **P2** is obtained as in [14] and is given by \[\textbf{w}_{\textbf{p}}=\frac{\textbf{R}^{-1}\textbf{g}(\varphi_{p,u^{\prime}})}{\textbf{g}(\varphi_{p,u^{\prime}})^{H}\textbf{R}^{-1}\textbf{g}(\varphi_{p,u^{\prime}})}. \tag{8}\]
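A lightweight sketch of both designs is given below (continuing the NumPy snippet above). Note that, for brevity, the LS version simply solves the unconstrained problem with a pseudo-inverse and renormalises the result, instead of running the iterative constrained solver of [12]; the Capon beamformer follows Eq. (8) directly:

```python
import numpy as np

def ls_beamformer(G, v):
    """Heuristic LS design for P1: w = (G^T)^dagger v, renormalised to unit norm."""
    w, *_ = np.linalg.lstsq(G.T, v, rcond=None)
    return w / np.linalg.norm(w)

def capon_beamformer(g, alpha=1e-3):
    """Capon design, Eq. (8), with R = g g^H + alpha * I."""
    R = np.outer(g, g.conj()) + alpha * np.eye(g.size)
    Ri_g = np.linalg.solve(R, g)
    return Ri_g / (g.conj() @ Ri_g)
```

For example, `capon_beamformer(upa_steering_vector(0.3, 1.0, 8))` yields a beamformer with unit response towards that AoA.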
## V Periodogram

For performing sensing, the UAVs illuminating the ground transmit frames consisting of \(N\) OFDM symbols, each consisting of \(M\) orthogonal subcarriers. The transmitted OFDM frame can be expressed as a matrix denoted by \(\textbf{F}_{\textbf{TX}}=[c_{k,l}^{TX}]\in\mathcal{A}^{N\times M}\), with \(k=0,\ldots,N-1\) and \(l=0,\ldots,M-1\), where \(\mathcal{A}\) is the modulated symbol alphabet. At the side of the sensing UAVs, the received frame matrix is denoted by \(\textbf{F}_{\textbf{RX}}=[c_{k,l}^{RX}]\) and is composed of the received baseband symbols corresponding to all reflections from \(\mathcal{Q}_{u}\) at UAV \(u^{\prime}\). The elements of the received frame matrix have the form [15, 16] \[c_{k,l}^{RX}=b_{p}c_{k,l}^{TX}\chi(\varphi_{p,u^{\prime}})e^{j2\pi f_{D,p}T_{o}l}e^{-j2\pi\tau_{p}\Delta fk}e^{-j\zeta_{p}}+\sum_{p^{\prime}\in\mathcal{Q}_{u}\backslash\{p\}}b_{p^{\prime}}c_{k,l}^{TX}\chi(\varphi_{p^{\prime},u^{\prime}})e^{j2\pi f_{D,p^{\prime}}T_{o}l}e^{-j2\pi\tau_{p^{\prime}}\Delta fk}e^{-j\zeta_{p^{\prime}}}+\delta_{u}b_{p^{*}}c_{k,l}^{TX}\chi(\varphi_{p^{*},u^{\prime}})e^{j2\pi f_{D,p^{*}}T_{o}l}e^{-j2\pi\tau_{p^{*}}\Delta fk}e^{-j\zeta_{p^{*}}}+z_{k,l}, \tag{9}\] where \(f_{D,p}\) is the Doppler shift experienced by the reflection from \(p\) (assumed constant through the frame), \(T_{o}\) is the OFDM symbol duration (including the cyclic prefix time \(T_{CP}\)), \(\tau_{p}\) is the delay of the reflection from \(p\), \(\Delta f\) is the inter-carrier spacing, \(\zeta_{p}\) is a random phase shift of the reflection from \(p\), \(z_{k,l}\) is additive white Gaussian noise (AWGN) of spectral density \(N_{0}\), and \(b_{p}\) is a term embedding the propagation of the wave and the reflecting characteristics of the reflector in \(p\). In this expression, the first term corresponds to the intended cell \(p\); the second term corresponds to the interference from the other cells, \(p^{\prime}\), in \(\mathcal{Q}_{u}\); and the third term corresponds to the target reflection, with \(\delta_{u}=1\) if the target has been illuminated by UAV \(u\), and \(\delta_{u}=0\) otherwise. Considering a point-source model, \(b_{p}\) is the amplitude attenuation of the signal, given by [16] \[b_{p}=\sqrt{\frac{P_{T}G_{T}\sigma_{p}\lambda^{2}}{(4\pi)^{3}d_{p,1}^{2}d_{p,2}^{2}}}, \tag{10}\] where \(\sigma_{p}\in\{\sigma_{\mathrm{G}},\sigma_{\mathrm{T}}\}\), \(P_{T}\) is the transmit power, \(G_{T}\) is the transmit antenna gain, \(d_{p,1}\) is the distance from \(u\) to \(p\), and \(d_{p,2}\) is the distance from \(p\) to \(u^{\prime}\).

The received complex symbols \(c_{k,l}^{RX}\) contain the transmitted symbols \(c_{k,l}^{TX}\) and are thus data-dependent. In order to process \(\mathbf{F_{RX}}\), this data-dependency is removed by performing element-wise division, \(\mathbf{F}=\mathbf{F_{RX}}\oslash\mathbf{F_{TX}}\), to obtain the processed samples consisting of elements \(c_{k,l}=c_{k,l}^{RX}/c_{k,l}^{TX}\). To estimate the delay and Doppler from \(\mathbf{F}\), a common approach for OFDM signals is to use the periodogram, which provides the maximum likelihood (ML) estimator [15]. The periodogram takes the fast Fourier transform (FFT) of \(\mathbf{F}\) over OFDM symbols, followed by the inverse FFT (IFFT) over subcarriers, at a given delay-Doppler pair \((n,m)\), as [15] \[P_{\mathbf{F}}(n,m)=\frac{1}{NM}\left|\sum_{k=0}^{N^{\prime}-1}\left(\sum_{l=0}^{M^{\prime}-1}c_{k,l}e^{-j2\pi l\frac{m}{M^{\prime}}}\right)e^{-j2\pi k\frac{n}{N^{\prime}}}\right|^{2}, \tag{11}\] where \(M^{\prime}\geq M\) and \(N^{\prime}\geq N\) are the lengths of the FFT and IFFT operations, respectively, \(n=0,\ldots,N^{\prime}-1\), and \(m=0,\ldots,M^{\prime}-1\).¹ It has been proven that the ML estimator of the delay and Doppler for a single target coincides with the maximum point of the periodogram, \((\hat{n},\hat{m})=\operatorname*{argmax}_{n,m}P_{\mathbf{F}}(n,m)\) [15].

Footnote 1: If \(M^{\prime}>M\) or \(N^{\prime}>N\) is needed in order to have more \(n\) or \(m\) values, padding is used by setting the added padded symbols to zero.
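Since the nested sums in Eq. (11) are two zero-padded discrete Fourier transforms, the periodogram can be evaluated with standard FFT routines, as in the following sketch (ours; `F` is the element-wise quotient defined above, with rows indexed by \(k\) and columns by \(l\), and the code follows the kernels as displayed in Eq. (11)):

```python
import numpy as np

def periodogram(F, N_pad, M_pad):
    """P_F(n, m) of Eq. (11): transform over l (inner sum), then over k (outer sum)."""
    N, M = F.shape
    C = np.fft.fft(F, n=M_pad, axis=1)  # inner sum over l, kernel e^{-j2*pi*l*m/M'}
    C = np.fft.fft(C, n=N_pad, axis=0)  # outer sum over k, kernel e^{-j2*pi*k*n/N'}
    return np.abs(C) ** 2 / (N * M)

# The single-target ML delay-Doppler estimate is the peak location:
# n_hat, m_hat = np.unravel_index(np.argmax(P), P.shape)
```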
The periodogram is maximised when \[\frac{f_{D}}{\Delta f}-\frac{\hat{m}}{M}=0\quad\wedge\quad\frac{\tau}{T_{o}}-\frac{\hat{n}}{N}=0. \tag{12}\] Then, from (9), (10) and (11), \(\sigma_{p}\) can be estimated as \[\hat{\sigma}_{p}=\left(\frac{1}{NM}\right)\frac{P_{\mathbf{F}}(\hat{n},\hat{m})(4\pi)^{3}d_{p,1}^{2}d_{p,2}^{2}}{P_{T}G_{T}\lambda^{2}}. \tag{13}\] Then, considering the geometry and protocol of the system, each UAV can set \(\hat{m}\), \(M\), \(\hat{n}\), and \(N\) so that (12) is met exactly for each cell \(p\) to be sensed, and the corresponding RCS estimate is obtained by computing (13).

## VI Information-Level Fusion

After all UAVs \(u\in\mathcal{U}\) have sent their local RCS estimates for each cell on the grid, \(\hat{\mathbf{\Gamma}}_{u}\), to the FC, it performs information-level fusion of the local estimates to obtain a global estimate \(\hat{\mathbf{\Gamma}}\). Then, the following hypothesis test is performed for all the elements of the grid, \(p\in\mathcal{P}\): \[\mathcal{H}_{0}:\ ||\mathbf{r}_{p^{*}}-\mathbf{r}_{p}||_{\infty}>\frac{d}{2} \tag{14a}\] \[\mathcal{H}_{1}:\ ||\mathbf{r}_{p^{*}}-\mathbf{r}_{p}||_{\infty}\leq\frac{d}{2}, \tag{14b}\] where \(||\cdot||_{\infty}\) is the \(L^{\infty}\) norm. Hypothesis \(\mathcal{H}_{1}\) states the cases when the target \(p^{*}\) is considered to be located in the corresponding cell \(p\), and is considered to be met at the cell \(p\) that has the maximum value estimate \(\hat{\sigma}=\max\hat{\mathbf{\Gamma}}\). On the other hand, \(\mathcal{H}_{0}\) states the cases when the target is not located at \(p\), and is considered to be met at every other cell \(p\) that has an estimate \(\hat{\sigma}\) such that \(\hat{\sigma}<\max\hat{\mathbf{\Gamma}}\). The information-level fusion is carried out using two techniques, namely averaging and pre-normalising the local estimates before averaging.

**Average:** The FC averages the values of the cells over the local maps from all UAVs in \(\mathcal{U}\), such that \(\hat{\mathbf{\Gamma}}=\frac{1}{U}\sum_{u\in\mathcal{U}}\hat{\mathbf{\Gamma}}_{u}\).

**Pre-normalised average:** An average of the pre-normalised local maps is obtained, in which each local map \(\hat{\mathbf{\Gamma}}_{u}\) is normalised between 0 and 1 as \[\bar{\sigma}=\frac{\hat{\sigma}-\min\left(\hat{\mathbf{\Gamma}}_{u}\right)}{\max\left(\hat{\mathbf{\Gamma}}_{u}\right)-\min\left(\hat{\mathbf{\Gamma}}_{u}\right)}\ \,\ \ \forall\hat{\sigma}\in\hat{\mathbf{\Gamma}}_{u}\ \,\ \ \forall u\in\mathcal{U}. \tag{15}\] The resulting normalised local maps \(\bar{\boldsymbol{\Gamma}}_{u}\) are then averaged as in the previous approach.
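Both fusion rules reduce to one-liners over the stacked local maps, as the following sketch illustrates (ours; for brevity it ignores the cells in \(\mathcal{P}_{u}\) that each UAV does not estimate):

```python
import numpy as np

def fuse_average(local_maps):
    """Element-wise average of the local RCS maps (first rule)."""
    return np.mean(local_maps, axis=0)

def fuse_prenormalised(local_maps):
    """Normalise each local map to [0, 1] as in Eq. (15), then average."""
    normed = [(g - g.min()) / (g.max() - g.min()) for g in local_maps]
    return np.mean(normed, axis=0)

# H_1 is declared at the cell with the largest fused estimate:
# i_hat, j_hat = np.unravel_index(np.argmax(fused_map), fused_map.shape)
```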
## VII Numerical Results

In this section, the performance of the proposed sensing protocol is evaluated in terms of the probability of detection, where detection is considered to occur whenever \(\mathcal{H}_{1}\) is achieved for the cell that contains the target. For this purpose, Monte Carlo simulations were performed, where the target is randomly located at each iteration, and the simulation parameters are presented in Table I, unless stated otherwise. The value for \(\sigma_{\mathrm{T}}\) is assumed as 10 dBsm,² which is a reasonable value for large vehicles [17], and -30 dBsm for the ground, which is reasonable for grass or dirt [18]. The OFDM parameters are taken from [10]. The UAVs are uniformly distributed across \(S\) in a way to cover the whole grid, and each one illuminates \(L/\sqrt{U}\times L/\sqrt{U}\) cells while avoiding intersections between the \(\mathcal{P}_{u}\) cells, unless stated otherwise.

Footnote 2: \(\sigma\,[\text{dBsm}]=10\log_{10}(\sigma\,[\mathrm{m}^{2}]/1\,\mathrm{m}^{2})\)

In Fig. 3 the detection probability is shown as a function of the cell side length \(d\) for different \(\sigma_{\mathrm{G}}\) values. The number of intended cells per UAV is maintained constant; thus, the size of the cells determines the total size of the area \(S\) and the altitude of the UAVs \(h\), which increases as \(d\) increases to accommodate the same number of larger cells in the HPBW. Note that, as \(d\) increases, \(h\) also increases, so the \(b_{p}\) value from (10) moves closer to the noise level and the probability of detection decreases. There exists a maximum point around \(d=2\) m, where the best probability of detection is achieved. As expected, as \(\sigma_{\mathrm{G}}\) increases, the difference between the RCS of the ground and that of the target decreases, so the probability of detection also decreases. Comparing the beamforming techniques, both show a similar behaviour. However, when comparing fusion techniques, pre-normalising the local estimates performs better only for larger \(d\) and \(\sigma_{\mathrm{G}}\) values.

Conversely, in Fig. 4 the total size of the area \(S\) and the altitude of the UAVs \(h\) are kept constant while varying \(d\). This is accomplished by adjusting the number of cells in the grid, \(L\). In this case, note that higher \(d\) values lead to a better probability of detection, as there is a higher area per cell. However, a local optimum can be appreciated around \(d=4\) m, which offers more precise detection.

Furthermore, in Fig. 5 the detection probability is plotted for different values of \(\sigma_{\mathrm{G}}\), different values of a modified threshold \(d(\frac{1}{2}+\Delta)\) in the hypothesis test (14), and different values of \(d\). The curves show the probability of detection of the target at \(\Delta\) cells away from the cell with the maximum value in \(\bar{\Gamma}\). Note that values of \(\sigma_{\mathrm{G}}\leq-10\) dBsm present a probability of detection close to 100% for \(\Delta\geq 1\), which is within a distance of one cell. This suggests a probability of detection above 99.89% with high accuracy within \(5\) cm (\(d=0.01\) m, \(\Delta=2\), \(\sigma_{\mathrm{G}}\leq-10\) dBsm), which is more accurate than other state-of-the-art works utilising MIMO OFDM bi-static radar, such as [19], which achieves an accuracy of \(3\) m using passive radar in a multi-user MIMO OFDM setting. The results show that, for small \(\sigma_{\mathrm{G}}\) values, most misdetections occur in an adjacent cell.

Fig. 6 illustrates the detection probability as a function of the number of antennas in the UPAs of the UAVs, for different \(\sigma_{\mathrm{G}}\) values. Therein, the number of UAVs and the number of illuminated cells per UAV are maintained constant, so that narrower beams imply that the UAVs increase their altitudes to accommodate the same intended cells. It is worth noting that increasing the number of antennas results in a narrower main beam, and as the beam becomes narrower (higher \(n\) values), an improvement in the probability of detection is observed due to the increased directionality and precision towards the intended sensed cells. However, when the beam becomes too narrow, a small beam misalignment has a bigger impact on the detection of the target, and the increase in the UAV altitudes causes a stronger pathloss, making the received signal closer to the noise level; thus, the probability of detection decreases.
For larger \(\sigma_{\mathrm{G}}\) values, the probability of detection decreases even further, as expected. It is also noticed that both fusion techniques show similar detection probability results, similar to the case with both beamforming techniques. However, the Capon method shows a slightly better performance for a high number of antennas and a small \(\sigma_{\mathrm{G}}\) value. Moreover, for smaller \(\sigma_{\mathrm{G}}\) values, the fusion by averaging slightly outperforms the pre-normalised averaging approach, while for higher \(\sigma_{\mathrm{G}}\) values the opposite is true.

Fig. 3: Detection probability of the target for different cell length \(d\) for different beamforming techniques, fusion techniques and \(\sigma_{\mathrm{G}}\) values. The number of UAVs and the number of cells illuminated per UAV are kept constant, so larger \(d\) values imply a larger total area and higher UAV altitude.

Fig. 4: Detection probability of the target for different cell length \(d\) for different beamforming and fusion techniques. The total area and UAV altitude are kept constant, so larger \(d\) values imply fewer cells illuminated per UAV.

Fig. 5: Detection probability of the target at a \(\Delta\) cells distance for different \(\sigma_{\mathrm{G}}\) values and different cell size \(d\).

Fig. 6: Detection probability of the target for different total number of antennas \(n^{2}\) of the UAV UPA for different beamforming techniques, fusion techniques and \(\sigma_{\mathrm{G}}\) values.

Fig. 7 illustrates the detection probability as a function of the common UAV altitude \(h\) for varying \(\sigma_{\mathrm{G}}\) and \(\Delta\) values. The UAVs are positioned in a similar configuration to the previous figure; thus, fewer cells are covered by the main beam of the transmitting UAVs at smaller altitudes, resulting in some cells not being illuminated by any UAV. The maximum altitude is considered to be the one where all cells are illuminated once (no overlapping). As the \(h\) value increases, each \(\mathcal{P}_{u}\) set goes from allocating \(1\times 1\) cell, to \(3\times 3\) cells, and finally to \(5\times 5\) cells, such that all cells are illuminated once. This behaviour can be seen in the \(\Delta=0\) curve, where a sudden increase in the probability of detection is observed at altitudes where more cells are allocated in \(\mathcal{P}_{u}\); this tendency is also observed for higher \(\sigma_{\mathrm{G}}\) values, with worse performance. For higher \(\Delta\) values, the probability of detection is higher and increases smoothly with \(h\), as higher \(\Delta\) implies that the detection can be considered successful on non-illuminated cells that are adjacent to illuminated cells. This is particularly observed for \(\Delta=2\), where every cell in the grid is considered for detection.

## VIII Conclusions

In this paper, a half-duplex distributed sensing framework for UAV-assisted networks was proposed, in which the area of interest is sectioned into a grid, and the RCS of each cell is estimated by employing receive digital beamforming and a periodogram-based approach and later sent to an FC for information-level fusion. Results show that the detection probability of the system increases for ground cells of smaller RCS values and that high accuracy can be achieved within a one-cell distance. Increasing the number of antennas in the UAVs improves the detection probability of the target; however, the resulting increase of the altitude of the UAVs can deteriorate it.
Moreover, it was found that the detection probability is higher for larger cell sizes \(d\) if the UAV altitude is kept constant, although a local maximum also appears at a smaller \(d\) value. Future works can consider the effect of the Doppler and position control of the UAVs to increase the sensing performance of the framework.

## Acknowledgement

This research has been supported by the Academy of Finland, 6G Flagship program under Grant 346208 and project FAITH under Grant 334280.
2308.05543
Deep Richardson-Lucy Deconvolution for Low-Light Image Deblurring
Images taken under the low-light condition often contain blur and saturated pixels at the same time. Deblurring images with saturated pixels is quite challenging. Because of the limited dynamic range, the saturated pixels are usually clipped in the imaging process and thus cannot be modeled by the linear blur model. Previous methods use manually designed smooth functions to approximate the clipping procedure. Their deblurring processes often require empirically defined parameters, which may not be the optimal choices for different images. In this paper, we develop a data-driven approach to model the saturated pixels by a learned latent map. Based on the new model, the non-blind deblurring task can be formulated into a maximum a posteriori (MAP) problem, which can be effectively solved by iteratively computing the latent map and the latent image. Specifically, the latent map is computed by learning from a map estimation network (MEN), and the latent image estimation process is implemented by a Richardson-Lucy (RL)-based updating scheme. To estimate high-quality deblurred images without amplified artifacts, we develop a prior estimation network (PEN) to obtain prior information, which is further integrated into the RL scheme. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art algorithms both quantitatively and qualitatively on synthetic and real-world images.
Liang Chen, Jiawei Zhang, Zhenhua Li, Yunxuan Wei, Faming Fang, Jimmy Ren, Jinshan Pan
2023-08-10T12:53:30Z
http://arxiv.org/abs/2308.05543v1
# Deep Richardson-Lucy Deconvolution for Low-Light Image Deblurring

###### Abstract

Images taken under the low-light condition often contain blur and saturated pixels at the same time. Deblurring images with saturated pixels is quite challenging. Because of the limited dynamic range, the saturated pixels are usually clipped in the imaging process and thus cannot be modeled by the linear blur model. Previous methods use manually designed smooth functions to approximate the clipping procedure. Their deblurring processes often require empirically defined parameters, which may not be the optimal choices for different images. In this paper, we develop a data-driven approach to model the saturated pixels by a learned latent map. Based on the new model, the non-blind deblurring task can be formulated into a maximum a posteriori (MAP) problem, which can be effectively solved by iteratively computing the latent map and the latent image. Specifically, the latent map is computed by learning from a map estimation network (MEN), and the latent image estimation process is implemented by a Richardson-Lucy (RL)-based updating scheme. To estimate high-quality deblurred images without amplified artifacts, we develop a prior estimation network (PEN) to obtain prior information, which is further integrated into the RL scheme. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art algorithms both quantitatively and qualitatively on synthetic and real-world images.

Saturated pixels, non-blind deblurring, deep Richardson-Lucy deconvolution

## 1 Introduction

Non-blind deblurring aims to recover a sharp image given a blurry one and the corresponding blur kernel. Basically, the blurring process can be modeled by: \[B=I\otimes K, \tag{1}\] where \(B\), \(I\), and \(K\) are the blurry image, latent image, and the blur kernel, respectively, and \(\otimes\) denotes the convolution operation. This problem is highly ill-posed and has gained considerable attention in recent years [23, 44, 61, 59, 13, 18, 14].

Despite their effectiveness in many cases, most of the above-mentioned methods fail to consider the side-effects brought by saturated pixels, which are not rare when images are captured in a high dynamic range scenario at night, as shown in Fig. 1 (a). Without proper handling, saturated pixels will cause severe artifacts in the deblurred results of the above-mentioned methods, as shown in Fig. 1 (g) and (h). The main reason is that saturated pixels cannot be well modeled by the linear blur model in Eq. (1). To this end, some studies [10, 33, 20, 7, 9] suggest discarding saturated pixels and using only unsaturated pixels for the deblurring process, which reformulates the blur model into: \[\tilde{M}\circ B=\tilde{M}\circ(I\otimes K), \tag{2}\] where \(\tilde{M}\) is the weighting matrix. When \(\tilde{M}\) is sufficiently small (i.e., often taking the value 0), the corresponding pixels do not contribute during the deblurring process, and vice versa for pixels assigned large \(\tilde{M}\) values. However, separating the saturated and unsaturated regions of a blurry image is not trivial. If the separation is inaccurate, the deblurred images may contain significant artifacts around the saturated regions (Fig. 1 (d)). In addition, because the saturated regions are ignored during the deblurring process, blur in these regions cannot be entirely removed, as presented in the green boxes in Fig. 1 (c), (d), and (e).
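To see this failure of the linear model concretely, consider the following 1D simulation (ours, purely illustrative): once the blurred signal is clipped at the sensor limit, the residual between the observation and the linear prediction \(I\otimes K\) concentrates exactly around the bright, saturated pixels.

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.uniform(0.0, 0.4, size=256)   # a dim scene ...
I[100:110] = 5.0                      # ... with one bright light source
K = np.ones(15) / 15.0                # a simple box blur kernel
blurred = np.convolve(I, K, mode='same')
B = np.clip(blurred, 0.0, 1.0)        # limited dynamic range of the sensor
residual = np.abs(B - blurred)        # mismatch w.r.t. the linear model of Eq. (1)
assert residual[:80].max() == 0.0     # unsaturated pixels fit the linear model
assert residual[95:115].max() > 1.0   # saturated pixels do not
```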
Based on the property of the saturated pixels, several algorithms [10, 52, 7] formulate the blurring process with saturated pixels by: \[B_{i}=C\left((I\otimes K)_{i}\right), \tag{3}\] where \(C(\cdot)\) is the clipping function. When the pixel \(i\) is within the dynamic sensor range, \(C((I\otimes K)_{i})=(I\otimes K)_{i}\); otherwise, \(C((I\otimes K)_{i})\) returns the maximum intensity of the sensor range. Compared to the blur models in Eqs. (1) and (2), all pixels are included in the imaging process and can be modeled by Eq. (3). However, the clipping function \(C(\cdot)\) involved in Eq. (3) is non-differentiable. To solve the problem, Whyte et al. [52] approximate it with a specially-designed smooth function [4]. To estimate the latent image, they propose a maximum likelihood (ML) framework and use a Richardson-Lucy (RL) updating scheme [29, 41] for optimization. Although the blur in the saturated regions can be removed to a certain extent, their result in Fig. 1 (b) still contains some ringings and artifacts around image edges due to the lack of appropriate image prior information. Moreover, as the parameters in the smooth function are important for modeling the saturated pixels, determining them by empirical observation alone may not model the saturated pixels well in different images. In addition, the computational costs of their method are also high because of the tedious optimization process.

With the development of convolutional neural networks (CNNs), some methods [30, 39, 49, 25, 51, 50] utilize the large capacity of neural networks to directly restore the sharp image from the blurry input. Without using the blur kernel and blur model, they do not effectively remove blur, as shown in Fig. 1 (f). CNN models are also adopted in the non-blind deblurring task [59, 61]. Most of them iteratively deconvolve and denoise the blurry image in a half-quadratic splitting framework, where CNNs serve as denoisers. However, these methods are less effective for saturated blurry images (Fig. 1 (g)), as they are based on the linear blur model in Eq. (1).

In this paper, we propose a new method for deblurring saturated images. First, based on the blur model in Eq. (3), our method uses a learnable latent map \(M\), which is determined by the latent image and the blur kernel, to replace the clipping function. Because both saturated and unsaturated pixels can be well modeled by Eq. (3), our method does not require sophisticated saturated pixel discarding processes [9], thus avoiding the issues brought by this strategy. We discuss the detailed differences between our work and the model used in NBDN [9] in Sec. 7.1.3. Meanwhile, compared to the previous model in [52] that adopts a smooth function to approximate the clipping function, the proposed method does not require heuristically designed functions and avoids tedious parameter tuning for different images. Comparisons of our model against that in [52] can be found in Sec. 7.1.2. Then, we formulate the deblurring task into a maximum a posteriori (MAP) problem and solve it by iteratively computing the latent map and the latent image. During the latent map estimation step, we use a map estimation network (MEN) to compute \(M\) in every iteration. For the latent image updating step, we use an RL updating strategy to optimize the posterior maximization problem. Recalling that the RL method often introduces artifacts in the deblurred results [48], some works [10, 20] propose to use a sparse prior [27] to suppress ringings.
Even though this prior is effective, it may not be the best choice in practice. Thus, we further propose a prior estimation network (PEN), which is trained on a large number of synthetic saturated blurry and clean image pairs, and integrate it into the RL updating strategy to facilitate image restoration. As shown in Fig. 1 (i), our method is able to handle blurry images with large saturated regions and generates a better deblurred result. The overview of the proposed network is shown in Fig. 2. Extensive experimental results demonstrate the effectiveness of our method against state-of-the-art methods on synthetic and real saturated blurry images.

Fig. 1: Deblurring results of a saturated blurry image. The estimated kernel is shown in the white box of (a). Methods based on Eq. (1) [59, 14] generate results with many ringings in the saturated regions. Large blurs still remain in the result from the end-to-end learning-based method [49]. Saturated pixels cannot be properly handled by the robust models [10, 33, 9, 52], and they are ineffective in removing blur and ringings around the saturated regions, as shown in the green boxes. In comparison, the proposed method can generate a high-quality result with fewer artifacts. Please zoom in for a better view.

## 2 Related Work

This section briefly reviews image deconvolution techniques, advances made in the non-blind deblurring task, and pioneering works involving saturation.

**Image deconvolution.** Deconvolution is a general image processing technique used to remove the blur caused by a known or estimated blur kernel (which is often obtained by blind deblurring methods [6, 5]). Besides being applied in the deblurring task [47, 14, 46, 59], image deconvolution techniques are also widely used in other image restoration tasks, such as image super-resolution [16, 26] and enhancement [2, 32]. Before the widespread use of deep learning techniques, deconvolution had been studied by various researchers using different approaches. Those efforts include the Wiener filter [53, 21], the Richardson-Lucy algorithm [29, 41], and the shock filter [32, 1], to name a few. In recent years, with the development of CNNs, deconvolution has been integrated with deep learning techniques to further improve the image restoration performance [14, 61, 60]. Our method also falls into this category. In Sec. 3, we provide more details on the basis of our method.

**Deblurring without saturation.** Due to the ill-posed nature of the inverse problem, strong image priors are required to regularize the solution space. The hyper-Laplacian prior, known for its sparsity and its fit to the heavy-tailed distribution of natural images, is broadly applied in recent deblurring works [27, 23, 10, 33]. In [43], Roth and Black use a field of experts (FOE) framework to model image priors. The FOE framework is further extended in [45, 44, 54, 13, 38] to restore blurry images. Although good results have been achieved, solving models related to FOE bears a large computational burden. Instead of developing manually-designed regularizers, numerous image restoration algorithms turn to CNNs [46, 55, 59, 61, 13, 40]. Schuler et al. [46] first impose a regularized inversion of the blur in the Fourier domain and then propose to remove the artifacts using a learned multi-layer perceptron (MLP). In [59] and [61], the variable substitution method is used to decouple the non-convex restoration task into a quadratic deconvolution step and a denoising problem which can be effectively solved by a learned denoiser.
Instead of stacking learned regularizers in a deblurring chain, Gong et al. [18] suggest learning a universal projector for different updating steps. Although effective in most cases, these methods neglect the side-effects brought by saturation and are therefore ineffective in dealing with saturated blurry images.

**Deblurring with saturation.** Saturation has not received wide attention in the non-blind deblurring literature. Cho et al. [10] consider saturated pixels as outliers, and they propose an EM-based algorithm to iteratively identify and exclude outliers in the blurry image. This idea is broadly extended in some recent works [20, 33, 17, 7, 9]. In [33], saturated pixels are detected by computing the residual between the blurry image and the convolution output (i.e. \(B-I\otimes K\)). Then a specially-designed fidelity term, where smaller weights are applied to the detected saturated pixels, is developed for optimization. Moreover, based on this saturated pixel detection, Chen et al. suggest more faithful ways to detect saturated pixels, based on the maximum entropy rule [7] or implemented using a CNN [9]. To better handle the saturated regions, Hu et al. [20] combine the works from [52] and [10], and they further propose an EM-based regularized RL deconvolution method. Different from previous works that explicitly exclude saturated pixels, some works suggest altering the clipping function in Eq. (3) so that all pixels are included in the deblurring process. Whyte et al. [52] extend the RL algorithm with an approximation function [4] to model the property of saturated pixels. However, their approach requires empirical parameter settings during optimization, and the deblurring process is also time-consuming. Recently, Chen et al. [8] propose to use a latent map to replace the clipping function, where the latent map is naively determined by the reverse of the convolving result. However, their work requires an empirically defined threshold. This setting works well for the blind deblurring task, since a modest latent image can also lead to a precisely estimated kernel. When it comes to the non-blind deblurring task, the empirically defined threshold may jeopardize the final result. Differently, we use a learning-based approach to compute the latent map, which avoids the empirical parameter tuning and naive settings. There are also works suggesting the use of CNNs to directly deal with the low-light condition. Xu et al. [55] train an end-to-end network to handle blurry images with outliers, but their work demands fine-tuning for every blur kernel. Ren et al. extend [40] to handle any blur kernel by a generalized low-rank approximation. However, their method does not work well for motion blur.

## 3 Preliminary

This section describes the basic formulations of a standard non-blind deblurring method and the original Richardson-Lucy (RL) deconvolution algorithm.

### _Non-Blind Deblurring_

The blurring process formulated in Eq. (1) often involves noise, which is typically modeled as following either a Poisson or a Gaussian distribution. For both cases, the non-blind deblurring process can be formed as the following minimization problem: \[\min_{I}\mathcal{L}(B,I\otimes K)+\lambda P(I), \tag{4}\] where the fidelity term \(\mathcal{L}\) measures the distance between the blurry image and the convolution of the estimated sharp image with the blur kernel.
Fig. 2: Overview of our deep Richardson-Lucy (RL) deconvolution method. We obtain the sharp image by iteratively updating the latent image and the latent map. PEN and MEN denote the prior estimation network and map estimation network, respectively. Please see the text for details.

Assuming the noise involved in the blurring process follows a known distribution, we can reformulate the fidelity term as the negative log-likelihood: \(\mathcal{L}(B,I\otimes K)=-\log\mathcal{P}(B|I\otimes K)\); \(P(I)\) is the prior term for \(I\); and the scalar weight \(\lambda\) balances the contribution of these two terms. If the noise follows a Gaussian distribution, we can model \(\mathcal{L}(B,I\otimes K)=-\log(e^{-\frac{\|B-I\otimes K\|^{2}}{2\sigma^{2}}})\), where \(\sigma\) is the standard deviation of the Gaussian noise model. With the defined fidelity term and prior term, solving Eq. (4) is not difficult, and it is typically solved using standard linear least-squares algorithms, such as conjugate gradient descent. If the noise follows a Poisson distribution, we can model \(\mathcal{L}(B,I\otimes K)=-\log(\frac{e^{-(I\otimes K)}(I\otimes K)^{B}}{B!})\). Note that image noise is assumed to be independent and identically distributed (i.i.d.); hence, \(\mathcal{L}(B,I\otimes K)\) is assessed per pixel. The classic method for deblurring images with Poisson noise is the RL algorithm [29, 41], and we give detailed derivation steps in Sec. 3.2. Following practices from existing deblurring works that aim for saturated images [52, 20, 8], we also assume the noise involved in the blurring process to follow a Poisson distribution.

### _Richardson-Lucy Deconvolution Algorithm_

The RL algorithm [29, 41] is well known for deconvolving blurry images involved in Poisson processes. In this section, we present the derivation of the basic RL scheme for solving Eq. (4) when the noise in the blurring process is assumed to follow a Poisson distribution. As in the original idea in [29, 41], we only consider the fidelity term in the function. Given the likelihood \(\mathcal{P}(B|I\otimes K)=\frac{e^{-(I\otimes K)}(I\otimes K)^{B}}{B!}\), we can reformulate the fidelity term from Eq. (4) into: \[\min_{I}I\otimes K-\log(I\otimes K)\circ B, \tag{5}\] where \(\circ\) is the Hadamard product. Taking the derivative of the equation w.r.t. \(I\) and setting it to be zero, we obtain \[\mathbf{1}\otimes\widetilde{K}-\frac{B}{I\otimes K}\otimes\widetilde{K}=0, \tag{6}\] where \(\mathbf{1}\) is an all-one image and \(\widetilde{K}\) can be obtained by flipping \(K\) upside-down and then left-to-right. Recalling that \(\mathbf{1}\otimes\widetilde{K}=\mathbf{1}\), from Eq. (6) we obtain the fixed point iteration: \[\frac{I^{t+1}}{I^{t}}=\mathbf{1}=\frac{B}{I^{t}\otimes K}\otimes\widetilde{K}\Rightarrow I^{t+1}=I^{t}\circ\left(\frac{B}{I^{t}\otimes K}\otimes\widetilde{K}\right), \tag{7}\] where \(t\) is the iteration index. The updating scheme in Eq. (7) is known as the RL deconvolution algorithm. The convergence property is analyzed in [29]. Please refer to the original papers [41, 29] for detailed descriptions.
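A minimal NumPy/SciPy sketch of the update in Eq. (7) reads as follows (ours; correlating with the doubly flipped kernel implements \(\cdot\otimes\widetilde{K}\), and a small constant guards the division):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(B, K, n_iters=50, eps=1e-8):
    """Classic RL deconvolution, Eq. (7), initialised with the blurry image."""
    I = B.astype(np.float64).copy()
    K_flip = K[::-1, ::-1]  # the flipped kernel K~
    for _ in range(n_iters):
        ratio = B / (fftconvolve(I, K, mode='same') + eps)
        I = I * fftconvolve(ratio, K_flip, mode='same')
    return I
```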
## 4 Proposed Method The overview of the proposed network is shown in Fig. 2. Our method simultaneously estimates the latent map for modeling saturated pixels, the prior information as the image prior, and the latent clean image in an iterative optimization framework. Inputs of our model include a saturated blurry image \(B\) and the corresponding blur kernel \(K\). For every iteration step \(t\), given the latent image \(I^{t}\) updated from the previous iteration, we first use the proposed prior estimation network (PEN) and map estimation network (MEN) to estimate the prior information \(\lambda P^{\prime}(I^{t})\) and the latent map \(M\). Then, the latent image \(I^{t+1}\) can be updated through a Richardson-Lucy-based optimization scheme, and it is further used as the input for the next iteration. We use the blurry image \(B\) as the initialization of the latent image \(I^{0}\). We present details of our model in the following. ### _Proposed Blur Model_ The degrading process in Eq. (3) involves a non-linear clipping function, which is also non-differentiable. To make the problem more tractable, instead of finding complicated approximations to model the degrading process, we propose a learnable latent map \(M\) to replace the clipping function. Assuming the imaging process follows a Poisson distribution, the observed noisy blurry image \(B\), which is Poisson-distributed with mean \(M\circ(I\otimes K)\), obeys the conditional probability: \[\mathcal{P}(B|M\circ(I\otimes K))=\prod_{i}\frac{e^{-(M\circ(I\otimes K))_{i}}((M\circ(I\otimes K))_{i})^{B_{i}}}{B_{i}!},\quad\text{s.t.}\;\;M=\mathcal{F}(I,K) \tag{8}\] where \(i\) denotes a pixel location, and \(\mathcal{F}(\cdot)\) represents a function that determines \(M\) based on \(I\) and \(K\). It is noteworthy that \(M\) has an effect similar to the clipping function in that it keeps the blurry result within the sensor range. With the help of \(M\), saturated pixels can be well modeled by the proposed blur model. Meanwhile, compared to the blur model in Eq. (3), the proposed model in Eq. (8) is differentiable w.r.t. \(I\) and \(K\), which facilitates the following optimization steps. Note that our model is different from that in [8]. Their blind deblurring task does not require a high-quality latent image during the kernel estimation step, so they can use a naive form of \(\mathcal{F}\) (i.e. assuming a uniform threshold \(v\) in the image, for every pixel location \(i\), \(M_{i}=1\) if \((I\otimes K)_{i}<v\), or \(M_{i}=v/(I\otimes K)_{i}\) otherwise) to obtain a roughly estimated \(I\). However, our non-blind deblurring task requires a much more accurate \(I\). Considering that determining proper thresholds for all situations is rather challenging, their empirically selected value (i.e. \(v\) is fixed as 0.9 in their implementation for all images) is far from convincing in our setting. Differently, we suggest using a CNN to approximate \(\mathcal{F}\), which can adapt to different situations given its large learning capacity. We further present detailed comparisons between our model and that from [8] in Section 7.1.4.
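For contrast with the learned MEN, the naive rule of [8] described above can be written as the following sketch (our paraphrase; function and variable names are ours):

```python
import numpy as np
from scipy.signal import fftconvolve

def naive_latent_map(I, K, v=0.9):
    """Threshold rule of [8]: M_i = 1 where (I (x) K)_i < v, and v / (I (x) K)_i otherwise."""
    conv = fftconvolve(I, K, mode='same')
    M = np.ones_like(conv)
    saturated = conv >= v
    M[saturated] = v / conv[saturated]
    return M
```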
### _Optimization_ By formulating the optimization task in a Maximum a Posteriori (MAP) scheme, our objective is to find the optimal \(\{I^{*},M^{*}\}\) so that \[\{I^{*},M^{*}\}=\arg\max_{I,M}\log\mathcal{P}(I,M|B,K),\;\;\text{s.t.}\;\;M=\mathcal{F}(I,K) \tag{9}\] Based on the Bayesian theorem, we obtain \[\begin{split}\{I^{*},M^{*}\}=\arg\max_{I,M}\{&\log\mathcal{P}(B|K,I,M)+\log\mathcal{P}(I,M)\},\\ &\text{s.t.}\;\;M=\mathcal{F}(I,K)\end{split} \tag{10}\] We can solve the above problem by the alternate minimization approach that iteratively updates \(I\) and \(M\). Thus, for each iteration step \(t\), we can obtain the solutions by solving the following two sub-problems, \[\left\{\begin{aligned} M&=\mathcal{F}(I^{t},K),\\ I^{t+1}&=\arg\max_{I}\log\mathcal{P}(B|K,I,M)+\log\mathcal{P}(I).\end{aligned}\right. \tag{11}\] Eq. (11) is deduced from the hard constraint in Eq. (10) that any feasible solution of \(M\) is completely determined by \(I\) and \(K\), which is also the projection of the solution of \(M\) in Eq. (10). The above updating scheme is similar to the projected alternating minimization (PAM) method in [37]. Solving the sub-problem w.r.t. \(M\). Defining a specific form of the function \(\mathcal{F}(\cdot)\) is not trivial. Current methods [7, 8] resort to empirical findings for the definition, which may not suit different images. Given the strong approximation ability of deep neural networks, we develop a map estimation network (MEN) to directly approximate the function. MEN uses the current estimated image \(I^{t}\) and the convolving result \(I^{t}\otimes K\) as inputs and outputs \(M\). By directly solving Eq. (11) using MEN, we can avoid the definition of \(\mathcal{F}(\cdot)\) and the possible heuristic settings brought by it. Solving the sub-problem w.r.t. \(I\). Putting Eq. (8) into the second sub-problem of Eq. (11) and defining the image prior \(\mathcal{P}(I)\) as \(\mathcal{P}(I)=\exp(-\lambda P(I))\), where \(\lambda\) is the weight parameter, the objective can be reformulated as, \[\begin{split}I^{t+1}&=\operatorname*{arg\,max}_{I}\log\mathcal{P}(B|K,I,M)+\log\mathcal{P}(I)\\ &=\operatorname*{arg\,max}_{I}\log\prod_{i}\frac{e^{-(M\circ(I\otimes K))_{i}}((M\circ(I\otimes K))_{i})^{B_{i}}}{B_{i}!}-\lambda P(I)\\ &=\operatorname*{arg\,min}_{I}M\circ(I\otimes K)-\log(M\circ(I\otimes K))\circ B+\lambda P(I).\end{split} \tag{13}\] We use the RL updating scheme to solve Eq. (13) as: \[I^{t+1}=\frac{I^{t}\circ((\frac{B}{I^{t}\otimes K}-M+\mathbf{1})\otimes\widetilde{K})}{\mathbf{1}+\lambda P^{\prime}(I^{t})}, \tag{14}\] where \(P^{\prime}(\cdot)\) denotes the derivative of \(P(\cdot)\), and the divisions are element-wise operations. Please refer to the appendix for detailed derivations. Generally, \(P(I)\) can take the form of the classical sparse prior [27], but this may not be the best choice for all scenarios. In this paper, we propose a prior estimation network (PEN) to directly estimate the derivative of the regularization term (i.e. \(\lambda P^{\prime}(\cdot)\)). PEN takes \(I^{t}\) as input and outputs \(\lambda P^{\prime}(I^{t})\); it implicitly learns the prior information w.r.t. \(I^{t}\) and can be efficiently plugged into the optimization process in Eq. (14). ## 5 Network Details By integrating MEN and PEN, which are trained from a large amount of training data, into the proposed framework, our method can avoid the heuristically defined functions and manual parameter tuning of the prior arts. In this section, we present the network configurations and the implementation details. ### _Network Designs_ The inputs of our method include the blurry image and the corresponding blur kernel. During each iteration stage, we perform the latent map (i.e. \(M\)) estimation via MEN and prior estimation (i.e. \(\lambda P^{\prime}(I)\)) via PEN. The updated latent image can be obtained with the updated \(M\) and \(\lambda P^{\prime}(I)\) by performing the deblurring step given in Eq. (14). The detailed algorithm is shown in Algorithm 1. Network architecture of MEN. In existing methods, e.g.
[52, 8], different hyper-parameter settings may need to be tuned for different images, and a heuristically defined approximation function may not be the best choice in practice. Instead of seeking manually designed representations, we suggest directly learning to estimate the latent map \(M\) by the MEN from numerous training data. MEN takes \(I^{t}\) and \(I^{t}\otimes K\) as inputs and outputs \(M\). It is constructed with six res-blocks, and each block contains two convolution layers that generate 32 features. We add a rectified linear unit (ReLU) after the first convolution layer in every block, and a sigmoid layer is attached at the end. Network architecture of PEN. Instead of using a manually designed image prior and empirically selecting the weight parameter (i.e. \(\lambda\)), we suggest directly estimating the prior information from the intermediate estimated latent image. PEN uses \(I^{t}\) as the input and outputs the prior information \(\lambda P^{\prime}(I^{t})\). It is implemented by a 3-scale lightweight U-net. Specifically, each scale in the U-net contains two convolutions, and each convolution layer is attached with a ReLU layer for activation. The numbers of features from the first to the last scale are 8, 16, and 32, respectively. The proposed network is fully differentiable and can be trained in an end-to-end manner. Both MEN and PEN share weights across different iterations. ### _Network Training and Implementation Details_ To make the training process more stable, we train both PEN and MEN by minimizing the difference between the estimated deblurred image \(I^{t}_{n}\) from every stage and the ground truth \(I^{gt}_{n}\) as: \[\mathcal{E}=\frac{1}{NQ}\sum_{n=1}^{N}\sum_{t=1}^{Q}\|I^{t}_{n}-I^{gt}_{n}\|_{1}, \tag{15}\] where \(\|\cdot\|_{1}\) denotes the \(L_{1}\) norm loss, \(N\) is the number of training samples in every batch, and \(Q\) is the maximum number of updating stages during training. We first train PEN without considering MEN for all updating stages, where \(M\) is fixed as \(\mathbf{1}\). Then both PEN and MEN are optimized until they converge. Our implementation is based on PyTorch [36], and we initialize our network according to [19]. The training is carried out with the ADAM optimizer [22] with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\) and a learning rate of \(0.0001\). We set the batch size as 4, the image size as \(256\times 256\), and the number of iterations \(Q\) as 30. ## 6 Experiments In this section, we use both synthetic data and real-world saturated blurry images to evaluate the effectiveness of our method against state-of-the-art ones for non-blind deblurring. ### _Datasets_ **Training datasets.** We use the training dataset from [9], which contains 500 night images from Flickr. Then we randomly crop 10 patches of size \(256\times 256\) pixels from each of these images. To generate saturated blurry images, we follow the strategy in [20]. The pixels in the patches that are larger than a threshold are enlarged by \(N\) times to obtain the saturated sharp patches1, where the threshold and \(N\) are randomly sampled from 0.75-0.95 and 1.5-5, respectively. In order to simulate motion blur, we use [3] to generate 5 motion kernels with sizes ranging from 11 to 33 pixels for every patch. Thus, a total of 25000 blurry and sharp image pairs are used in our training process. We clip the obtained sharp and blurred patches after convolving the sharp patches with the corresponding blur kernels. Merits of our training data synthesizing strategy can be found in Section 7.6. Footnote 1: The dynamic range of all the images is [0, 1] in this paper.
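A minimal sketch of the pair-synthesis procedure just described (our reading of it; function and variable names are ours):

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_pair(patch, kernel, rng):
    """Build one (blurry, sharp) training pair with synthetic saturation; dynamic range is [0, 1]."""
    thresh = rng.uniform(0.75, 0.95)      # randomly sampled saturation threshold
    n = rng.uniform(1.5, 5.0)             # randomly sampled enlarging factor N
    sharp = patch.copy()
    sharp[sharp > thresh] *= n            # enlarge bright pixels to create saturation
    blurry = fftconvolve(sharp, kernel, mode='same')
    # clip both patches to the sensor range after convolving, as described above
    return np.clip(blurry, 0.0, 1.0), np.clip(sharp, 0.0, 1.0)

# Example: rng = np.random.default_rng(0); blurry, sharp = synthesize_pair(patch, kernel, rng)
```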
**Testing dataset.** For the testing data, we use the 100 test images from [9] without cropping to generate the blurry images in the same way as the training data. For every image, one motion kernel is randomly generated. The training data and test data do not overlap. For the 100 blurry images in the testing set, we compare different methods with the ground truth kernels and the estimated kernels from [20]. Moreover, we also test our method on the saturated dataset provided in [20] and the unsaturated benchmark dataset from [28]. We further use real-world examples to evaluate the different methods. ### _Comparisons with State-of-the-Arts_ In this section, we compare our method with the optimization-based methods [10, 52, 20, 33, 7] that are specially designed for saturated images and some recent learning-based arts, including FCNN [59], IRCNN [61], RGDN [18], DWDN [14], NBDN [9], SRN [49], Uformer [51], and Stripformer [50]. To ensure a fair comparison, we use the original implementations of these arts and finetune [59, 49, 14, 9, 51, 50] on our dataset. We also tune the hyper-parameters involved in the compared optimization-based arts for relatively better performance. **Saturated images from the proposed testing set.** We first evaluate different methods using the proposed testing set in terms of average PSNR and SSIM, and the results are demonstrated in Table I. For both ground truth and estimated kernels, our method performs the best among the evaluated models. Fig. 3 shows visual results from different methods. The deep learning-based approach [49] performs less effectively than other methods due to the lack of blur kernel information and its ignoring of the imaging process, as shown in Fig. 3 (g). The optimization-based methods [10, 20, 33, 7] produce artifacts in the saturated regions because some saturated pixels are not properly handled in their models (Fig. 3 (b), (d), (e) and (f)). Moreover, the details are also not recovered well in their results. This is mainly because of the ineffectiveness of the sparse image prior [27] that they adopted. Artifacts can also be found in the result from [9] (Fig. 3 (i)) because they suggest excluding saturated pixels in their deblurring process. This setting is ineffective because it is difficult to precisely separate saturated and unsaturated pixels in this example. The algorithm based on Eq. (3) (i.e. Whyte et al. [52]) can remove the blur in the saturated regions to a certain extent, as shown in Fig. 3 (c). However, due to the lack of prior information, their result still contains many ringings around the saturated regions.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & Cho & Hu [20] & Whyte & Pan & Chen & SRN & Uformer & Stripformer & FCNN & IRCNN & RGDN & DWDN & NBDN & Ours \\ & [10] & & [52] & [33] & [7] & [49] & [51] & [50] & [59] & [61] & [18] & [14] & [9] & \\ \hline \multicolumn{15}{c}{Results with GT blur kernels} \\ \hline PSNR & 20.27 & 23.26 & 23.75 & 24.68 & 24.85 & 22.95 & 24.20 & 24.48 & 24.80 & 19.34 & 20.94 & 25.02 & 25.25 & **25.66** \\ SSIM & 0.7410 & 0.7755 & 0.8206 & 0.8550 & 0.7713 & 0.7701 & 0.8206 & 0.8293 & 0.8470 & 0.6755 & 0.7530 & 0.8515 & 0.8544 & **0.8595** \\ \hline \multicolumn{15}{c}{Results with estimated kernels from [20]} \\ \hline PSNR & 20.06 & 23.01 & 23.20 & 23.66 & 24.39 & 22.95 & 24.20 & 24.48 & 23.77 & 19.22 & 20.79 & 24.51 & 24.40 & **25.11** \\ SSIM & 0.7318 & 0.7657 & 0.7965 & 0.8239 & 0.8202 & 0.7701 & 0.8206 & 0.8293 & 0.8090 & 0.6728 & 0.7512 & 0.8295 & 0.8291 & **0.8367** \\ \hline \hline \end{tabular} \end{table} TABLE I: Quantitative evaluations on the given saturated blurry testing set. Fig. 3: Deblurring results of a saturated example from the proposed testing set. Our method performs favorably compared with existing non-blind deblurring methods, generating a result with finer details and fewer artifacts in the saturated regions. Please zoom-in for a better view. The learning-based model [59] shows its advantages in the regions without saturation due to the learned prior, as shown in Fig. 3 (h). But it is based on a blur model that does not explicitly consider saturation (e.g. Eq. (1)). It is not surprising that its result contains many artifacts around the saturated pixels. The same can be observed for [14] (Fig. 3 (j)). In comparison, the saturated pixels have less influence on our method because of the robust blur model in Eq. (8). By taking advantage of the learned image prior, our method can restore a clear image with sharper edges and fewer artifacts around the saturated regions (Fig. 3 (k)). **Unsaturated images.** Although our method is specially designed for saturated image deblurring, we show it can also be used to deal with unsaturated data. Blurry images from the benchmark dataset [28] are used for evaluation. PSNR and SSIM values are shown in Table II. We can observe that although our method is designed for saturated images, it performs competitively against state-of-the-art methods on images without saturation. Deblurring results of an unsaturated example from different methods are shown in Fig. 4. Our method performs favorably against existing methods, generating a result with finer details and fewer artifacts compared with others. **Saturated images from Hu et al. [20].** Besides the saturated data in the proposed testing set, we also use the benchmark dataset from Hu et al. [20] to show the effectiveness of the proposed algorithm. This dataset consists of a total of 154 saturated blurry images and 12 blur kernels. Results shown in Table II demonstrate that, with either ground truth kernels or estimated blur kernels, our method can generate favorable results against existing methods. The results show that our method can generalize well to saturated images from another dataset. **Real-world saturated images.** We also use some challenging real-world examples to evaluate our method. The comparisons are presented in Fig. 5 and 6.
We observe that the optimization-based methods [10, 33, 20, 52, 7] have difficulties in simultaneously restoring details and removing artifacts due to the ineffectiveness or the absence of the adopted image prior. In comparison, the learning-based methods [59, 61, 18, 14] can generate results with more details. However, saturated pixels are not specially considered in their models. As a result, the recovered results contain severe artifacts in the saturated regions. The deep learning-based deblurring methods [49, 51, 50] can hardly obtain satisfying results as the imaging process is ignored. Moreover, although the robust learning-based method [9] can generate a result with fewer artifacts, we observe that there are still ringings around the saturated region. This is mainly because their adopted model is ineffective in handling all saturated pixels. Different from those methods, the comparison results demonstrate that our method can prevent side-effects from the saturated pixels while obtaining clearer details at the same time. ### _Model Size and Running Time_ Table III summarizes the total numbers of parameters of different algorithms [10, 33, 20, 52, 59, 61, 18, 14, 9] and their corresponding running time on a \(300\times 300\) blurry image. All methods are evaluated on the same PC with an Intel (R) Xeon (R) CPU and an Nvidia Tesla 1080 GPU. The proposed method does not require many parameters, and it performs favorably against state-of-the-art models in terms of running time. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline & Cho [10] & Hu [20] & Whyte [52] & Pan [33] & Chen [7] & SRN [49] & FCNN [59] & IRCNN [61] & RGDN [18] & NBDN [9] & Ours \\ \hline \multicolumn{12}{c}{Results on the non-saturated data from Levin et al. [28]} \\ \hline PSNR & 32.59 & 31.27 & 26.54 & 32.03 & 33.03 & 31.18 & 33.22 & 33.14 & 33.86 & 32.91 & **34.12** \\ SSIM & 0.9135 & 0.8930 & 0.8301 & 0.9263 & 0.9054 & 0.8972 & 0.9267 & 0.9261 & **0.9335** & 0.9000 & 0.9329 \\ \hline \multicolumn{12}{c}{Results on the extra saturated data from Hu et al. [20]} \\ \hline PSNR & 20.94 & 22.79 & 21.43 & 22.47 & 21.80 & 22.21 & 23.57 & 19.68 & 21.62 & 24.60 & **25.11** \\ SSIM & 0.7022 & 0.7764 & 0.7414 & 0.7583 & 0.7014 & 0.7526 & 0.8004 & 0.6147 & 0.6592 & 0.8274 & **0.8439** \\ \hline \hline \end{tabular} \end{table} TABLE II: Quantitative evaluations on non-saturated data from Levin et al. [28] and extra saturated data from Hu et al. [20]. Fig. 4: Deblurring results of an unsaturated example. The kernel in (a) is from [34]. Although specially designed for saturated images, our method can still perform favorably against existing methods on the unsaturated image, generating a result with finer details and fewer artifacts as shown in the boxes. Please zoom-in for a better view. ## 7 Analysis and Discussion In this section, we first analyze the effectiveness of the proposed method. Then, we analyze the accuracies of the outputs of the proposed MEN and PEN. We further show the robustness of our method against noise and analyze the convergence property of our method. Finally, we discuss the merit of our training sample synthesizing step against that from [9]. ### _Effectiveness of the Proposed Blur Model_ #### 7.1.1 Compared to the Commonly-Used Blur Model. Different from other methods, we use a latent map \(M\) to explicitly handle saturated pixels in the blurry image. \(M\) can be similarly considered as a replacement of the ideal clipping function in Eq. (3).
When the latent map is disabled (e.g. setting \(M=\mathbf{1}\)), our model reduces to the blur model in Eq. (1), which is widely adopted in many existing methods [59, 61, 18]. However, the restored image without the latent map often contains many artifacts in the saturated regions, as depicted in the green boxes of Fig. 7 (b). For the learned latent map \(M\) in Fig. 7 (d), we can observe that it has smaller values in the saturated regions and vice versa for unsaturated pixels. With the help of the latent map, our method can generate a result with fewer ringings around the saturated regions, as shown in Fig. 7 (j). To quantitatively compare our method with the strategy without the latent map, we conduct an ablation study on the proposed testing set. The results in Table IV show that the blur model with the proposed latent map can improve deblurring when saturated pixels are present. Fig. 5: Deblurring results of a real-world example with numerous saturated pixels. The kernel in the white box is from [12]. Our method performs favorably compared with existing non-blind deblurring methods, generating a result with fewer color artifacts in the boxes. Please zoom-in for a better view. Fig. 6: Deblurring results of a real-world example with numerous saturated pixels. The kernel in the white box is from [10]. Our method performs favorably compared with existing non-blind deblurring methods, generating a result with fewer color artifacts in the boxes. Please zoom-in for a better view. \begin{table} \begin{tabular}{c|c|c} \hline Methods & Total parameters (M) & Running time (s) \\ \hline Cho et al. [10] & - & 5.25 (CPU) \\ Whyte et al. [52] & - & 2.86 (CPU) \\ Hu et al. [20] & - & 5.61 (CPU) \\ Pan et al. [33] & - & 15.41 (CPU) \\ Chen et al. [8] & - & 5.45 (CPU) \\ FCNN [59] & 0.45 & 0.13 \\ IRCNN [61] & 0.15 & 1.11 \\ RGDN [18] & 1.26 & 5.34 \\ DWDN [14] & 7.05 & 2.60 \\ NBDN [9] & 0.39 & 0.25 \\ Ours & 0.16 & 0.70 \\ \hline \end{tabular} \end{table} TABLE III: Model size and running time comparisons. #### 7.1.2 Compared with the Blur Model by [52]. We also compare the proposed blur model with that from [52]. To solve the saturated deblurring problem, Whyte et al. designed their blur model based on Eq. (3). They use a specially-designed function [4] to approximate the clipping function. However, their approximation function often requires heuristic parameters, and it is also cumbersome to find the best parameters for different blurry images. Differently, the clipping function is replaced by a learnable latent map in our model, which can be effectively estimated by the map estimation network (MEN). We conduct an ablation study on the proposed testing set w.r.t. these two blur models. To ensure a fair comparison between the proposed latent map-based model and the blur model from [52], we train our method by replacing the latent map with the approximation function in [52] with their default parameter setting. Evaluation results in Table IV illustrate the effectiveness of the proposed model over that from [52]. An example in Fig. 7 (g) shows the limitation of the model in [52], where the corresponding result contains artifacts and residual blur around the saturated pixels in the blue box. In comparison, without heuristic settings, the saturated pixels can be well considered in the latent map (Fig. 7 (d)) of our blur model. As a result, a higher-quality result with fewer artifacts can be obtained, as shown in Fig. 7 (j). #### 7.1.3 Compared with the Blur Model by NBDN [9].
Another blur model is from NBDN [9], where Chen et al. propose to explicitly discard saturated regions by assigning small weights to the saturated pixels so that they are not involved in the deblurring process, and they further propose to use a CNN to better identify the saturated regions 2. This idea is first proposed in [10]. In their settings, the saturated pixels are located based on the residual between the blurry image and the convolution output of the latent image and the blur kernel (i.e. \(B-I\otimes K\)). Footnote 2: Taking no account of noise, the blur model in NBDN [9] can be formulated as \(\hat{M}\circ B=\hat{M}\circ(I\otimes K)\) s.t. \(\hat{M}\in\{0,1\}\), where pixels that violate the linear blur model are assigned small weights to make sure that they are not involved in the deblurring process. In our model, the blur model is \(B=M\circ(I\otimes K)\) s.t. \(M\in[0,1]\), and all pixels are considered during deblurring. However, this strategy has some intrinsic limitations. First, it is difficult to precisely separate saturated and unsaturated pixels using this model. The example shown in Fig. 7 (c) suggests that some informative pixels (i.e. the sharp edges around saturated regions) will be considered saturated and discarded during the deblurring process. Second, because saturated regions are not involved in the deblurring process, artifacts around these regions often remain in the recovered result, as shown in Fig. 7 (h). In comparison, our method does not require a saturated pixel discarding process because all pixels can be modeled by our blur model in Eq. (8), thus avoiding the limitations of this strategy. To ensure a fair comparison, we reimplement our method with the blur model in NBDN [9] and train it with the proposed training data. Results presented in Table IV further validate the effectiveness of our model against that from NBDN [9]. \begin{table} \begin{tabular}{l|c|c|c|c|c} \hline \hline \multicolumn{6}{c}{GT blur kernels} \\ \hline & Ours w/o \(M\) & Ours w/ blur model in Whyte [52] & Ours w/ blur model in NBDN [9] & Ours w/ blur model in Chen [8] & Ours \\ \hline PSNR & 23.34 & 24.07 & 24.69 & 24.13 & 25.66 \\ SSIM & 0.8285 & 0.8304 & 0.8400 & 0.8312 & 0.8595 \\ \hline \multicolumn{6}{c}{Estimated kernels from [20]} \\ \hline & Ours w/o \(M\) & Ours w/ blur model in Whyte [52] & Ours w/ blur model in NBDN [9] & Ours w/ blur model in Chen [8] & Ours \\ \hline PSNR & 23.18 & 23.56 & 24.23 & 24.07 & 25.11 \\ SSIM & 0.8053 & 0.8167 & 0.8199 & 0.8115 & 0.8367 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Comparisons on the proposed testing set w.r.t. different blur models. Ours w/o \(M\) is implemented by fixing \(M=\mathbf{1}\); Ours w/ [52] is implemented by replacing our latent map \(M\) with the approximation in [52]; Ours w/ [8] is implemented by computing \(M\) using the strategy from [8]. Fig. 7: Comparisons between different blur models. Artifacts are presented in the blue and green boxes in (b), (g) - (i), while the result from our model contains fewer artifacts and is close to the GT image. (c) shows the detected saturated pixels from the blur model in NBDN [9], which include almost all informative pixels in the blurry image. Darker pixels indicate smaller values in (c) - (e), where (d) and (e) are processed with the same gamma correction for better viewing. #### 7.1.4 Compared with the Blur Model by Chen et al. [8]. To comprehensively evaluate the effectiveness of the proposed model, we compare it with that from Chen et al. [8].
The method in [8] is mainly designed for the blind deblurring task. Although the same latent map is proposed to replace the clipping function, the latent map \(M\) in their method is computed via a naive strategy: assuming a uniform threshold value \(v\) in the image, for every pixel location \(i\), \(M_{i}=1\) if \((I\otimes K)_{i}<v\), or \(M_{i}=v/(I\otimes K)_{i}\) if \((I\otimes K)_{i}>v\). To compare with this model, we finetune our model using the map estimation strategy from [8] and evaluate it using the proposed testing set. Results are shown in Table IV, where our model outperforms that from [8]. The main reason is that the strategy in [8] assumes the same threshold value among all blurry images (i.e. they empirically use \(v=0.9\) for all blurry samples). This strategy can work in their blind deblurring task, where salient edges of the latent image are the most crucial part for estimating the blur kernel, and an accurately estimated latent image is not necessarily required. However, their strategy may be inapplicable in the non-blind deblurring task, because the same threshold value may not be the best choice for different images. Consequently, their computed latent map may be inaccurate in some examples. As shown in Fig. 7 (e), the latent map from [8] is not concordant with the saturated regions in the given example, and their deblurred result contains moderate artifacts (Fig. 7 (i)). Compared to their model, our latent map \(M\) is computed via a CNN, which avoids the naive settings and is suitable for most scenarios, and the recovered result is also sharper with fewer ringings (Fig. 7 (j)). ### _Effectiveness of PEN_ With only the fidelity term, the deblurring process often tends to amplify noise [11]. Meanwhile, an accurate blur kernel is often inaccessible in real life, and kernel errors will cause significant artifacts and result in undesired image structures [57]. Thus, a decent prior term is required for the deblurring task [57]. The proposed PEN is used to obtain prior information for the MAP model. Fig. 8 (b) and (c) show visualizations of the outputs of PEN at the \(t\)-th iteration (i.e. \(\lambda P^{\prime}(I^{t})\)), which serve as discriminative weights that facilitate the image restoration. We can observe that in regions without edges, \(\lambda P^{\prime}(I^{t})\) has small absolute values (close to 0), and it does not influence the optimization. \begin{table} \begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{Accuracy of \(M\)} \\ \hline iterations & 10 & 20 & 30 \\ \hline MSE of \(M\) & 0.0123 & 0.0091 & 0.0077 \\ \hline \multicolumn{4}{c}{Accuracy of \(\lambda P^{\prime}(I)\)} \\ \hline iterations & 10 & 20 & 30 \\ \hline SSIM of \(\tilde{I}\) (i.e. w/o \(\lambda P^{\prime}(I)\)) & 0.8460 & 0.8582 & 0.8591 \\ SSIM of \(I\) (i.e. w/ \(\lambda P^{\prime}(I)\)) & 0.8467 & 0.8588 & 0.8595 \\ \hline \hline \end{tabular} \end{table} TABLE VI: Evaluations of the accuracies of the outputs of MEN and PEN on the proposed testing set. Fig. 8: Visualization of the output of PEN (i.e., \(\lambda P^{\prime}(I)\)) and MEN (i.e. \(M\)), where \(I^{9}\), \(I^{10}\), \(I^{19}\), and \(I^{20}\) denote the intermediate results at the 9th, 10th, 19th, and 20th iterations, respectively. Parts enclosed in the red and yellow boxes show that our model restores better results over iterations. (i) is obtained by \(\frac{\tilde{I}^{t+1}}{\mathbf{1}+\lambda P^{\prime}(I^{t})}\), and \(\tilde{I}^{t+1}\) is defined in Eq. (16). Our result with PEN is more visually pleasing than that without it (i.e. (j) vs.
(g)), and the intermediate results after applying PEN contain fewer ringings and noise than those before it (i.e. green boxes in (i) vs. (h)). Please zoom-in for a better view. \begin{table} \begin{tabular}{c|l c c} \hline \hline & & PSNR & SSIM \\ \hline GT blur kernels & Without prior & 23.88 & 0.8322 \\ & hyper-Laplacian prior & 24.30 & 0.8411 \\ & prior from PEN & 25.66 & 0.8595 \\ \hline kernels from [20] & Without prior & 23.34 & 0.8156 \\ & hyper-Laplacian prior & 23.79 & 0.8202 \\ & prior from PEN & 25.11 & 0.8367 \\ \hline \hline \end{tabular} \end{table} TABLE V: Comparisons on the given testing set with different image priors. In the edge regions, \(\lambda P^{\prime}(I^{t})\) has larger absolute values, so it can preserve the sharp details and remove artifacts. Meanwhile, according to the iteration strategy in Eq. (14), the intermediate result \(\tilde{I}^{t+1}\) defined by: \[\tilde{I}^{t+1}\triangleq I^{t}\circ((\frac{B}{I^{t}\otimes K}-M+\mathbf{1})\otimes\widetilde{K}), \tag{16}\] contains more artifacts (Fig. 8 (h)) than the result after applying PEN (Fig. 8 (i)). The example validates that our PEN can remove ringings and noise in the deblurred result. The comparison between Fig. 8 (g) and (k) further demonstrates that using PEN generates a better deblurred result. Quantitative evaluations in Table V also show that the proposed model trained without any prior information performs less effectively than that with it. Moreover, compared to the commonly-used hyper-Laplacian prior [27], our PEN shows its advantages in the following aspects. First, the hyper-Laplacian prior is effective at removing ringings, but it can also result in fewer details. In comparison, by learning from numerous data, the learned prior has shown its effectiveness in removing artifacts and preserving details at the same time. Second, the sparse prior often requires a heuristic setting for the weight parameter (i.e. \(\lambda\) in Eq. (13)), and it is tedious to manually tune the parameter for different images. Differently, our PEN can directly learn the prior information (i.e. \(\lambda P^{\prime}(I)\) in Eq. (14)). To quantitatively evaluate the effectiveness of PEN, we conduct an ablation study on the proposed testing set by replacing PEN with the hyper-Laplacian prior and finetuning this model. Note that the hyper-parameter \(\lambda\) is set to 0.003, which is the same as in existing arts [33, 10], throughout the training. The results are shown in Table V. We note that our method with PEN performs better than that with the hyper-Laplacian prior. All these results demonstrate the effectiveness of the proposed PEN. ### _Accuracies of Outputs of MEN and PEN_ We first measure the estimation accuracy of MEN by computing the MSE values of \(M\) in the intermediate updating steps, where the GT \(M\) is computed by \(\frac{B}{I\otimes K}\) with the GT \(I\). Results are shown in Table VI, which shows that the network can output a high-quality \(M\), and it can gradually estimate better maps with more iterations. Examples of interim results of \(M\) are shown in Fig. 8. We can observe that the estimated \(M\) in Fig. 8 (d) and (e) are concordant with our model in that saturated regions have smaller \(M\) values, and \(M\) is closer to the GT over iterations. There is no ground truth label to examine the accuracy of the output of PEN (i.e. \(\lambda P^{\prime}(I)\)). Thus, we use the SSIM values of \(\tilde{I}^{t+1}\) in Eq. (16) and \(I^{t+1}\) in Eq.
(14), which are the intermediate results before and after considering PEN, to measure the accuracy of the output of PEN. Results in Table VI show that the learned prior term can help restore better results during the iterations. ### _Convergence Analysis_ Visual examples in the red and yellow boxes of Fig. 8 (i), (j), and (k) show that our method can restore better results over iterations. To quantitatively evaluate the convergence property of our method, we conduct experiments on the proposed testing set and compute the average PSNR and SSIM values with different numbers of iteration stages. Fig. 9 shows that our method converges well after 30 iterations, and more iteration stages will not further improve the results. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & & Cho [10] & Hu [20] & Pan [33] & Dong [13] & Whyte [52] & FCNN [59] & DMPHN [58] & MTRNN [35] & CPCR [15] & DWDN [14] & Ours \\ \hline \multicolumn{13}{c}{Results with GT blur kernels} \\ \hline 1\% & PSNR & 20.78 & 23.76 & 25.02 & 22.05 & 23.55 & 24.55 & 22.34 & 19.77 & 20.79 & 24.82 & **25.44** \\ noise & SSIM & 0.7554 & 0.7833 & 0.8351 & 0.7760 & 0.7889 & 0.8310 & 0.7156 & 0.6432 & 0.7519 & 0.8362 & **0.8435** \\ \hline 2\% & PSNR & 20.55 & 23.58 & 24.31 & 21.91 & 23.04 & 24.37 & 22.26 & 19.70 & 20.55 & 24.55 & **25.01** \\ noise & SSIM & 0.7137 & 0.7516 & 0.8087 & 0.7564 & 0.6948 & 0.8003 & 0.7056 & 0.6225 & 0.7022 & 0.8103 & **0.8146** \\ \hline \multicolumn{13}{c}{Results with blur kernels from [20]} \\ \hline 1\% & PSNR & 20.59 & 23.52 & 24.09 & 21.91 & 23.43 & 23.17 & 22.34 & 19.77 & 20.65 & 24.47 & **24.98** \\ noise & SSIM & 0.7477 & 0.7766 & 0.8188 & 0.7646 & 0.7914 & 0.8082 & 0.7156 & 0.6432 & 0.7493 & 0.8198 & **0.8250** \\ \hline 2\% & PSNR & 20.42 & 23.41 & 23.62 & 21.80 & 23.04 & 22.84 & 22.26 & 19.70 & 20.48 & 24.19 & **24.66** \\ noise & SSIM & 0.7181 & 0.7530 & 0.7949 & 0.7490 & 0.7131 & 0.7889 & 0.7056 & 0.6225 & 0.7171 & 0.7967 & **0.8006** \\ \hline \hline \end{tabular} \end{table} TABLE VII: Evaluations on the proposed testing set with different levels of noise. Fig. 10: Real examples recovered by our model trained with samples using different synthesizing strategies. (a) is the blurry input. (b) and (c) are outputs of our model trained with samples synthesized by [9] and ours, respectively. Fig. 9: Performances of the proposed method using different iteration steps. ### _Robustness to Noise_ Night blurry images may also contain noise. To evaluate our method on images with noise, we first train our method on images with random Gaussian noise in a range of 0 to 3%. Then, we verify the robustness of our method w.r.t. different levels of noise by adding 1% and 2% noise to the blurry images in the testing dataset. Besides the optimization-based robust methods [10, 20, 33, 13, 52], we also compare our model with several leading arts that are reported to be robust to noise, including FCNN [59], DMPHN [58], MTRNN [35], CPCR [15], and DWDN [14], where [59, 58, 14] are finetuned using our training samples. Results from different methods are shown in Table VII. We note that the proposed method can generate competitive results among the compared methods, which also validates the robustness of our method to different levels of noise. ### _Training Sample Synthesizing Strategy Comparison_ Our training sample synthesizing process is inspired by the strategy in [20], and it is different from that in [9]. Chen et al.
suggest synthesizing blurry saturated images by enlarging the whole image with an enlarging factor before the convolving step. Although their strategy can synthesize enough saturated pixels, enlarging the whole image makes the dynamic range of the training samples smaller. Considering that saturated images often have a high dynamic range, we suggest only enlarging pixels above given threshold values, thus retaining or even increasing the dynamic range of the training samples. To compare these two data synthesizing strategies, we train our model on samples synthesized with each of them respectively. The test data are uniform-blurred saturated images selected from the test set of the real-world dataset [42], and the blur kernels are obtained from [20]. Evaluation results in Table VIII demonstrate the effectiveness of our sample synthesizing strategy. A real-world deblurring example in Fig. 10 also shows that our synthesizing strategy can better handle real-world saturated blurry images. ### _Evaluations with Different Kernel Estimation Methods_ As an important part of the overall image deblurring pipeline, kernel estimation methods can also affect the quality of the restored images. To evaluate whether our method works well when using different estimated kernels, we use two different kernel estimation methods [20, 8] on the low-light blurry images (JPEG form) from the RealBlur dataset [42]. The evaluated results are listed in Table IX. Results from the compared methods are directly cited from the official website of the RealBlur dataset 3. We observe that our method can obtain favorable performance among the state-of-the-arts with different blur kernel estimation methods. These results indicate that our method is robust to blur kernels to some extent. Footnote 3: http://cg.postech.ac.kr/research/realblur/ ## 8 Conclusion In this paper, we propose a simple yet effective method for non-blind deblurring when saturated pixels are present in the blurry image. To explicitly handle saturated pixels, we modify the widely-adopted linear blur model by introducing a learnable latent map, which is estimated by a network. Based on this blur model, we formulate the deblurring task as a MAP problem and solve it by iteratively updating the latent image and the latent map. Specifically, we develop a map estimation network (MEN) to directly estimate the latent map based on the current estimation of the latent image, and we obtain the revised latent image by conducting a Richardson-Lucy (RL)-based optimization scheme. In addition, we develop an effective prior estimation network (PEN) to learn an image prior as the constraint of the latent image so that high-quality deblurring results can be better obtained under this learned prior. Both quantitative and qualitative evaluations on synthetic datasets and real-world images demonstrate the effectiveness of the proposed method. ## Acknowledgement F. Fang was supported by the National Key R&D Program of China (2022ZD0161800), the NSFC-RGC (61961160734), and the Shanghai Rising-Star Program (21QA1402500). J. Pan was supported by the National Natural Science Foundation of China (Nos. U22B2049). ## Appendix We present the detailed derivations for solving Eq. (13) in this appendix. By reformulating Eq.
(13) into a vectorized form, we can obtain: \[\min_{\mathbf{I}}\mathbf{M}^{\mathsf{T}}\mathbf{K}\mathbf{I}-\mathbf{B}^{\mathsf{T}}\log(\mathrm{diag}(\mathbf{M})\mathbf{K}\mathbf{I})+\lambda\overline{\mathbf{I}}^{\mathsf{T}}P(\mathbf{I}), \tag{17}\] where \(\mathbf{M}\), \(\mathbf{B}\), and \(\mathbf{I}\) denote the vectorized forms of \(M\), \(B\) and \(I\); \(\mathbf{K}\) is the Toeplitz matrix of \(K\) w.r.t. \(I\); \(\overline{\mathbf{I}}\) denotes a vector whose elements are all ones, and \(P(\cdot)\) is applied element-wise. For the second term of Eq. (17), we denote it as \(\mathbf{A}\) and its derivative w.r.t. \(\mathbf{I}\) is: \[\begin{split}\frac{\partial\mathbf{A}}{\partial\mathbf{I}}&=\frac{\partial\,\mathrm{diag}(\mathbf{M})\mathbf{K}\mathbf{I}}{\partial\mathbf{I}}\ \frac{\partial\log(\mathrm{diag}(\mathbf{M})\mathbf{K}\mathbf{I})}{\partial\,\mathrm{diag}(\mathbf{M})\mathbf{K}\mathbf{I}}\ \frac{\partial\mathbf{B}^{\mathsf{T}}\log(\mathrm{diag}(\mathbf{M})\mathbf{K}\mathbf{I})}{\partial\log(\mathrm{diag}(\mathbf{M})\mathbf{K}\mathbf{I})},\\ &=(\mathbf{K}^{\mathsf{T}}\mathrm{diag}(\mathbf{M}))\,\mathrm{diag}\Big(\frac{1}{\mathrm{diag}(\mathbf{M})\mathbf{K}\mathbf{I}}\Big)\mathbf{B},\\ &=\mathbf{K}^{\mathsf{T}}\mathrm{diag}\Big(\frac{1}{\mathrm{diag}(\mathbf{M})\mathbf{K}\mathbf{I}}\Big)(\mathrm{diag}(\mathbf{M})\mathbf{B}),\end{split} \tag{18}\] where the divide operation is element-wise. Then we can solve Eq. (17) by setting its derivative to zero as: \[\mathbf{K}^{\mathsf{T}}\mathbf{M}-\mathbf{K}^{\mathsf{T}}\mathrm{diag}\Big(\frac{1}{\mathrm{diag}(\mathbf{M})\mathbf{K}\mathbf{I}}\Big)(\mathrm{diag}(\mathbf{M})\mathbf{B})+\lambda P^{\prime}(\mathbf{I})=0. \tag{19}\] Reformulating the above equation into its matrix form, we have: \[M\otimes\widetilde{K}-\frac{M\circ B}{M\circ(I\otimes K)}\otimes\widetilde{K}+\lambda P^{\prime}(I)=0, \tag{20}\] where \(\widetilde{K}\) is the transpose of \(K\) that flips \(K\) upside-down and left-to-right, and \(P^{\prime}(I)\) is the first-order derivative of \(P(I)\) w.r.t. \(I\). Recall that the sum of the kernel equals 1, i.e. \(\overline{\mathbf{I}}^{\mathsf{T}}\widetilde{\mathbf{K}}=1\), where \(\widetilde{\mathbf{K}}\) is the vectorized form of \(\widetilde{K}\). Thus, we further have, \[M\otimes\widetilde{K}-\frac{M\circ B}{M\circ(I\otimes K)}\otimes\widetilde{K}+\lambda P^{\prime}(I)+\mathbf{1}-\mathbf{1}\otimes\widetilde{K}=0, \tag{21}\] \[(\frac{B}{I\otimes K}-M+\mathbf{1})\otimes\widetilde{K}=\mathbf{1}+\lambda P^{\prime}(I), \tag{22}\] \[I\circ((\frac{B}{I\otimes K}-M+\mathbf{1})\otimes\widetilde{K})=I\circ(\mathbf{1}+\lambda P^{\prime}(I)), \tag{23}\] where \(\mathbf{1}\) is an all-one image. In order to solve Eq. (23), we use the fixed point iteration scheme and rewrite it as: \[\frac{I^{t+1}}{I^{t}}=\mathbf{1}=\frac{(\frac{B}{I^{t}\otimes K}-M+\mathbf{1})\otimes\widetilde{K}}{\mathbf{1}+\lambda P^{\prime}(I^{t})}. \tag{24}\] Thus, we finally obtain Eq. (14) in our manuscript: \[I^{t+1}=\frac{I^{t}\circ((\frac{B}{I^{t}\otimes K}-M+\mathbf{1})\otimes\widetilde{K})}{\mathbf{1}+\lambda P^{\prime}(I^{t})}. \tag{25}\] \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & Hu [20] & Xu [56] & Pan [34] & DMPHN [58] & Nah [31] & DeblurGAN [24] & SRN [49] & DeblurGAN-V2 [25] & Ours + [20] & Ours + [8] \\ \hline PSNR & 26.41 & 27.14 & 27.22 & 27.80 & 27.87 & 27.97 & 28.56 & 28.70 & 28.13 & 28.44 \\ SSIM & 0.8028 & 0.8303 & 0.7901 & 0.8472 & 0.8274 & 0.8343 & 0.8674 & 0.8662 & 0.8497 & 0.8538 \\ \hline \hline \end{tabular} \end{table} TABLE IX: Evaluations of our model with different kernel estimation methods on the samples from [42].
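Putting the derivation together, one iteration of Eq. (25) can be sketched in PyTorch as follows; `men` and `pen` stand for the trained MEN and PEN, treated here as given callables whose input format (concatenated \(I^{t}\) and \(I^{t}\otimes K\) for MEN) follows Sec. 5.1, and `eps` is an added guard against division by zero. This is a sketch of ours, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def rl_step(I, B, K, men, pen, eps=1e-8):
    """One update of Eq. (25): I <- I * (((B / (I (x) K)) - M + 1) (x) K_tilde) / (1 + lam * P'(I))."""
    pad = K.shape[-1] // 2
    conv = F.conv2d(I, K, padding=pad)            # I (x) K
    K_tilde = torch.flip(K, dims=[-2, -1])        # \widetilde{K}: K flipped upside-down and left-to-right
    M = men(torch.cat([I, conv], dim=1))          # latent map from MEN, values in [0, 1]
    ratio = B / (conv + eps) - M + 1.0
    numerator = I * F.conv2d(ratio, K_tilde, padding=pad)
    return numerator / (1.0 + pen(I) + eps)       # PEN outputs lam * P'(I)
```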
2301.09672
Pre/post-merger consistency test for gravitational signals from binary neutron star mergers
Gravitational waves from binary neutron star (BNS) mergers can constrain nuclear matter models predicting the neutron star's equation of state (EOS). Matter effects on the inspiral-merger signal are encoded in the multipolar tidal polarizability parameters, whose leading order combination is sufficient to capture to high accuracy the key features of the merger waveform (e.g.~the merger frequency). Similar EOS-insensitive relations exist for the post-merger signal and can be used to model the emission from the remnant. Several works suggested that the appearance of new degrees of freedom or phase transitions in high-density post-merger matter can be inferred by observing a violation of these EOS-insensitive relations. Here, we demonstrate a Bayesian method to test such an EOS-insensitive relation between the tidal polarizability parameters (or any other equivalent parameter) and the dominant post-merger frequency, using information either up to merger or from the post-merger signal. Technically, the method is similar to tests of General Relativity with binary black holes that verify the inspiral-merger-ringdown consistency. However, differently from the latter, BNS pre/post-merger consistency tests are conceptually less informative and they only address the consistency (or the breaking) of the assumed EOS-insensitive relation. Specifically, we discuss how such tests cannot conclusively discriminate between an EOS not respecting such relation and the appearance of new degrees of freedom (or phase transitions) in high-density matter.
Matteo Breschi, Gregorio Carullo, Sebastiano Bernuzzi
2023-01-23T19:12:04Z
http://arxiv.org/abs/2301.09672v1
# Pre/post-merger consistency test for gravitational signals from binary neutron star mergers ###### Abstract Gravitational waves from binary neutron star (BNS) mergers can constrain nuclear matter models predicting the neutron star's equation of state (EOS). Matter effects on the inspiral-merger signal are encoded in the multipolar tidal polarizability parameters, whose leading order combination is sufficient to capture to high accuracy the key features of the merger waveform (e.g. the merger frequency). Similar EOS-insensitive relations exist for the post-merger signal and can be used to model the emission from the remnant. Several works suggested that the appearance of new degrees of freedom or phase transitions in high-density post-merger matter can be inferred by observing a violation of these EOS-insensitive relations. Here, we demonstrate a Bayesian method to test such an EOS-insensitive relation between the tidal polarizability parameters (or any other equivalent parameter) and the dominant post-merger frequency, using information either up to merger or from the post-merger signal. Technically, the method is similar to tests of General Relativity with binary black holes that verify the inspiral-merger-ringdown consistency. However, differently from the latter, BNS pre/post-merger consistency tests are conceptually less informative and they only address the consistency (or the breaking) of the assumed EOS-insensitive relation. Specifically, we discuss how such tests cannot conclusively discriminate between an EOS not respecting such relation and the appearance of new degrees of freedom (or phase transitions) in high-density matter. ## I Introduction Kilohertz gravitational waves (GWs) from binary neutron star (BNS) merger remnants are considered a promising probe of the nuclear equation of state (EOS) at extreme densities. While no such detection was possible for GW170817 [1; 2; 3], future experiments are expected to reach the necessary sensitivity for a detection, e.g. [4; 5; 6]. Several authors claimed that a viable path to constrain the extreme-density EOS is to "observe" specific features (e.g. frequencies) in the post-merger spectra and employ EOS-insensitive relations (or quasi-universal relations, QUR) to unveil EOS properties (e.g. phase transitions), e.g. [7; 8; 9; 10; 11; 12]. Only a few authors have, however, considered the actual observational and data analysis problem, namely, the problem of how to incorporate these speculative ideas into a rigorous Bayesian data analysis framework [8; 12]. This paper discusses one possible concrete method in this direction and some related conceptual limitations in the realization of this program. New degrees of freedom or phase transitions can impact the BNS remnant dynamics at densities \(\rho\gtrsim 2\,\rho_{\rm sat}\), where \(\rho_{\rm sat}\simeq 2.7\times 10^{14}\,{\rm g\,cm^{-3}}\) is the nuclear saturation density, and leave signatures in the observable GWs. Case studies simulated BNSs with matter models including hyperon production [e.g. 13; 14; 7] or zero-temperature models of phase transitions to quark-deconfined matter [e.g. 7; 10; 15; 16]. In these examples, an EOS softening with respect to the "baseline" hadronic EOS can produce a more compact remnant that either undergoes an earlier gravitational collapse or shifts the post-merger GW peak frequency \(f_{2}\) towards higher values. The former case is particularly relevant for binary masses above the prompt collapse threshold for the softened EOS, but below that threshold for the hadronic EOS.
This implies that one of the two EOS models could be ruled out simply by the observation of a post-merger signal. The latter case might instead be probed, in a suitable mass range, by observing a violation (breakdown) of the QUR that relates \(f_{2}\) to properties of the individual neutron stars (NSs) in the binary, e.g. [7; 8; 11; 17]. It is worth remarking that the detectability of these effects crucially depends on the densities at which the EOS softening takes place. Significant effects have been simulated by constructing rather "extreme" transitions. EOS-insensitive relations are heavily used in GW astronomy with BNSs in order to either reduce the matter's degrees of freedom in waveform modeling or connect spectral features to the NS equilibria and the mass-radius diagram [e.g. 18; 19; 20; 21]. Our work focuses on the relation between the dominant quadrupolar spectral peak of the post-merger signal, \(f_{2}\), and the (leading order) tidal coupling constant \(\kappa_{2}^{\rm T}\) of the binary [8; 22]. This QUR allowed us to construct a unified full-spectrum model by combining an inspiral-merger (IM) tidal waveform with a post-merger completion [23; 12; 17; 24; 8]. Such a relation represents a natural (and representative) choice for a pre/post-merger (PPM) consistency test. To date, the employment of QURs is also the only method used in rigorous Bayesian studies, e.g. [6; 8; 12; 24], to connect the binary properties to the post-merger features. Inferring a QUR breakdown can be naturally treated as a PPM consistency test for a given QUR, similarly to analyses of binary black hole (BBH) mergers in the context of tests of General Relativity [25; 26; 27]. We naturally apply such a well-established framework to the analysis of BNS transients and demonstrate how to infer a QUR breakdown using Bayesian analyses of the full BNS spectrum. The paper is structured as follows. In Sec. II, we introduce the method used to detect departures from quasi-universality. In Sec. III, we validate our method performing parameter estimation (PE) on mock GW data. Finally, we conclude in Sec. IV highlighting conceptual issues in the interpretation of the analysis in real GW observations. ## II Methods QUR breaking occurs when the quasi-universal prediction does not match the corresponding observed property. For the case of the post-merger peak \(f_{2}(\kappa_{2}^{\mathrm{T}})\), the QUR is established as a function of the binary properties, which can be well estimated from pre-merger GWs. However, the post-merger signal directly provides a measurement of the \(f_{2}\) frequency. Thus, in order to identify the QUR breaking, we compare the post-merger observations to the pre-merger predictions estimated with QURs. Following the approach of Ref. [25], we introduce a consistency test that aims to reveal such breaking employing full-spectrum observations of BNSs. Given the GW data and a waveform template, the posterior distributions of the BNS parameters are calculated via Bayesian PE analysis [see, e.g. 28; 29; 30]. For our studies, we make use of the time-domain effective-one-body (EOB) model TEOBResumS [31] extended with the NRPM template in the high-frequency post-merger regime [8]. In order to speed up the computations, the EOB template makes use of a reduced-order approximation [32]. The considered post-merger model incorporates QURs calibrated on NR data, used to predict the template features, and it includes a characterization of the main peaks of the post-merger spectrum.
Closely following [8], we perform three PE analyses: first, we analyze the inspiral-merger data only (labeled as 'IM') with TEOBResumS; then, the post-merger data only (labeled as 'PM') is studied with NRPM; and, finally, we perform PE on the full-spectrum data (labeled as 'IMPM') with the complete model TEOBResumS_NRPM. As discussed in Ref. [25], PPM consistency tests rely on a cutoff frequency \(f_{\mathrm{cut}}\) used to split the low-frequency and high-frequency regimes. In general, the time-domain post-merger signal will also include frequency contributions below the merger frequency \(f_{\mathrm{mrg}}\), due to the low quality factor of the QNMs dominating the remnant BH response. However, for systems dominated by the quadrupolar mode, this "mixing" is typically negligible, and the portion of the signal with \(f<f_{\mathrm{mrg}}\) only suffers from small contaminations from the time-domain post-merger phase. For this reason, choosing \(f_{\mathrm{cut}}=f_{\mathrm{mrg}}\) is a sensible choice. The "mixing" becomes more significant for lower remnant spins (induced e.g. by a non-spinning, high mass-ratio binary). We stress that even in this case the consistency test remains valid, although the physical interpretation of the results becomes less immediate, since a good fraction of a deviation in the \(f<f_{\mathrm{mrg}}\) region could be induced by the time-domain post-merger signal. For BNS signals, the post-merger signal can lead to significant spectral contamination below \(f_{\mathrm{cut}}\) and the split is less trivial. However, if the dominant post-merger frequencies are significantly larger than the merger frequency \(f_{\mathrm{mrg}}\), or if the post-merger signal-to-noise-ratio (SNR) contribution below the cutoff is negligible, one can still choose \(f_{\mathrm{cut}}=f_{\mathrm{mrg}}\). This is the choice made in this work, assuming the cutoff frequency to be known exactly. In a realistic scenario, the cutoff frequency can be estimated from the full-spectrum posterior using EOS-insensitive relations for the merger frequency of the quadrupolar mode [23; 33; 8]. If the splitting frequency \(f_{\mathrm{cut}}\) cannot be uniquely fixed (e.g. due to spectral contamination below this threshold), the 'IM' and 'PM' models might be treated separately in single analyses, either in a direct time-domain analysis [34; 35], or by augmenting the standard frequency-domain likelihood using "gating" techniques [36; 37; 35]. However, both of these methods are expected to significantly increase the computational cost, compounding the already long computational times inherent in inspiral BNS analyses. The 'IM' inference provides direct information on the progenitors' properties (i.e. masses, spins, tidal polarizabilities, ...). From these parameters, it is possible to estimate a prediction for the \(f_{2}\) posterior using the QUR in Eq. 13 of [8]. Also the 'PM' inference provides information on the progenitors' properties through the internally employed QURs. Moreover, in this case, the \(f_{2}\) posterior can be directly estimated from the reconstructed waveform. Finally, the 'IMPM' case naturally delivers information on the progenitors' properties, and it allows us to estimate the \(f_{2}\) posterior from the reconstructed waveform. Then, following the approach of Ref.
[25], we introduce the (fractional) deviations from the QUR as \[\frac{\Delta f_{2}}{f_{2}}=\frac{f_{2}^{\mathrm{PM}}-f_{2}^{\mathrm{IM}}}{f_{2}^{\mathrm{IMPM}}}\,,\quad\frac{\Delta\kappa_{2}^{\mathrm{T}}}{\kappa_{2}^{\mathrm{T}}}=\frac{\kappa_{2}^{\mathrm{T,PM}}-\kappa_{2}^{\mathrm{T,IM}}}{\kappa_{2}^{\mathrm{T,IMPM}}}\,. \tag{1}\] We remark that \(f_{2}^{\mathrm{IM}}\) is computed from the inspiral data using the QUR in post-processing, while the \(f_{2}^{\mathrm{PM}}\) and \(f_{2}^{\mathrm{IMPM}}\) estimates directly include the PM data. The computation of \(\mathrm{p}(\Delta f_{2}/f_{2},\Delta\kappa_{2}^{\mathrm{T}}/\kappa_{2}^{\mathrm{T}})\) is performed with a probabilistic approach. Given the posteriors \(\{f_{2},\kappa_{2}^{\mathrm{T}}\}_{i}\) for \(i=\mathrm{IM},\mathrm{PM},\mathrm{IMPM}\), the posteriors of \(\Delta f_{2}\) and \(\Delta\kappa_{2}^{\mathrm{T}}\) are estimated as \[\mathrm{p}(\Delta f_{2},\Delta\kappa_{2}^{\mathrm{T}}|\mathbf{d}_{\mathrm{IM}},\mathbf{d}_{\mathrm{PM}})=\iint\mathrm{p}(f_{2},\kappa_{2}^{\mathrm{T}}|\mathbf{d}_{\mathrm{PM}})\,\mathrm{p}(\kappa_{2}^{\mathrm{T}}-\Delta\kappa_{2}^{\mathrm{T}},f_{2}-\Delta f_{2}|\mathbf{d}_{\mathrm{IM}})\,\mathrm{d}f_{2}\,\mathrm{d}\kappa_{2}^{\mathrm{T}}\,. \tag{2}\] Eq. (2) is the convolution product between the IM and the PM posteriors. Then, labeling \(\varepsilon_{f_{2}}=\Delta f_{2}/f_{2}\) and \(\varepsilon_{\kappa_{2}^{\mathrm{T}}}=\Delta\kappa_{2}^{\mathrm{T}}/\kappa_{2}^{\mathrm{T}}\), the posterior for the quantities in Eq. (1) can be computed from the recovered posterior as \[\mathrm{p}(\varepsilon_{f_{2}},\varepsilon_{\kappa_{2}^{\mathrm{T}}})=\iint\kappa_{2}^{\mathrm{T}}\,f_{2}\,\mathrm{p}(\varepsilon_{f_{2}}\cdot f_{2},\varepsilon_{\kappa_{2}^{\mathrm{T}}}\cdot\kappa_{2}^{\mathrm{T}}|\mathbf{d}_{\mathrm{IM}},\mathbf{d}_{\mathrm{PM}})\,\mathrm{p}(f_{2},\kappa_{2}^{\mathrm{T}}|\mathbf{d}_{\mathrm{IMPM}})\,\mathrm{d}f_{2}\,\mathrm{d}\kappa_{2}^{\mathrm{T}}\,. \tag{3}\] As discussed in Ref. [25], \(\mathrm{p}(f_{2},\kappa_{2}^{\mathrm{T}}|\mathbf{d}_{\mathrm{IMPM}})\) represents our best guess for the \(\{f_{2},\kappa_{2}^{\mathrm{T}}\}\) posterior, and it is used in Eq. (3) to weight the contributions of the inspiral-merger and post-merger inferences; \(\mathrm{p}(\Delta f_{2},\Delta\kappa_{2}^{\mathrm{T}}|\mathbf{d}_{\mathrm{IM}},\mathbf{d}_{\mathrm{PM}})\) encodes the agreement/disagreement between the pre-merger and post-merger inferences. Within this approach, the origin of the axes, i.e. \(\Delta f_{2}=0\) and \(\Delta\kappa_{2}^{\mathrm{T}}=0\), represents the null hypothesis for which no deviation from quasi-universality is observed. On the other hand, a departure of the posterior from the null hypothesis can indicate the breakdown of the \(f_{2}(\kappa_{2}^{\mathrm{T}})\) QUR. Following the EOS terminology, we label as a _softening_ effect a deviation towards the region with \(\Delta f_{2}/f_{2}>0\) and \(\Delta\kappa_{2}^{\mathrm{T}}/\kappa_{2}^{\mathrm{T}}<0\), in order to differentiate it from a _stiffening_ effect, which shows \(\Delta f_{2}/f_{2}<0\) and \(\Delta\kappa_{2}^{\mathrm{T}}/\kappa_{2}^{\mathrm{T}}>0\).
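In practice, the integrals in Eqs. (2)-(3) can be approximated by Monte Carlo resampling of the posterior samples: drawing independently from the IM and PM posteriors realizes the convolution in Eq. (2), while drawing the denominators from the IMPM posterior accounts for the weighting (and the Jacobian \(\kappa_{2}^{\mathrm{T}}f_{2}\)) in Eq. (3). The sketch below is ours, not part of the described pipeline, and assumes aligned joint posterior-sample arrays for each analysis.

```python
import numpy as np

def deviation_posterior(f2_im, k2_im, f2_pm, k2_pm, f2_impm, k2_impm,
                        n_draws=100_000, seed=0):
    """Draw samples of (Delta f2 / f2, Delta k2T / k2T) following Eqs. (1)-(3)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(f2_im), n_draws)    # joint draws from the IM posterior
    j = rng.integers(0, len(f2_pm), n_draws)    # joint draws from the PM posterior (Eq. (2) convolution)
    k = rng.integers(0, len(f2_impm), n_draws)  # joint draws from the IMPM posterior (Eq. (3) weighting)
    eps_f2 = (f2_pm[j] - f2_im[i]) / f2_impm[k]
    eps_k2 = (k2_pm[j] - k2_im[i]) / k2_impm[k]
    return eps_f2, eps_k2
```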
## III Results

We demonstrate the possibility of investigating the QUR breaking using PE analyses of mock GW data. We discuss the specific case of the BHB\(\Lambda\phi\) and DD2 EOS simulated in [14]. The BHB\(\Lambda\phi\) EOS is identical to DD2 except that at densities \(\rho\gtrsim 2.5\rho_{\mathrm{sat}}\) it softens due to the formation of \(\Lambda\)-hyperons. Inspiral-merger GW signals from (equal-mass) binaries described by the two EOS with \(M\lesssim 2.8\) M\({}_{\odot}\) are indistinguishable, since the individual progenitor NSs have maximal densities \(\rho\lesssim 2.5\rho_{\mathrm{sat}}\) and similar compactnesses and tidal parameters, as shown in Figure 1 (left). On the other hand, for \(M\gtrsim 2.8\) M\({}_{\odot}\) the post-merger remnants reach higher densities at which the two EOS differ, leading to different post-merger GWs, as shown in Figure 1 (right). We consider a pair of high-mass binaries with \(M=3\) M\({}_{\odot}\), no spins, and equal component masses extracted from the CoRe database [38; 39]. The individual progenitors of the high-mass BNS have \(\rho\approx 2.35\rho_{\mathrm{sat}}\), while the associated remnant reaches \(\rho\approx 2.8\rho_{\mathrm{sat}}\), and the presence of \(\Lambda\)-hyperons significantly affects the post-merger dynamics. The DD2 1.50+1.50 M\({}_{\odot}\) binary has \(f_{2}\simeq 2.76\) kHz, and the respective BHB\(\Lambda\phi\) remnant has \(f_{2}\simeq 3.29\) kHz. The difference between the two NR values is \(\sim\)500 Hz, which corresponds to \(\sim\)20%. The BHB\(\Lambda\phi\) data deviates by \(\sim 3\sigma\) from the prediction of the QUR presented in Ref. [8] and employed in NRPM (\(f_{2}^{\rm fit}=2.88\) kHz), corresponding to a more compact remnant than the DD2 case. The two binaries also have different times of black-hole collapse: the DD2 remnant collapses at late times, i.e. \(\sim\)21 ms after merger, while the BHB\(\Lambda\phi\) remnant collapses within 2.6 ms of merger. Moreover, we repeat the analysis on the low-mass BHB\(\Lambda\phi\) binary with \(M=2.5\) M\({}_{\odot}\), whose morphology is almost identical to the corresponding DD2 case even in the post-merger phase. The corresponding waveforms are shown in Figure 1 (right).

Figure 1: Comparison between the BHB\(\Lambda\phi\) (red) and the DD2 (blue) EOS and the corresponding BNS templates [14]. _Top panel_: Mass of individual NSs as a function of the central density. The markers refer to simulated BNSs. _Bottom panel_: Plus polarization \(h_{+}(t)\) of the NR waveforms for the simulated BNSs with mass \(M=2.5\) M\({}_{\odot}\) (top) and \(M=3\) M\({}_{\odot}\) (bottom). The binaries are located at a fiducial distance of 40 Mpc. The origin of the time axis \(t=0\) corresponds to the moment of merger.

The data are generated as EOB-NR hybrid waveforms injected in zero noise, while the recovery is performed using TEOBResumS_NRPM. We analyze 128 s of data with a lower frequency \(f_{\mathrm{low}}=20\) Hz (or \(f_{\mathrm{low}}=f_{\mathrm{mrg}}\) in the post-merger-only case) and a sampling rate of 8192 Hz, injecting the signal with post-merger SNR 11 (total SNR \(\sim\)200) and using the three-detector LIGO-Virgo network at design sensitivity [40; 41]. The priors on the parameters are taken consistently with Refs. [28; 29], with spin parameters fixed to zero. The PE studies are performed with the nested sampling routines implemented in LALInference [28; 29; 42]. Footnote 2: The analysis settings are identical to Ref. [8]. There, the reader can also find a detailed discussion of the posteriors. Figure 2 shows the posteriors estimated for the three considered binaries. The grey band indicates the uncertainty of the QUR, and \(\Delta f_{2}/f_{2}\) posteriors falling in this band are considered to be consistent with the assumed QUR. The low-mass BHB\(\Lambda\phi\) case confidently includes the null hypothesis within the 90% confidence level of the posterior.
The \(\Delta f_{2}/f_{2}\) posterior for the high-mass DD2 case is fully consistent with the QUR uncertainties, indicating no significant deviation. The mild deviation of \(\Delta\kappa_{2}^{\rm T}/\kappa_{2}^{\rm T}=0.5^{+0.3}_{-0.3}\) toward the stiffening portion of the plane is due to the finite faithfulness of NRPM against the full NR simulation considered, and is expected to be cured by improved models [17; 23]. The salient point to be extracted from the figure is that the high-mass BHB\(\Lambda\phi\) case shows a significant deviation toward the softening portion of the plane, with \(\Delta\kappa_{2}^{\rm T}/\kappa_{2}^{\rm T}=-0.2^{+0.5}_{-0.2}\) and \(\Delta f_{2}/f_{2}=0.2^{+0.2}_{-0.1}\). This deviation in the frequency is significantly above the fit uncertainty and demonstrates a successful detection of the QUR breaking, invalidating the applicability of the QUR \(f_{2}(\kappa_{2}^{\rm T})\) to the considered binary.

## IV Conclusions

Our results demonstrate a quantitative Bayesian method to invalidate a given QUR using full-spectrum BNS observations. The observation of an inconsistency in a PPM analysis of this type might help to _exclude_ (some of) the EOS employed for the design of the QUR. Although in the specific case considered this inconsistency was indeed caused by the appearance of hyperons at high densities (a "phase transition"), we stress that demonstrating the breakdown of a QUR within a given confidence level does not necessarily imply the measurement of an EOS softening effect. Since the true EOS is not known, but the inference requires a model (the QUR) designed using an EOS sample, it is only possible to invalidate the model (hypothesis) using the proposed null test. For example, this consistency test might simply exclude a QUR which is "not sufficiently" EOS-insensitive or which is poorly designed. Ref. [23] discusses the specific case of \(f_{2}(R_{1.4})\), where \(R_{1.4}\) is the radius of an equilibrium NS of mass \(1.4M_{\odot}\). According to currently available data and EOS models, the \(f_{2}(R_{1.4})\) QUR might be easily broken by an observation at the minimal post-merger SNR for detection. However, if one considers a similar QUR with the same quantities but rescaled by the binary mass, the QUR significantly improves its EOS-insensitive character. We stress that, according to current theoretical models and constraints, demonstrating the breaking of a (well-designed) QUR requires significant fine-tuning of both the EOS model and the binary masses, cf. [10; 7; 12; 14]. The presented method is not restricted to the particular QUR considered here. A similar analysis may be performed, for example, on the inferred collapse time [17], considering the consistency of multiple parameters/QURs involved in the GW template, or using other QURs [e.g. 7; 11]. However, the \(f_{2}(\kappa_{2}^{\rm T})\) QUR is particularly interesting because (i) it is directly involved in the construction of the GW template, and (ii) it is rather accurate and shows deviations at a few percent level, although being built from the largest sample of EOS and simulations explored so far in numerical relativity.

Figure 2: Posterior for the deviation from quasi-universality defined in Eq. (1) for the characteristic post-merger frequency \(f_{2}\) and tidal coupling \(\kappa_{2}^{\rm T}\). The contours report the 50% and the 90% credibility regions. Red lines refer to the low-mass BHB\(\Lambda\phi\) binary, blue lines refer to the high-mass DD2 binary and red lines refer to the high-mass BHB\(\Lambda\phi\) binary. The red area denotes deviations due to softening effects, while the blue area identifies stiffening effects. The grey band reports the 90% credibility region of the \(f_{2}\) EOS-insensitive relation.
Improved analyses can be obtained by folding in recalibration parameters to better account for the uncertainties of the QUR, as shown in Refs. [6; 17; 23]. BNS post-merger signals are likely to be accessible with next-generation ground-based GW interferometers for events comparable to (or louder than) GW170817 [e.g. 43; 44; 17]. In order to gain information on nuclear matter from these observations, it seems necessary to significantly extend current theoretical EOS models and simulations and to explore such predictions within Bayesian analysis frameworks.

###### Acknowledgements.

MB and SB acknowledge support from the European Union's H2020 ERC Starting Grant, no. BinGraSp-714626. MB acknowledges support from the Deutsche Forschungsgemeinschaft (DFG) under Grant no. 406116891 within the Research Training Group (RTG) 2522/1. MB acknowledges support from the European Union's H2020 ERC Consolidator Grant "GRavity from Astrophysical to Microscopic Scales" (Grant no. GRAMS 815673) and the EU Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 101007855. GC acknowledges support by the Della Riccia Foundation under an Early Career Scientist Fellowship. GC acknowledges funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 847523 'INTERACTIONS', from the Villum Investigator program supported by VILLUM FONDEN (grant no. 37766) and the DNRF Chair, by the Danish Research Foundation. SB acknowledges support from the DFG project MEMI no. BE 6301/2-1. The computational experiments were performed on the Tullio server at INFN Turin. The waveform model employed in this work, TEOBResumS_NRPM, is implemented in bajes and the software is publicly available at: [https://github.com/matteobreschi/bajes](https://github.com/matteobreschi/bajes)
2305.15974
Demonstration of the excited-state search on the D-wave quantum annealer
Quantum annealing is a way to prepare an eigenstate of the problem Hamiltonian. Starting from an eigenstate of a trivial Hamiltonian, we slowly change the Hamiltonian to the problem Hamiltonian, and the system remains in the eigenstate of the Hamiltonian as long as the so-called adiabatic condition is satisfied. By using devices provided by D-Wave Systems Inc., there have been experimental demonstrations of preparing a ground state of the problem Hamiltonian. However, to date, there are no demonstrations of preparing the excited state of the problem Hamiltonian with quantum annealing. Here, we demonstrate the excited-state search by using the D-wave processor. The key idea is to use reverse quantum annealing with a hot start, where the initial state is the excited state of the trivial Hamiltonian. During the reverse quantum annealing, we control not only the transverse field but also the longitudinal field and slowly change the Hamiltonian to the problem Hamiltonian so that we can obtain the desired excited state. As an example of the excited-state search, we adopt a two-qubit Ising model as the problem Hamiltonian and succeed in preparing the excited state. Also, we solve the shortest vector problem, where the solution is embedded into the first excited state of the Ising Hamiltonian. Our results pave the way for new applications of quantum annealers that use excited states.
Takashi Imoto, Yuki Susa, Ryoji Miyazaki, Tadashi Kadowaki, Yuichiro Matsuzaki
2023-05-25T12:12:11Z
http://arxiv.org/abs/2305.15974v1
# Demonstration of the excited-state search on the D-wave quantum annealer

###### Abstract

Quantum annealing is a way to prepare an eigenstate of the problem Hamiltonian. Starting from an eigenstate of a trivial Hamiltonian, we slowly change the Hamiltonian to the problem Hamiltonian, and the system remains in the eigenstate of the Hamiltonian as long as the so-called adiabatic condition is satisfied. By using devices provided by D-Wave Systems Inc., there have been experimental demonstrations of preparing a ground state of the problem Hamiltonian. However, to date, there are no demonstrations of preparing the excited state of the problem Hamiltonian with quantum annealing. Here, we demonstrate the excited-state search by using the D-wave processor. The key idea is to use reverse quantum annealing with a hot start, where the initial state is the excited state of the trivial Hamiltonian. During the reverse quantum annealing, we control not only the transverse field but also the longitudinal field and slowly change the Hamiltonian to the problem Hamiltonian so that we can obtain the desired excited state. As an example of the excited-state search, we adopt a two-qubit Ising model as the problem Hamiltonian and succeed in preparing the excited state. Also, we solve the shortest vector problem, where the solution is embedded into the first excited state of the Ising Hamiltonian. Our results pave the way for new applications of quantum annealers that use excited states.

## I Introduction

Quantum annealing (QA) is a way to prepare a quantum system in an eigenstate of a non-trivial Hamiltonian, which is called the problem Hamiltonian [1; 2; 3]. The initial Hamiltonian, called the driving Hamiltonian, is chosen to be a trivial one, and we set the initial state as an eigenstate of that Hamiltonian. By slowly changing the Hamiltonian to the problem Hamiltonian, we can prepare the eigenstate of the problem Hamiltonian as long as the so-called adiabatic condition is satisfied. One of the main applications of QA is to solve combinatorial optimization problems [1; 2; 3]. The solution of a combinatorial optimization problem is embedded into a ground state of an Ising Hamiltonian [4; 5], and QA provides such a ground state. D-Wave Systems Inc. developed a quantum annealer to solve such combinatorial optimization problems [6]. The D-Wave quantum annealing machine has been used in a variety of applications such as quantum simulation, machine learning, and attacks on cryptography [7; 8; 9; 10; 11; 12]. Also, a method to emulate QA using noisy intermediate-scale quantum (NISQ) devices was proposed and demonstrated [13; 14]. Quantum annealing can be used for quantum chemistry [15]. We can map the Hamiltonian of molecules into the Ising Hamiltonian, and the ground state of the Hamiltonian provides information about molecules, such as the prediction of chemical reactions. Also, there was an experimental demonstration of using the quantum annealer for the ground-state search in quantum chemistry [16]. Furthermore, there is a theoretical proposal to prepare the excited state of the problem Hamiltonian by using the quantum annealer [17]. In this previous method, it is necessary to resolve the degeneracy of the driving Hamiltonian by applying inhomogeneous transverse fields. An initial state is prepared in the desired excited state of the driving Hamiltonian, and the Hamiltonian slowly changes into the problem Hamiltonian.
As long as the adiabatic condition is satisfied and the energy relaxation is negligible, we can obtain the desired excited state after the time evolution. The main difficulty is to prepare the excited states of the driving Hamiltonian, which is chosen as the transverse magnetic fields in the previous method. It is known that we can use the excited-state search to solve the shortest vector problem, which is a basis of post-quantum cryptography [12; 18]. Furthermore, the excited-state search has numerous applications in quantum chemistry [19], quantum simulations [20; 21; 14], and machine learning [22]. However, so far, there has been no experimental demonstration of the excited-state search. In this paper, we demonstrate the preparation of the excited state of the problem Hamiltonian by using the D-wave processor (see FIG. 1 (a)). We modify the previous method to be applicable to the current D-wave machine. We start from a simple Hamiltonian with a large longitudinal magnetic field and no transverse magnetic fields, where the first excited state can be easily inferred. We prepare the first excited state of the simple Hamiltonian and perform reverse quantum annealing (RQA). We control the longitudinal magnetic field during RQA, and we slowly change the Hamiltonian to the problem Hamiltonian. To show the effectiveness of our method, we adopt a two-qubit Ising model with an inhomogeneous longitudinal magnetic field as the problem Hamiltonian and prepare the first excited state of this Hamiltonian on the D-wave processor. Also, we apply our method to solve the shortest vector problem (SVP), which is a basis for post-quantum cryptographic protocols. Here, the solution of the SVP is embedded into the first excited state of the Ising Hamiltonian, and we prepare the first excited state of such an Ising Hamiltonian by using the D-wave processor.

## II Method

Our scheme consists of the following three steps (see FIG. 1 (a)). First, we start with a simple Hamiltonian where we apply a large control longitudinal magnetic field to the problem Hamiltonian. In this case, we can easily calculate the first excited state (see Appendix A), which is the initial state of our method. Second, during the time from \(0\) to \(t_{1}\), we increase the transverse field from \(0\) to \(1-h_{d}\) while we decrease the amplitude of the problem Hamiltonian. Third, we fix the amplitude of the transverse field and turn off the control longitudinal magnetic field during the time from \(t_{1}\) to \(t_{1}+t_{2}\). Finally, during the time from \(t_{1}+t_{2}\) to \(t_{1}+t_{2}+t_{3}\), we adiabatically turn off the transverse magnetic field while we increase the amplitude of the problem Hamiltonian. The Hamiltonian is given as follows: \[H(k) =A(k)H_{D}+B(k)\bigg{(}g(k)H_{L}+H_{P}\bigg{)} \tag{1}\] \[H_{D} =-\Gamma\sum_{j=1}^{N}\sigma_{j}^{(x)}\] (2) \[H_{L} =\sum_{j=1}^{N}h_{j}\sigma_{j}^{(z)}. \tag{3}\] where \(H_{D}\) denotes the driving Hamiltonian, \(H_{P}\) denotes the problem Hamiltonian, \(H_{L}\) denotes the Hamiltonian of the control longitudinal magnetic field, \(\Gamma\) denotes the amplitude of the transverse field, and \(h_{j}\) denotes the amplitude of the control longitudinal field.

Figure 1: (a) We illustrate the schematic diagram of the excited-state search using QA. First, we consider a Hamiltonian with a large longitudinal field where we can easily specify the excited state. We prepare the excited state in this Hamiltonian and change the Hamiltonian to the problem Hamiltonian with transverse fields. After that, we change the Hamiltonian to the problem Hamiltonian without transverse fields. (b) We illustrate the scheduling functions of the transverse field, the longitudinal field, and the problem Hamiltonian. First, we decrease the problem Hamiltonian while we increase the transverse magnetic fields for a time period of \(0\leq t\leq t_{1}\). Second, we gradually turn off the longitudinal magnetic fields for a time period of \(t_{1}\leq t\leq t_{1}+t_{2}\). Finally, we gradually turn off the transverse field for a time period of \(t_{1}+t_{2}\leq t\leq t_{1}+t_{2}+t_{3}\).
Also, \(A(k)\), \(B(k)\), and \(g(k)\) are scheduling functions defined as follows: \[A(k) =\begin{cases}\frac{1-h_{d}}{t_{1}}k&(0\leq k\leq t_{1})\\ 1-h_{d}&(t_{1}\leq k\leq t_{1}+t_{2})\\ -\frac{1-h_{d}}{t_{3}}k+(1-h_{d})\frac{t_{1}+t_{2}+t_{3}}{t_{3}}\\ (t_{1}+t_{2}\leq k\leq t_{1}+t_{2}+t_{3})\end{cases} \tag{4}\] \[B(k) =1-A(k)\] (5) \[g(k) =\begin{cases}C_{L}&(0\leq k\leq t_{1})\\ -\frac{C_{L}}{t_{2}}(k-t_{1})+C_{L}&(t_{1}\leq k\leq t_{1}+t_{2})\\ 0&(t_{1}+t_{2}\leq k\leq t_{1}+t_{2}+t_{3})\end{cases} \tag{6}\] where \(1-h_{d}\) denotes the maximum amplitude of the transverse magnetic field in our algorithm and \(C_{L}\) denotes the initial amplitude of the longitudinal magnetic field. The scheduling functions are illustrated in FIG. 1 (b).
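As a concrete cross-check of the piecewise schedules in Eqs. (4)-(6), the following minimal Python sketch (ours, for illustration) evaluates \(A(k)\), \(B(k)\), and \(g(k)\) with the parameters used in the next section; the printed values reproduce the three stages of FIG. 1 (b):

```python
import numpy as np

# Minimal sketch of the schedules in Eqs. (4)-(6), with the parameters of
# the two-qubit experiment (t1 = 2, t2 = 20, t3 = 2, C_L = 4).
t1, t2, t3 = 2.0, 20.0, 2.0
C_L, h_d = 4.0, 0.7   # h_d = 0.7 is the best-performing value reported below

def A(k):
    """Transverse-field schedule, Eq. (4): ramp 0 -> 1-h_d, plateau, ramp to 0."""
    if k <= t1:
        return (1 - h_d) * k / t1
    if k <= t1 + t2:
        return 1 - h_d
    return -(1 - h_d) * k / t3 + (1 - h_d) * (t1 + t2 + t3) / t3

def B(k):
    """Problem-Hamiltonian schedule, Eq. (5)."""
    return 1 - A(k)

def g(k):
    """Control longitudinal-field schedule, Eq. (6): C_L -> 0 on [t1, t1+t2]."""
    if k <= t1:
        return C_L
    if k <= t1 + t2:
        return -C_L * (k - t1) / t2 + C_L
    return 0.0

for k in np.linspace(0, t1 + t2 + t3, 9):
    print(f"k={k:5.1f}  A={A(k):.3f}  B={B(k):.3f}  g={g(k):.3f}")
```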
## III Excited-state search of the two-qubit Ising model

To demonstrate our method on the D-wave processor, we adopt a two-qubit Ising model as the problem Hamiltonian while we adopt an inhomogeneous longitudinal magnetic field as the control Hamiltonian. In this case, \(H_{L}\) and \(H_{P}\) are given by \[H_{L} =2\hat{\sigma}_{1}^{z}-\hat{\sigma}_{2}^{z} \tag{7}\] \[H_{P} =-J\hat{\sigma}_{1}^{z}\hat{\sigma}_{2}^{z}. \tag{8}\] Here, we set \(J=1\), \(C_{L}=4\), \(t_{1}=2\), \(t_{2}=20\), and \(t_{3}=2\). Also, the initial state is \(\ket{\uparrow\uparrow}\). In FIG. 2 (a), we plot the energy diagram against \(g(k)\); the final states of a successful excited-state search are \(\ket{\uparrow\downarrow}\) and \(\ket{\downarrow\uparrow}\). Let us discuss how the value of \(h_{d}\) affects the performance of our method. For \(h_{d}\simeq 1\), only a weak transverse magnetic field is applied during the RQA, and the system remains in the initial state. We call this phenomenon "freezing". On the other hand, for \(h_{d}\simeq 0\), we apply a strong transverse magnetic field, and the energy relaxation time becomes small [23], which makes it difficult for the system to remain in the excited state. For these reasons, it is important for our method to optimize the value of \(h_{d}\). FIG. 2 (b) shows that we successfully obtain the target excited state for \(h_{d}\simeq 0.7\). On the other hand, for \(h_{d}>0.8\), freezing occurs, and the dominant state after performing our protocol is \(\ket{\uparrow\uparrow}\), which is the same as the initial state. For \(h_{d}<0.6\), energy relaxation is relevant, and so the dominant states after performing our protocol are \(\ket{\uparrow\uparrow}\) and \(\ket{\downarrow\downarrow}\), which are the ground states of the problem Hamiltonian. In FIG. 2 (c-1) and (c-2), we illustrate that the success probability depends not only on \(h_{d}\) but also on \(t_{2}\). As we decrease \(h_{d}\), the optimized \(t_{2}\) decreases. We can understand this as follows. For a smaller \(h_{d}\), we apply more transverse magnetic field, and so we can suppress the freezing while the energy relaxation becomes larger. In this regime of small \(h_{d}\), for the excited-state search, we need to decrease \(t_{2}\) so that the system can remain in the excited state. For comparison, we also plot the population of the target excited state obtained using the non-adiabatic transitions of conventional QA. We compare the success probability of this conventional method with that of our method. For the conventional QA, the annealing Hamiltonian is described as follows: \[H(t)=\bigg{(}1-\frac{t}{T}\bigg{)}H_{D}+\frac{t}{T}C_{s}H_{P} \tag{9}\] where \(T\) denotes the annealing time and \(C_{s}\) denotes the amplitude of the problem Hamiltonian. We prepare the ground state of \(H_{D}\) at \(t=0\) and let the state evolve under the Hamiltonian. In the conventional QA, if non-adiabatic transitions occur, we could obtain a finite population of the excited state by optimizing \(T\) and \(C_{s}\). From FIG. 2, we confirm that, as we decrease \(T\), the success probability of obtaining the desired excited state becomes larger. This is because the non-adiabatic transitions become more relevant for a shorter \(T\). However, the success probability with optimized parameters in the conventional QA is much smaller than that of our method. This shows that, for the excited-state search, our method is superior to the conventional method.

## IV Solving the shortest-vector problem by the excited-state search

We apply the excited-state search to solving the SVP. This problem plays a central role in lattice-based cryptography, which is a potential candidate for post-quantum cryptography. It is known that the excited-state search is useful to solve this problem [18]. The SVP is the problem of finding the shortest non-zero vector in a given lattice, where the number of basis vectors is \(N\). Among several ways to transform the SVP into an Ising Hamiltonian, we adopt the Hamming encoding. We set \(N=2\), and we use 4 qubits to describe the problem Hamiltonian. We show the details of how to map the SVP to the Ising Hamiltonian in Appendix C. The problem Hamiltonian and the control longitudinal field are given by \[H_{L} =\frac{1}{2}h_{1}\hat{\sigma}_{1}^{z}+\frac{1}{2}h_{2}\hat{\sigma}_{2}^{z}+\frac{1}{2}h_{3}\hat{\sigma}_{3}^{z}+\frac{1}{2}h_{4}\hat{\sigma}_{4}^{z} \tag{10}\] \[H_{P} =\frac{1}{2}G_{11}\hat{\sigma}_{1}^{z}\hat{\sigma}_{2}^{z}+\frac{1}{2}G_{22}\hat{\sigma}_{3}^{z}\hat{\sigma}_{4}^{z}+\frac{1}{2}G_{12}\hat{\sigma}_{1}^{z}\hat{\sigma}_{3}^{z}+\frac{1}{2}G_{12}\hat{\sigma}_{1}^{z}\hat{\sigma}_{4}^{z}+\frac{1}{2}G_{12}\hat{\sigma}_{2}^{z}\hat{\sigma}_{3}^{z}+\frac{1}{2}G_{12}\hat{\sigma}_{2}^{z}\hat{\sigma}_{4}^{z} \tag{11}\] where \(G_{ij}\equiv\vec{b}_{i}\cdot\vec{b}_{j}\) denotes the Gram matrix and \(\{\vec{b}_{j}\}_{j=1}^{2}\) denotes a set of linearly independent vectors determined by the problem. We set \(|\vec{b}_{1}|=1.0\) and \(|\vec{b}_{2}|=1.3\). Also, the angle between \(\vec{b}_{1}\) and \(\vec{b}_{2}\) is set to be \(\theta=\pi/10\). Furthermore, we set \(h_{1}=4\), \(h_{2}=4\), \(h_{3}=1\), \(h_{4}=2\), and \(C_{L}=4\). It is worth mentioning that the ground states and the first excited states of the problem Hamiltonian are degenerate. So, in order to solve the SVP, we set the initial state as \(\ket{\downarrow\uparrow\downarrow\downarrow}\), which is the second excited state of the initial Hamiltonian (see Appendix C). We plot the results in FIG. 3 (a), (b-1), and (b-2).
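Because the encoding uses only four qubits, Eqs. (10)-(11) can be verified classically by exhaustive enumeration. The following minimal sketch (our cross-check, not part of the experiment) builds the Gram matrix from the stated \(\vec{b}_{1}\), \(\vec{b}_{2}\), and \(\theta\), evaluates \(H_{P}\) on all 16 spin configurations, and decodes each level back to a lattice vector via the encoding \(x_{i}=(s_{2i-1}+s_{2i})/2\in\{-1,0,1\}\) (our reconstruction, consistent with the form of Eq. (11)):

```python
import itertools
import numpy as np

# Hypothetical classical cross-check of Eqs. (10)-(11) (not in the paper).
b1 = np.array([1.0, 0.0])
theta = np.pi / 10
b2 = 1.3 * np.array([np.cos(theta), np.sin(theta)])
B = np.array([b1, b2])
G = B @ B.T                        # Gram matrix G_ij = b_i . b_j

def H_P(s):
    """Problem Hamiltonian of Eq. (11) for spins s = (s1, s2, s3, s4)."""
    s1, s2, s3, s4 = s
    return (0.5 * G[0, 0] * s1 * s2 + 0.5 * G[1, 1] * s3 * s4
            + 0.5 * G[0, 1] * (s1 * s3 + s1 * s4 + s2 * s3 + s2 * s4))

levels = {}
for s in itertools.product([1, -1], repeat=4):
    levels.setdefault(round(H_P(s), 10), []).append(s)

for energy in sorted(levels)[:2]:  # ground level and first excited level
    for s in levels[energy]:
        x = np.array([(s[0] + s[1]) / 2, (s[2] + s[3]) / 2])  # encoding
        v = x @ B                  # lattice vector x1*b1 + x2*b2
        print(f"E={energy:+.3f}  s={s}  x={x}  |v|^2={v @ v:.3f}")
```

Up to the constant offset \(-(G_{11}+G_{22})/2\), each level's energy equals the squared length of the encoded lattice vector, so the printout shows the zero vector in the fourfold-degenerate ground level and the shortest non-zero vector \(\pm(\vec{b}_{1}-\vec{b}_{2})\), i.e. the states \(\ket{\uparrow\uparrow\downarrow\downarrow}\) and \(\ket{\downarrow\downarrow\uparrow\uparrow}\) discussed below, in the doubly degenerate first excited level.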
FIG. 3 (a) shows that the population of \(\ket{\uparrow\uparrow\downarrow\downarrow}\), which is one of the degenerate first excited states of the problem Hamiltonian, is dominant around \(h_{d}=0.6\). On the other hand, the population of the other first excited state \(\ket{\downarrow\downarrow\uparrow\uparrow}\) is much smaller. The reason for this is that the first excited state which is close to the initial state is preferentially obtained. In regions where \(h_{d}\) is more than 0.8, the population of the initial state \(\ket{\downarrow\uparrow\downarrow\downarrow}\) is dominant. This is due to freezing. In the region where \(h_{d}\) is less than 0.5, the population of the ground states is dominant due to the energy relaxation. FIG. 3 (b-1) and (b-2) illustrate how the success probability depends on \(h_{d}\) and \(t_{2}\). Similar to the two-qubit case in Section III, the optimized \(t_{2}\) becomes larger as we increase \(h_{d}\). FIG. 3 (c-1) and (c-2) illustrate the success probability of the SVP with the conventional method. In this case, we use Eq. (9) as the annealing Hamiltonian. The success probability of the conventional method, which was originally developed for the ground-state search, is much smaller than that of our method. This is because the solution is encoded in the excited state for the SVP.

Figure 2: (a) The energy diagram against \(g(k)\), which determines the ratio between the problem Hamiltonian and the Hamiltonian with a longitudinal magnetic field. If the excited-state search succeeds, we should obtain \(\ket{\uparrow\downarrow}\) or \(\ket{\downarrow\uparrow}\) as the final state, while the initial state is \(\ket{\uparrow\uparrow}\), an excited state of the initial Hamiltonian. (b) The plot of the population of the computational basis states against the strength of the transverse field. Here, we set \(t_{1}=2\), \(t_{2}=20\), and \(t_{3}=2\), and the shot number is \(100000\). Between \(0.8\leq h_{d}\leq 1.0\), the population of \(\ket{\uparrow\uparrow}\), which is the same as the initial state, is dominant. This comes from the fact that, for weaker transverse magnetic fields, the freezing effect becomes stronger. On the other hand, between \(0.0\leq h_{d}\leq 0.6\), the populations of \(\ket{\uparrow\uparrow}\) and \(\ket{\downarrow\downarrow}\), the ground states of the problem Hamiltonian, are dominant. This is because the system relaxes into the ground states due to the strong decoherence. We find that, for \(h_{d}\simeq 0.7\), we can maximize the success probability, which is the sum of the populations of \(\ket{\uparrow\downarrow}\) and \(\ket{\downarrow\uparrow}\). (c-1), (c-2) The success probability of our method against \(h_{d}\) and \(t_{2}\) when the shot number is \(100000\). As we decrease \(h_{d}\), the optimized \(t_{2}\) becomes smaller. (d-1), (d-2) The success probability of the conventional QA for the excited-state search against \(h_{d}\) and \(t_{2}\) with a shot number of \(100000\). The success probability of the conventional method is much lower than that of ours.

## V Conclusion

In conclusion, we proposed and demonstrated a method to prepare an excited state of the problem Hamiltonian by using a D-wave processor. We use reverse quantum annealing with a hot start. More specifically, we start from the first excited state of a trivial Ising Hamiltonian and slowly change the Hamiltonian to the problem Hamiltonian. During the reverse quantum annealing, we control both the transverse field and the longitudinal field.
We adopt a two-qubit Ising model as the problem Hamiltonian, and we succeed in preparing the excited state. Moreover, by using our method, we solve the shortest vector problem, where the solution is embedded into the first excited state of the Ising Hamiltonian. Our results pave the way for new applications of quantum annealers that use excited states. This paper is partly based on results obtained from a project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO), Japan. This work was supported by JST Moonshot R&D (Grant Number JPMJMS226C). YM is supported by JSPS KAKENHI (Grant Number 23H04390).
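The protocol can also be phrased in terms of D-Wave's publicly documented reverse-annealing interface. The following sketch is a hypothetical mapping of the two-qubit experiment onto the Ocean SDK, not the authors' code: the parameters `anneal_schedule`, `initial_state`, `reinitialize_state`, and `h_gain_schedule` are Ocean's, the hardware schedule is idealized as linear, and a time-dependent \(h\) gain is assumed to be supported by the target solver.

```python
# Hypothetical mapping of the RQA protocol onto D-Wave's public
# reverse-annealing interface (Ocean SDK); a sketch, not the authors' code.
from dwave.system import DWaveSampler

qpu = DWaveSampler()
q1, q2 = qpu.edgelist[0]            # any directly coupled pair of qubits

h = {q1: 2.0, q2: -1.0}             # H_L of Eq. (7); rescaled in time below
J = {(q1, q2): -1.0}                # H_P of Eq. (8) with J = 1
t1, t2, t3, h_d, C_L = 2, 20, 2, 0.7, 4.0   # times in microseconds

# s = 1 is the pure problem Hamiltonian; dipping to s = h_d plays the role
# of the transverse-field amplitude 1 - h_d in Eq. (4).
anneal_schedule = [[0, 1.0], [t1, h_d], [t1 + t2, h_d], [t1 + t2 + t3, 1.0]]
# g(k) of Eq. (6): hold the control longitudinal field at C_L, then ramp off.
h_gain_schedule = [[0, C_L], [t1, C_L], [t1 + t2, 0.0], [t1 + t2 + t3, 0.0]]

sampleset = qpu.sample_ising(
    h, J,
    anneal_schedule=anneal_schedule,
    h_gain_schedule=h_gain_schedule,   # assumes the solver supports it
    initial_state={q1: 1, q2: 1},      # |up,up> (sign convention assumed)
    reinitialize_state=True,
    num_reads=1000,
)
print(sampleset.aggregate())           # success: spins (1, -1) or (-1, 1)
```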
2310.00671
Tension between HST/JWST and $Λ$CDM Cosmology, PBH, and Antimatter in the Galaxy
Recent data released by the James Webb Space Telescope (JWST) and, somewhat earlier, the data presented by the Hubble Space Telescope (HST) are commonly understood as a strong indication of the breaking of the canonical $\Lambda$CDM cosmology. It is argued in the presented work that massive primordial black holes (PBH) could seed galaxy and quasar formation in the very young universe, as conjectured in our paper of 1993, and resolve the tension induced by the JWST and HST data with the standard cosmology. This point of view is presently supported by several recent works. The proposed mechanism of PBH formation leads to the log-normal mass spectrum of PBHs and predicts an abundant antimatter population of our Galaxy, the Milky Way. Both these predictions are in excellent agreement with astronomical observations.
A. D. Dolgov
2023-10-01T13:48:04Z
http://arxiv.org/abs/2310.00671v1
# Tension between HST/JWST and \(\Lambda\)CDM Cosmology, PBH, and Antimatter in the Galaxy

###### Abstract

Recent data released by the James Webb Space Telescope (JWST) and, somewhat earlier, the data presented by the Hubble Space Telescope (HST) are commonly understood as a strong indication of the breaking of the canonical \(\Lambda\)CDM cosmology. It is argued in the presented work that massive primordial black holes (PBH) could seed galaxy and quasar formation in the very young universe, as conjectured in our paper of 1993, and resolve the tension induced by the JWST and HST data with the standard cosmology. This point of view is presently supported by several recent works. The proposed mechanism of PBH formation leads to the log-normal mass spectrum of PBHs and predicts an abundant antimatter population of our Galaxy, the Milky Way. Both these predictions are in excellent agreement with astronomical observations.

## I Introduction

During the last decade, observations made by the Hubble Space Telescope (HST), see e.g. [1; 2; 3], and very recently by the James Webb Space Telescope (JWST) [4; 5; 6; 7], have led to the surprising conclusion that the early universe, younger than one billion years, is densely populated by well developed galaxies, quasars (supermassive black holes), gamma-bursters, and heavy elements (heavier than helium). These striking results were taken by the community as absolutely incompatible with the canonical \(\Lambda\)CDM cosmology, especially after the release of the JWST data. In fact, already the observations of HST could be a sufficient cause for anxiety, not only with respect to the early universe but also to the contemporary, very old universe, almost 15 billion years old. The troubling situation in the present-day universe, as well as in the universe at redshifts \(z=6-10\), is summarised in review [8]. The state of the art is emphatically characterised as a crisis in cosmology that is believed to deal a strong blow to the conventional \(\Lambda\)CDM picture. However, a resolution of the above-mentioned problems was suggested in our papers [9] (DS) and [10] (DKK), long before these problems arose. In these works a new mechanism of massive primordial black hole (PBH) formation was worked out that could lead to their efficient creation with masses in the range from a fraction of the solar mass up to billions of solar masses. An essential input of the DS and DKK papers is the suggestion of an inverted formation mechanism of galaxies and their central black holes. Usually it is assumed that supermassive BHs (SMBHs), which are observed in the centres of all large galaxies, are created by matter accretion onto a density excess in the galactic centre, but the estimated necessary time is much longer than the universe age, even for the contemporary universe, with an age of about 15 billion years, to say nothing about the 20 times younger universe at \(z\sim 10\). On the contrary, as conjectured in refs. [9; 10], supermassive black holes were created first, in the early universe at the prestellar epoch (that is why they are called primordial), and later they SEEDED galaxy formation. The DS/DKK model is verified by the very good agreement of the calculated log-normal mass spectrum of PBH with observations and by the discovery of an abundant antimatter population in the Galaxy, envisaged according to DS and DKK. The model also predicts the early formation of galaxies seeded by PBH, quasars (alias SMBH), rich chemistry (heavy elements), and dust in the early universe.
## II A few words about HST and JWST observations

The orbit of HST is at a distance of 570 km from the Earth. The orbit of JWST is much larger, about \(1.5\times 10^{6}\) km. The mirror of HST has a diameter equal to 2.4 m, while JWST has a 2.7 times larger one, and correspondingly the area of the JWST mirror is approximately 7.4 times larger. In fig. 1 the images of HST and JWST are presented. HST operates in the optical wavelength range, for example 450 nm, corresponding to blue light. It also has the possibility to catch the signal in the infrared range with wavelengths of 0.8-2.5 microns. JWST has high sensitivity to infrared radiation with wavelengths of 0.6-28.5 microns. It allows one to penetrate deep into the early universe, up to redshifts \(z\sim 15\). Accidentally, HST and JWST observed the same galaxy at \(z=12\), see fig. 2. This coincidence is a strong argument in favour of the reliable operation of these two very different instruments. Comparison of the JWST data and the theoretical expectation of the \(\Lambda\)CDM cosmology is depicted in fig. 3. Theoretical expectations (coloured dots at \(z=15\)) are noticeably below observations.

## III Spectral measurements and puzzles of the early universe

Only the continuum in the micron range was measured by JWST till February. That raised justified doubts on the accuracy of the redshift determination of the observed galaxies. Now numerous observations of spectra of different elements excellently confirm the early data. For example, according to ref. [11], the JWST NIRCam 9-band near-infrared imaging of the luminous \(z=10.6\) galaxy GN-z11 from the JWST Advanced Deep Extragalactic Survey (JADES) proved that the spectral energy distribution (SED) is entirely consistent with the expected form of the high-redshift galaxy. In a simultaneous work [12] the spectroscopy of GN-z11, the most luminous candidate \(z>10\) Lyman break galaxy, is presented. The nitrogen lines are clearly observed. Quoting the authors: "The spectroscopy confirms that GN-z11 is a remarkable galaxy with extreme properties seen 430 Myr after the Big Bang." Another example of spectral measurements by a different instrument: the age of the most distant galaxy is confirmed with an oxygen observation. The radio telescope array ALMA (Atacama Large Millimeter Array) has pin-pointed the exact cosmic age of a distant JWST-identified galaxy, GHZ2/GLASS-z12, at 367 million years after the Big Bang [13]. The observations of a spectral emission line emitted by ionized oxygen near the galaxy, red-shifted according to its age in the early universe, confirm the JWST data. These data show that the JWST is able to look out to record distances and, quoting the authors, **herald a leap in our ability to understand the formation of the earliest galaxies in the Universe.** A population of red candidate massive galaxies (stellar mass \(>10^{10}\) solar masses) at \(7.4\lesssim z\lesssim 9.1\), 500-700 Myr after the Big Bang, including one galaxy with a possible stellar mass of \(\sim 10^{11}M_{\odot}\), too massive to be created in such an early universe, is observed in [14]. The authors conclude that according to the 'science' it is impossible to create such well developed galaxies. "Maybe they are supermassive **black holes of the kind never seen before.** That might mean a revision of the usual understanding of black holes." Clearly these "black holes of the kind never seen before" nicely fit the assertion that they are primordial, as suggested in refs. [9; 10].
A recent observation by ALMA [15] of an extremely massive reionization-era galaxy with \(M_{*}=1.7\times 10^{11}M_{\odot}\) at z = 6.853, with an active galactic nucleus (AGN) of huge luminosity, suggests that this object is powered by a \(\sim 1.6\times 10^{9}M_{\odot}\) black hole if accreting close to the Eddington limit. It is nearly impossible to create such a massive BH in the early universe, but a supermassive primordial black hole could easily feed such a monster.

## IV Rich chemistry in the early universe

According to the standard lore, light elements, deuterium, helium, and tiny traces of lithium, are created from the primordial protons and neutrons in the early universe roughly during the first 100 seconds. This process is called big bang nucleosynthesis (BBN). Heavier elements, the so-called metals (everything heavier than helium-4 is called a metal in astrophysics), are created through stellar nucleosynthesis. Next, supernova explosions populate the interstellar medium with metals. Unexpectedly high abundances of heavy elements (high metallicity) are observed in the very early universe by HST and JWST. For example, as reported in ref. [16], a study of the strongly lensed galaxy SPT0418-47 revealed its mature metallicity, i.e. amounts of elements heavier than helium and hydrogen, such as carbon, oxygen, and nitrogen. According to the estimate of the team, the amount is comparable to that of the Sun, which is more than 4 billion years old and inherited most of its metals from previous generations of stars that had 8 billion years to build them up. Analysis using optical strong line diagnostics suggests that the galaxy SPT0418-SE has near-solar elemental abundance, while the ring appears to have supersolar metallicity O/H and N/O. One more example of well developed chemistry, which would demand too long an evolution if produced by the conventional mechanism, is presented in ref. [17]. Observations of GN-z11 with JWST/NIRSpec disclosed numerous oxygen, carbon, nitrogen, and helium emission lines at \(z=10.6\). The data prefer N/O greater than 4 times solar, and the derived C/O is about 30% of solar. Nitrogen enhancement in GN-z11 cannot be explained by enrichment from metal-free Population III stars. The suggested explanation is that yields from runaway stellar collisions in a dense stellar cluster or a tidal disruption event provide promising solutions giving rise to these unusual emission lines at \(z=10.6\) and explaining the resemblance between GN-z11 and a nitrogen-loud quasar. High abundances of heavy elements may be a result of BBN with a baryon-to-photon ratio close to unity, as takes place in the DS [9] and DKK [10] model, see below.

## V Seeding of galaxy formation by PBH

### Seeding of early galaxies

The hypothesis pioneered by DS [9] and DKK [10], that galaxy formation is seeded by SMBH, allows one to understand the presence of SMBH in all large and several small galaxies accessible to observation. This mechanism explains how the galaxies observed by JWST in the very young universe might be created. Presently it is being rediscovered in several recent works. As is stated in ref. [18], the recent observations with JWST have identified several bright galaxy candidates at \(z\gtrsim 10\), some of which appear unusually massive (up to \(\sim 10^{11}\) M\({}_{\odot}\)). Such early formation of massive galaxies is difficult to reconcile with standard \(\Lambda\)CDM predictions, demanding very high star formation efficiency (SFE), possibly even in excess of the cosmic baryon mass budget in collapsed structures.
With an idealized analysis based on linear perturbation theory and the Press-Schechter formalism, the observed massive galaxy candidates can be explained, with lower SFE than required in \(\Lambda\)CDM, if structure formation is accelerated by massive (\(\gtrsim 10^{9}\) M\({}_{\odot}\)) PBHs that enhance primordial density fluctuations. Observations made by JWST (and HST) of high-redshift quasars reveal that many supermassive black holes were in place less than 700 million years after the Big Bang. In particular, in ref. [19] the detection of an X-ray-luminous quasar powered by an SMBH with mass \(\sim 4\times 10^{7}M_{\odot}\) in a gravitationally-lensed galaxy, identified by JWST at \(z\approx 10.3\), is reported. As is stated by the authors, this mass is comparable to the inferred stellar mass of its host galaxy, in contrast to the usual examples from the local universe, where mostly the BH mass is \(\sim 0.1\%\) of the host galaxy's stellar mass. The combination of such a high BH mass and a large BH-to-galaxy stellar mass ratio \(\sim 500\) Myr after the Big Bang is consistent with a picture wherein such BHs originated from heavy seeds. Let us stress again that this detection suggests that early supermassive black holes originate from **heavy seeds.** However, the origin of the first BHs, which started the seeding, remains a mystery. According to the authors, the seeds of the first BHs are postulated to be either light, i.e., \((10-100)M_{\odot}\) remnants of the first stars, or heavy, i.e., \((10^{4}-10^{5})M_{\odot}\), originating from the direct collapse of gas clouds. The latter hypothesis is questionable, but a supermassive primordial black hole would work perfectly. In a subsequent paper [20] support for the heavy seeding channel for the formation of supermassive BHs within the first billion years of cosmic evolution is also presented. As is mentioned in this work, "the James Webb Space Telescope is now detecting early black holes (BHs) as they originate from seeds to supermassive BHs. Recently Bogdan et al [19] reported the detection of an X-ray luminous supermassive black hole, UHZ-1, with a photometric redshift at \(z>10\). Such an extreme source at this very high redshift provides new insights on **seeding** and growth models for BHs given the short time available for formation and growth. The resulting ratio of \(M_{BH}/M^{*}\) remains two to three orders of magnitude higher than local values, thus lending support to the heavy seeding channel for the formation of supermassive BHs within the first billion years of cosmic evolution."

### Seeding of globular clusters and dwarf galaxies

The idea of the seeding of globular clusters and dwarf galaxies by primordial black holes was worked out in ref. [21]. Primordial IMBHs with masses of a few thousand solar masses can explain their formation, poorly understood otherwise. In the last several years such IMBHs inside globular clusters have been observed. Similar features hold for dwarfs. In particular, the seeding of dwarfs by intermediate-mass BHs is confirmed by the recent data. For instance, in the dwarf galaxy SDSS J1521+1404 a BH was discovered with mass \(M\sim 10^{5}M_{\odot}\)[22]. For the first time, astronomers have spotted evidence of a pair of dwarf galaxies featuring GIANT black holes on a collision course with each other. In fact, they haven't found just one pair - they've found two.
Another recent example [23] of intermediate-mass black holes is the finding of episodic, large-scale and powerful jet activity in the dwarf galaxy SDSS J090613.77+561015.2. This can be explained by an intermediate-mass black hole (IMBH) with a mass of \(M_{BH}=3.6^{+5.9}_{-2.3}\times 10^{5}M_{\odot}\). Such a huge black hole surely could not be created by accretion but, vice versa, might seed the formation of the dwarf.

## VI Peculiar stars in the Galaxy

### Ancient stars

A discovery of primordial stars in the globular cluster M92 [24] was recently announced. The absolute age of the globular cluster M92 was evaluated and found to be practically equal to the universe age, \(t_{M92}=13.8\pm 0.75\) Gyr. As stated in the paper, possibly these stars came to us from the JWST epoch or even from an earlier one. A similar announcement of pristine stars in the Galaxy was made almost at the same time [25]. An international team of researchers, the Pristine Inner Galaxy Survey (PIGS) team, has obtained the largest set of detailed observations yet of the oldest stars in the center of our Galaxy, the Milky Way. Some of the stars that were born in the first billion years after the Big Bang are still around today. In fact, extremely old stars in the Galaxy were discovered considerably earlier. As asserted in ref. [26], new, more accurate methods of determination of stellar ages led to the discovery of surprisingly old stars. Employing thorium and uranium abundances in comparison with each other and with several stable elements, the age of the metal-poor halo star BD+17°3248 was estimated as \(13.8\pm 4\) Gyr. For comparison, the age of the inner halo of the Galaxy is 11.4 \(\pm\) 0.7 Gyr [27]. The age of a star in the galactic halo, HE 1523-0901, was estimated to be about 13.2 Gyr. For the first time, many different chronometers, such as the U/Th, U/Ir, Th/Eu, and Th/Os ratios, were employed to measure the stellar age [28]. And now, the most surprising star, which is older than the Universe. The metal-deficient high-velocity subgiant HD 140283 in the solar neighborhood has an age of 14.46 \(\pm\) 0.31 Gyr [28]. The determined central value of the age exceeds the universe age by two standard deviations if the Hubble parameter is low, H = 67.3 (according to the CMB analysis) and \(t_{U}=13.8\) Gyr; if H = 74 (according to the traditional methods), \(t_{U}=12.5\) Gyr and the age of this star exceeds the universe age by more than 10 \(\sigma\). In our model [9; 10] not only primordial black holes could be formed but, if the bubbles with a high baryon-to-photon ratio are not sufficiently massive, compact star-like objects could be created. Such "stars" might look older than they are because they would be enriched with heavy elements, mimicking a larger age.

### Fast moving stars

Several stars are discovered in the Galaxy with unusually high velocities, much larger than the galactic virial velocity, which is about 200 km/sec. There are several very fast pulsars in the Galaxy, but their origin is evident. Pulsars are the results of supernova explosions, and a small angular asymmetry in the emitted radiation could create a strong kick, which would accelerate a pulsar up to \(10^{3}\) km/sec. The observed fast stars look normal, except for their very high velocity, about 500 km/sec. In ref. [30] a discovery of a low-mass white dwarf, LP 40-365, was reported, which travels at a velocity greater than the Galactic escape velocity and whose peculiar atmosphere is dominated by intermediate-mass elements.
According to the authors, these properties suggest that it could be the predicted leftover remains from a type Iax supernova. On the other hand, it can naturally be a primordial star with high initial abundances of heavy elements. Let us mention several more discoveries of other high-velocity stars in the Galaxy [31; 32]. The authors argue that these stars could be accelerated by a population of intermediate-mass black holes (IMBHs) in globular clusters, if there is a sufficient number of IMBHs. So many IMBHs were not expected, but the recent data reveal more and more of them, in contrast to conventional expectations and in agreement with refs. [9; 10]. As noted in ref. [33], observations of stellar remnants linked to Type Ia and Type Iax supernovae are necessary to fully understand their progenitors and explain the origin of their high speed. Multiple progenitor scenarios predict a population of kicked donor remnants and partially-burnt primary remnants, both moving with relatively high velocity. But only a handful of examples consistent with these two predicted populations have been observed. It is reported in ref. [33] that LP 93-21 is likely the first known example of an unbound white dwarf that is consistent with being the fully-cooled primary remnant of a Type Iax supernova. The candidate, LP 93-21, is travelling with a galactocentric velocity of \(v_{gal}\approx 605\) km/sec and is gravitationally unbound to the Milky Way. The authors claim to rule out its extragalactic origin. The Type Iax supernova ejection scenario is consistent with its peculiar unbound trajectory, given the observed anomalous elemental abundances. This discovery reflects recent models that suggest stellar ejections likely occur often. Let us repeat here that an extragalactic primordial star, presumably populating the galactic halo according to the assertion of papers [9; 10], fits very well the observations made in ref. [33].

### Stars with unusual chemistry

An unusually red star was observed in a planetary system through the microlensing event MOA-2011-BLG-291 [34]. The host star and planet masses are estimated as \(M_{host}=0.15^{+0.27}_{-0.10}M_{\odot}\) and \(m_{planet}=18^{+34}_{-12}M_{\oplus}\). The source star is redder (or brighter) than the bulge main sequence. The favoured interpretation by the authors is that the source star is a lower main sequence star at a distance of \(4.9\pm 1.3\) kpc in the Galactic disk. According to the authors, the lifetime of a main sequence star with solar chemical content is larger than the universe age already for \(M<0.8M_{\odot}\). It implies the primordial origin of the registered star with already evolved chemistry. Might it be a primordial helium star? There could be stars dominated by helium, even purely helium stars, in our scenario [9; 10].

## VII Pulsar humming

If a pulsar moves in any way, orbiting around a star, the relative motion of the pulsar causes the pulses to shift slightly. These shifts can be measured with extreme accuracy. The observations are so precise that pulsars were used to measure the orbital decay of binary systems, as indirect evidence of gravitational waves, long before they were observed directly. An unexpectedly high number of SMBH binaries is presumably observed through the distortion of the pulsar timing by the emission of gravitational waves [35]. The NANOGrav 15 yr data set shows evidence for the presence of a low-frequency gravitational-wave background.
While many physical processes can source such low-frequency gravitational waves, the most natural possibility seems to be that the signal comes from a population of supermassive black hole (SMBH) binaries distributed throughout the Universe [35]. It is difficult to explain such a huge number of SMBH binaries. However, this can be naturally expected if these SMBHs are primordial.

## VIII Possible types of black holes

### BH classification by mass

There is the following conventional division of black holes by their masses: 1. Supermassive black holes (SMBH): \(M=(10^{6}-10^{10})M_{\odot}\) (the record mass is about \(10^{11}M_{\odot}\)). 2. Intermediate mass black holes (IMBH): \(M=(10^{2}-10^{5})M_{\odot}\). 3. Solar mass black holes: masses from a fraction of \(M_{\odot}\) up to \(100M_{\odot}\). The origin of most of these BHs is unclear in the traditional approach, except maybe for the BHs with masses of a few solar masses, which might be astrophysical. Highly unexpected was the great abundance of IMBH, which have been copiously appearing in observations during the last few years. The assumption that (almost) all black holes in the universe are primordial strongly reduces or even eliminates the tension between the data and the expected numbers of black holes.

### BH classification by formation mechanism

**1. Astrophysical black holes,** created by the collapse of a star that exhausted its nuclear fuel. The expected masses should start immediately above the neutron star mass, i.e. about \(3M_{\odot}\), but remain noticeably below \(100M_{\odot}\). Instead, we observe that the BH mass spectrum in the Galaxy has a maximum at \(M\approx 8M_{\odot}\) with a rather narrow width. The result is somewhat unexpected, but explanations in the conventional astrophysical frameworks might be possible. Recently LIGO/Virgo discovered black holes with masses close to \(100M_{\odot}\). Their astrophysical origin was considered unfeasible due to the huge mass loss in the process of collapse. Now some, quite exotic, formation mechanisms are suggested. **2. BHs formed by accretion onto a mass excess in the galactic center.** In any large galaxy there exists a supermassive BH (SMBH) at the center, with masses varying from several millions of \(M_{\odot}\) (e.g. the Milky Way) up to almost a hundred billion \(M_{\odot}\). However, the conventional accretion mechanisms are not efficient enough to create such monsters during the universe lifetime, \(t_{U}\approx 14.6\) Gyr. At least a 10-fold longer time is necessary, to say nothing about SMBH in the 10 times younger universe. **3. Primordial black holes (PBHs), created during the pre-stellar epoch.** The idea of the primordial black hole (PBH), i.e. of a black hole formed in the early universe prior to star formation, was first put forward by Ya.B. Zeldovich and I.D. Novikov [36]. According to their arguments, if the density contrast in the early universe inside a bubble with radius equal to the cosmological horizon happened accidentally to be large, \(\delta\varrho/\varrho\approx 1\), then that piece of volume would be inside its gravitational radius, i.e. it would become a PBH, decoupled from the cosmological expansion. Subsequently this mechanism was elaborated by S. Hawking [37] and by B. Carr and S. Hawking [38].

## IX PBH and Inflation

In earlier works the predicted masses of PBH were quite low, more or less equal to the mass of the matter inside the cosmological horizon at the moment of PBH formation. Inflation allowed for the formation of PBH with very large masses.
It was first applied to PBH creation by Dolgov and Silk [9], a year later by Carr, Hilbert, and Lidsey [39], and soon after that by Ivanov, Naselsky, and Novikov [40]. Presently the inflationary mechanism of PBH production is commonly used. It allows one to create PBH with very high masses, but the predicted spectrum is a multi-parameter one and quite complicated. The only exception is the log-normal spectrum of refs. [9; 10], which is verified by observations in excellent agreement.

## X Black dark matter

The first suggestion that PBH might be dark matter "particles" was made by S. Hawking in 1971 [41] and later by G. Chapline in 1975 [42], who noticed that low mass PBHs might be abundant in the present-day universe and their energy density could be comparable to the energy density of dark matter. In the latter paper the scale-independent spectrum of cosmological perturbations was assumed, thus leading to the flat PBH mass spectrum in a logarithmic interval: \[dN=N_{0}(dM/M) \tag{10.1}\] with maximum mass \(M_{max}\lesssim 10^{22}\) g, which hits the allowed mass range. The next proposal of BH-dominated dark matter was made in ref. [9]; it was even contained in the title, "Baryon isocurvature fluctuations at small scales and **baryonic dark matter**," with much larger and more realistic black hole masses, close to \(10M_{\odot}\).

### Bounds on BH energy density

The constraints on the density of black holes were reviewed by Carr and Kuhnel [43], and the results are presented in Fig. 5 for a monochromatic mass spectrum of PBHs.

### Lifting the bounds on the black hole fraction in dark matter

At the beginning of this section it would be proper to quote Bernard Carr's words of 2019: "all limits are model dependent and have caveats." There are several papers where authors look for, and find, ways to eliminate or weaken the limits on the number density of black holes. In ref. [44] it is argued that primordial black holes in the mass range \((30-100)M_{\odot}\) could be the dark matter carriers, since they might escape microlensing and cosmic microwave background constraints. They are, however, subject to the constraints from the binary merger rate observed by the LIGO and Virgo experiments. The authors argue that in a realistic situation the masses of black holes in the expanding universe depend upon time, and this leads to a suppression of binary formation. Hence they conclude that this effect reopens the possibility for dark matter in the form of LIGO-mass PBHs. In ref. [45] the authors have opened the window for PBH with masses in the range \((10^{2}-10^{5})M_{\odot}\) to make a full or significant contribution to cosmological dark matter. They claim that the derivation of the accepted bound that excluded a considerable contribution of PBH in this mass range is based on an oversimplified accretion model. As argued in ref. [46], PBHs can form clusters. Dynamical interactions in PBH clusters offer an additional channel for orbital energy dissipation, thus increasing the merging rate of PBH binaries, and the constraints on the fraction of PBH in dark matter can be weaker than those obtained by assuming a homogeneous PBH space distribution. A recent analysis performed in paper [47] permits one to conclude that a possible clustering of PBH could significantly reduce the efficiency of the merger process and the final rate of gravitational wave bursts in some parameter range. As a result, the fraction of PBH in dark matter could be as large as unity without distorting the LIGO/Virgo observational data.
## XI Observations of black holes

The possibility of black hole existence was ingeniously discovered in 1783 by John Michell, an English country parson, famous for many other discoveries in physics. He noticed that there could be stellar bodies having a second cosmic (escape) velocity larger than the speed of light. Since such objects neither shine nor reflect light, it would be impossible to observe them directly. Michell called such non-emitting stars "dark stars". According to his understanding a single dark star would be invisible, but if a double system of a dark and a usual star is formed, one may identify the dark star by observing the other one rotating around "nothing". This is one of the ways black holes are observed at the present time.

Figure 5: Constraints on the PBH energy density for a monochromatic mass spectrum (see Section X.1 and ref. [43]).

However, not all of that turned out to be true, and some of it is possibly even entirely wrong. BHs evaporate and shine (Hawking radiation), though nobody has seen it yet. The most powerful sources of radiation (quasars) are supermassive black holes: point-like objects radiating as thousands of galaxies through ultrarelativistic particle collisions in the process of matter accretion. Near-solar mass BHs are observed through X-rays from accreting surrounding matter. Black holes may act as gravitational lenses; that is how MACHOs and some other BHs are discovered. Observation of stellar motion around a supposed black hole permits identification of the latter; this is how, e.g., the supermassive black hole in our Galaxy was discovered. All these methods only allow one to determine the mass inside a central volume. According to the theory of General Relativity, a huge mass in a small volume must form a black hole. However, strictly speaking, BH existence is not proven by any of these methods.

The first direct proof of black hole existence was the registration of gravitational waves from a pair of coalescing massive bodies by LIGO/Virgo/Kagra. The data explicitly show that the coalescence is indeed between two black holes, because the best fit to the form of the signal is achieved under the assumption of the Schwarzschild metric, which according to GR describes a non-charged and (almost) non-rotating black hole. The observations permit determination of the masses of the two coalescing BHs, their spins, and the mass of the final black hole.

## XII Gravitational waves from BH binaries

### Are the GW sources primordial BHs?

As argued, e.g., in paper [48], the discovery of gravitational waves (GW) by the LIGO interferometer strongly indicates that the sources of the GWs are primordial black holes. In fact, there is general agreement between several groups of theorists that the gravitational waves discovered by the LIGO/Virgo interferometers originated from PBH binaries. We discuss this issue here following our paper [48]. There are three features indicating that the sources of the GWs should most naturally be primordial black holes:

**1. Origin of heavy BHs (with masses \(\sim 30M_{\odot}\)).** To form such heavy BHs, the progenitors should have \(M>100M_{\odot}\) and a low metal abundance to avoid too much mass loss during the evolution. Such heavy stars might be present in young star-forming galaxies, but they are not observed in the necessary amount. Recently an even more striking problem emerged with the observation of BHs with \(M\sim 100M_{\odot}\): formation of such black holes in the process of stellar collapse was considered to be strictly forbidden. On the other hand, primordial black holes with the masses observed by LIGO may easily be created with sufficient density.

**2.
Formation of BH binaries from the original stellar binaries.** Stellar binaries are formed from common interstellar gas clouds and are quite frequent in galaxies. If a BH is created through stellar collapse, a small non-sphericity would result in a huge velocity of the BH, and the binary would be destroyed. BH formation from Pop III stars and subsequent formation of BH binaries with tens of \(M_{\odot}\) is estimated to be rare. The problem of binary formation is simply solved if the observed sources of GWs are binaries of primordial black holes. They were at rest in the comoving volume and, once inside the cosmological horizon, they were gravitationally attracted and might lose energy due to dynamical friction or interaction with a third body in the early universe. The probability for them to become gravitationally bound is high enough. The conventional astrophysical scenario is not excluded, but it is less natural.

**3. Low spins of the coalescing BHs.** Low values of the BH spins are observed in GW150914 and in almost all (except for three) other events. This strongly constrains astrophysical BH formation from close binary systems. Astrophysical BHs are expected to have considerable angular momentum; nevertheless, the dynamical formation of double massive low-spin BHs in dense stellar clusters is not excluded, though difficult. On the other hand, PBHs practically do not rotate, because vorticity perturbations in the early universe are vanishingly small. Still, individual PBHs forming a binary initially rotating on an elliptic orbit could gain collinear spins of about 0.1-0.3, rising with the PBH masses and eccentricity [49; 50]. This result is in agreement with the GW170729 LIGO event produced by the binary with masses \(50M_{\odot}\) and \(30M_{\odot}\), and with GW190521.

To summarise: each of the mentioned problems might be solved within the conventional frameworks, but it looks much simpler to assume that the LIGO/Virgo/Kagra sources are primordial black holes.

### Chirp mass distribution

It is well known that two rotating gravitationally bound massive bodies emit gravitational waves. In the quasi-stationary inspiral regime, the radius of the orbit and the rotation frequency are approximately constant (more exactly, slowly decreasing), and the GW frequency is twice the rotation frequency. The luminosity of the GW radiation in the inspiral regime is

\[L=\frac{32}{5}\,m_{Pl}^{2}\left(\frac{M_{c}\,\omega_{orb}}{m_{Pl}^{2}}\right)^{10/3}\,, \tag{12.1}\]

where \(M_{1}\), \(M_{2}\) are the masses of the two bodies in the binary system and \(M_{c}\) is the so-called chirp mass,

\[M_{c}=\frac{(M_{1}\,M_{2})^{3/5}}{(M_{1}+M_{2})^{1/5}}\,, \tag{12.2}\]

and

\[\omega_{orb}^{2}=\frac{M_{1}+M_{2}}{m_{Pl}^{2}R^{3}}\,. \tag{12.3}\]

In ref. [51] the available data on the chirp mass distribution of the black holes in the coalescing binaries in the O1-O3 LIGO/Virgo runs are analyzed and compared with theoretical expectations based on the hypothesis that these black holes are primordial with a log-normal mass spectrum. The results are presented in Fig. 6. The inferred best-fit mass spectrum parameters, \(M_{0}=17M_{\odot}\) and \(\gamma=0.9\), fall within the theoretically expected range and show excellent agreement with observations. In contrast, binary black hole formation based on massive binary star evolution requires additional adjustments to reproduce the observed chirp mass distribution, see Fig. 7. Similar values of the parameters are obtained in refs. [52; 53], see also [54].
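For readers who wish to experiment with such estimates, here is a minimal sketch of Eqs. (12.1)-(12.3) rewritten in SI units (the powers of \(m_{Pl}\) translate into the usual factors of \(G\) and \(c\)); the orbital separation used in the printout is an arbitrary illustration value, not a number from the text.

```python
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def chirp_mass(M1, M2):
    # Eq. (12.2); valid in any common mass unit
    return (M1 * M2)**0.6 / (M1 + M2)**0.2

def orbital_frequency(M1, M2, R):
    # Keplerian frequency, Eq. (12.3) in SI form: w**2 = G*(M1+M2)/R**3
    return np.sqrt(G * (M1 + M2) / R**3)

def gw_luminosity(M1, M2, R):
    # Eq. (12.1) in SI units: L = (32/5)*(c**5/G)*(G*Mc*w/c**3)**(10/3)
    Mc, w = chirp_mass(M1, M2), orbital_frequency(M1, M2, R)
    return (32 / 5) * c**5 / G * (G * Mc * w / c**3)**(10 / 3)

# GW170729-like binary quoted in the text: 50 + 30 solar masses
M1, M2 = 50 * M_sun, 30 * M_sun
print("Mc =", chirp_mass(M1, M2) / M_sun, "M_sun")   # ~33.5 M_sun
print("L  =", gw_luminosity(M1, M2, 1.0e6), "W")     # at R = 1000 km (arbitrary)
```

For the \(50M_{\odot}+30M_{\odot}\) pair mentioned above, Eq. (12.2) gives \(M_{c}\approx 33.5M_{\odot}\).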
So we may conclude that PBHs with a log-normal mass spectrum fit the data perfectly, while astrophysical black holes seem to be disfavoured.

A new analysis of the LIGO/Virgo/Kagra data was performed recently in ref. [54]. The authors concluded that the chirp-mass distribution of the LVK GWTC-3 BH+BH binaries, with its two distinct bumps, can be explained by two different populations of BH+BH binaries: 1) the low-mass bump at \(M_{0}\sim 10M_{\odot}\) due to astrophysical BH+BH pairs formed in the local Universe from the evolution of massive binaries; 2) PBH binaries with a log-normal mass spectrum with \(M_{0}\simeq 10M_{\odot}\) and \(\gamma\simeq 10\). The central mass of the PBH distribution is larger than the expected PBH mass at the QCD phase transition (\(\sim 8M_{\odot}\)) but can still be accommodated with the mass of the cosmological horizon, provided that the temperature is \(T_{QCD}\sim 70\) MeV, which is possible for a non-zero chemical potential at the QCD phase transition.

Figure 6: Model distribution \(F_{PBH}(<M)\) with parameters \(M_{0}\approx 17M_{\odot}\) and \(\gamma\sim 1\) for the two best Kolmogorov-Smirnov tests. EDF = empirical distribution function.

Figure 7: Cumulative distributions \(F(<M)\) for several astrophysical models of binary BH coalescences. The observed (blue step-like curve) and model (red solid curve) distribution functions of the chirp masses of coalescing binary BHs from the LVK GWTC-3 catalogue. The model includes almost equal contributions from coalescences of astrophysical binary BHs (green dashed curve) and primordial BHs with the initial log-normal mass spectrum with parameters \(M_{0}=33M_{\odot}\), \(\gamma=10\); with such \(\gamma\), heavier PBHs are practically not created.

## XIII Cosmic antimatter

### Anti-history

The father of antimatter is justly considered to be Paul Dirac. In his Nobel Lecture of December 12, 1933, "Theory of electrons and positrons", dedicated to his prediction of positrons, he foresaw that there could be antistars and possibly antiworlds: "... It is quite possible that... these stars being built up mainly of positrons and negative protons. In fact, there may be half the stars of each kind. The two kinds of stars would both show exactly the same spectra, and there would be no way of distinguishing them by present astronomical methods."

Now we expect that there may be some, presumably small, fraction of antistars in the Galaxy. Since they are immersed in interstellar gas consisting of matter, they can be detected through an excess of gamma radiation with energies of several hundred MeV, originating from annihilation of the interstellar gas on the surface of an antistar. The situation is different if an antistar "lives" in a distant antigalaxy. Still, it is possible in principle to distinguish a star from an antistar through rather subtle effects considered in ref. [55]: the spectra of the emitted radiation are not exactly the same, even if CPT is unbroken; the polarisation of radiation from weak decays could be a good indicator; and, lastly, the types of emitted neutrinos versus antineutrinos from supernovae or antisupernovae differ.

It is in fact surprising that Dirac was not the first person to talk about antimatter. In 1898, 30 years before Dirac and one year after the discovery of the electron (J.J. Thomson, 1897), Arthur Schuster (another British physicist) conjectured that there might be electricity of the opposite sign, **antimatter**, and supposed that there might be entire solar systems made of antimatter, indistinguishable from ours.
Schuster made the fantastically wild guess that matter and antimatter are capable of annihilating and producing vast energy. He believed that they were gravitationally repulsive, having negative mass, so that two such objects in close contact would have vanishing mass!? Quoting his paper [56]: "When the year's work is over and all sense of responsibility has left us, who has not occasionally set his fancy free to dream about the unknown, perhaps the unknowable?'... Astronomy, the oldest and yet most juvenile of the sciences, may still have some surprises in store. May antimatter be commended to its care."

According to the classical scenario of the generation of the cosmological baryon asymmetry, proposed by A.D. Sakharov [57], the baryon excess in the universe is on average homogeneous, having the same sign, determined by the sign of the symmetry breaking between particles and antiparticles. However, there are plenty of mechanisms leading to a space-varying amplitude of C and CP violation, with possible sign changes. If this is the case, the sign of the baryon asymmetry could also vary, leading to the formation of matter and antimatter domains in the universe. Possible models of C and CP violation in cosmology that possess this property are reviewed in [58].

### Matter and antimatter in the Universe

To the best of my knowledge, the first papers on cosmological antimatter were published by F. Stecker in 1971 [59], independently of the papers by Konstantinov et al. [60; 61], published three years earlier, on the search for antimatter in the Galaxy; see subsection XIII.3 below. Further development of the idea of a matter-antimatter domain structure of the universe was presented in ref. [62].

The analysis performed in ref. [63] leads to the conclusion that a matter-antimatter symmetric universe, or one close to that, is excluded by the observed cosmic diffuse gamma-ray background and the distortion of the cosmic microwave background. However, there still remains some, considerably restricted, room for a fraction of cosmological antimatter. It is argued by G. Steigman in ref. [64] that the nearest anti-galaxy should be outside our galaxy cluster and thus could not be closer than \(\sim 10\) Mpc. In a subsequent paper by the same author [65] it is argued that the fraction of antimatter in the Bullet Cluster should be below \(3\times 10^{-6}\). A summary of the situation as of 2002 was presented in two keynote lectures at the 14th Rencontres de Blois on Matter-Antimatter Asymmetry [66; 67].

### Antimatter in Milky Way

The search for antimatter in the Milky Way was initiated by Konstantinov and coworkers in 1968 [60; 61]. This activity was strongly criticised by Ya.B. Zeldovich, despite the very friendly relations between the two: according to the canonical faith, no antimatter may exist in our Galaxy, which explains Zeldovich's negative attitude toward Konstantinov's activity. Until recently there was no reason to suspect that any noticeable amount of antimatter might be in the Galaxy, and the predictions of refs. [9; 10] were not taken seriously. Now there are a lot of data indicating that the Milky Way contains a significant amount of antimatter of different kinds: positrons, antinuclei, and possibly antistars. The observations do not violate the existing bounds on galactic antimatter. According to the predictions of papers [9; 10], antimatter objects could reside not only in the Galaxy but in its halo as well.
According to ref. [68], analysis of the intensity of gamma rays created by Bondi accretion of interstellar gas onto the surface of an antistar allows one to put a limit on the relative density of antistars in the Solar neighbourhood: \(N_{\bar{*}}/N_{*}<4\cdot 10^{-5}\) within 150 pc from the Sun. The bounds on galactic antimatter are analysed in refs. [69; 70; 71]. The limits on the density of galactic antistars are rather loose, because the annihilation proceeds only on the surface of antistars, i.e., on objects with a short mean free path for protons, so the extra luminosity created by matter-antimatter annihilation is relatively low.

**Anti-evidence: cosmic positrons.** The existence of a rich population of positrons in the Galaxy was noticed long ago through observations of the 511 keV gamma-ray line (see [72; 73; 74] and references therein) with the flux

\[\Phi_{511\;{\rm keV}}=(1.07\pm 0.03)\cdot 10^{-3}\;{\rm photons\;cm^{-2}\,s^{-1}}. \tag{13.1}\]

The width of the line is about 3 keV. The emission mostly comes from the Galactic bulge and, at a much lower level, from the disk. This unambiguously indicates frequent annihilation of nonrelativistic \(e^{+}e^{-}\) pairs in the Galactic bulge with the rate [72]

\[\dot{N}_{ee}^{\rm bulge}\sim 10^{43}\;{\rm s^{-1}}. \tag{13.2}\]

Note that one of the brightest X-ray sources in the region around the Galactic Center got the name Great Annihilator [75]. Possibly it is a microquasar, first detected in soft X-rays by the Einstein Observatory [76] and later detected in hard X-rays by the space observatory "Granat" [77].

There is no commonly accepted point of view on the origin of the cosmic positrons. The conventional hypothesis that positrons are created in the strong magnetic fields of pulsars is at odds with the AMS data [78]. However, this conclusion is questioned in ref. [79], where it is shown that these features could be consistently explained by a nearby source which was active \(\sim 2\) Myr ago and injected \((1-2)\times 10^{50}\) erg in cosmic rays. A competing option is that positrons are created by the Schwinger process at the horizon of small black holes with masses \(\gtrsim 10^{20}\) g. This mechanism was suggested in ref. [80] and discussed in more detail in ref. [81]. One more possibility, closer to the spirit of this talk, is that positrons are primordial, produced in the early universe in relatively small antimatter domains [9; 10]. The possible observations of an unexpectedly high flux of antinuclei [85; 86] and of antistars in the Galaxy [87] strongly support this hypothesis; in particular, in ref. [88] it is advocated that antihelium cosmic rays are created by antistars.

**Anti-evidence: cosmic antinuclei.** In 2018 AMS-02 announced the possible observation of six \(\overline{He}^{3}\) and two \(\overline{He}^{4}\) [83; 84]. The data accumulated by 2022 contain some more events: 7 \(\overline{D}\) (at energies \(\lesssim 15\) GeV) and 9 \(\overline{He}^{4}\) (at \(E\sim 50\) GeV). Roughly speaking, these numbers correspond to \(\overline{He}/He\sim 10^{-9}\). This is much larger than the expected number of \(\overline{He}^{4}\) created in cosmic ray collisions. It is possible that the total flux of anti-helium is even much higher, because low energy \(\overline{He}\) may escape registration in AMS. The probability of the secondary creation of different antinuclei was estimated in ref. [89].
According to this work, anti-deuterium could be most efficiently produced in the collisions \(\bar{p}\,p\) or \(\bar{p}\,He\), which can create a flux \(\sim 10^{-7}\,{\rm m^{-2}\,s^{-1}\,sr^{-1}\,(GeV/nucleon)^{-1}}\), i.e., 5 orders of magnitude below the observed flux of antiprotons. Antihelium could be created in similar reactions, and the fluxes of \(\overline{He}^{3}\) and \(\overline{He}^{4}\) created in cosmic rays would respectively be 4 and 8 orders of magnitude smaller than the flux of the secondarily created anti-D.

According to the works [9; 10], antinuclei should be primordial, i.e., created in the very early universe during big bang nucleosynthesis (BBN) inside antimatter bubbles with high baryon density. However, standard anti-BBN surely does not help, since normally BBN gives 75% hydrogen, 25% helium-4, and a minor fraction of deuterium, at the level of a few times \(10^{-5}\) -- in huge contrast to the observed ratio of anti-deuterium to anti-helium, which is of order unity. The same problem exists for the ratio of \(\overline{He}^{3}\) to \(\overline{He}^{4}\), which is also of order unity instead of the standard \(\sim 3\times 10^{-5}\). If we assume that in the model of [9; 10] the abundances of anti-D and anti-He are determined by normal BBN with a large baryon-to-photon ratio \(\beta\sim 1\), the problem would be even more pronounced, because the amounts of deuterium and helium-3 would be negligibly small, much less than \(10^{-5}\). On the other hand, in our scenario the formation of primordial elements takes place inside non-expanding compact stellar-like objects with practically fixed temperature. If the temperature is sufficiently high, this version of BBN may stop with almost equal abundances of D and He, as one can see by looking at the abundances of light elements as a function of temperature. If so, antistars may have equal amounts of \(\overline{D}\) and \(\overline{He}\).

**Anti-evidence: antistars in the Galaxy.** A striking announcement of the possible discovery of antistars in the Galaxy was made in ref. [90]. A catalog of 14 antistar candidates was identified, not associated with any objects belonging to established gamma-ray source classes and with spectra compatible with baryon-antibaryon annihilation. The results are illustrated in Fig. 9.

In ref. [82] a supplementary method of antistar identification was proposed. In astrophysically plausible cases of the interaction of neutral atmospheres or winds from antistars with ionised interstellar gas, the hadronic annihilation will be preceded by the formation of excited \(p\bar{p}\) and \(He\bar{p}\) atoms. These atoms rapidly cascade down to low levels prior to annihilation, giving rise to a series of narrow lines which can be associated with the hadronic annihilation gamma-ray emission. The most significant are the L (3p-2p) 1.73 keV line (yield more than 90%) from \(p\bar{p}\) atoms, and the M (4-3) 4.86 keV (yield \(\sim\) 60%) and L (3-2) 11.13 keV (yield about 25%) lines from \(He^{4}\bar{p}\) atoms. These lines can be probed in dedicated observations by the forthcoming sensitive X-ray spectroscopic missions XRISM and Athena and in wide-field X-ray surveys like the SRG/eROSITA all-sky survey.

The connection between antistars and the observed fluxes of antinuclei was studied in the recent paper [88]. A minor population of antistars in galaxies has been predicted by some non-standard models of baryogenesis and nucleosynthesis in the early Universe, and their presence is not yet excluded by the currently available observations.
The detection of an unusually high abundance of antinuclei in cosmic rays can probe the baryogenesis scenarios of the early Universe. It is shown that the flux of antihelium cosmic rays reported by the AMS-02 experiment can be explained by Galactic anti-nova outbursts, thermonuclear anti-SN Ia explosions, a collection of flaring antistars, or an extragalactic source, with abundances not violating the existing gamma-ray and microlensing constraints on the antistar population.

Figure 9: Positions and energy flux in the 100 MeV - 100 GeV range of antistar candidates selected in 4FGL-DR2, in Galactic coordinates. The background image shows the Fermi 5-year all-sky photon counts above 1 GeV.

## XIV Mechanism of creation of PBH and antimatter

The mechanism of PBH and galactic antimatter creation [9; 10] is essentially based on the scenario of supersymmetry (SUSY) motivated baryogenesis proposed by Affleck and Dine (AD) [91]. SUSY generically predicts the existence of scalars, \(\chi\), with non-zero baryonic number, \(B\neq 0\). Another prediction of high energy SUSY models is the existence of flat directions in the \(\chi\)-potential, either quartic (self-interaction):

\[U_{\lambda}(\chi)=\lambda|\chi|^{4}\left(1-\cos 4\theta\right) \tag{14.1}\]

or quadratic, i.e., the mass term, \(U_{m}=m^{2}\chi^{2}+{m^{*}}^{2}\chi^{*}{}^{2}\):

\[U_{m}(\chi)=m^{2}|\chi|^{2}[1-\cos(2\theta+2\alpha)]\,, \tag{14.2}\]

where \(\chi=|\chi|\exp(i\theta)\) and \(m=|m|e^{i\alpha}\). If \(\alpha\neq 0\), C and CP are broken. The potential energy does not rise along these flat directions. At the inflationary epoch the average value of \(\chi^{2}\) rises linearly with time [92; 93; 94], see also [95]. In other words, \(\chi\) bosons may condense along the flat directions of the quartic potential, when and if their mass is smaller than the inflationary Hubble parameter. In GUT SUSY the baryonic number is naturally non-conserved, because of the generic non-invariance of \(U(\chi)\) w.r.t. the phase rotation \(\chi\rightarrow\chi\exp(i\theta)\).

After inflation \(\chi\) was far away from the origin, due to the risen quantum fluctuations, and, when inflation ended, it started to evolve down to the equilibrium point, \(\chi=0\), according to an equation of motion that formally coincides with the equation of motion of a point-like particle in Newtonian mechanics:

\[\ddot{\chi}+3H\dot{\chi}+U^{\prime}(\chi)=0. \tag{14.3}\]

The baryonic number of \(\chi\),

\[B_{\chi}=\dot{\theta}|\chi|^{2}, \tag{14.4}\]

is analogous to the mechanical angular momentum in the complex plane \([Re\,\chi,Im\,\chi]\). After \(\chi\) decays, the accumulated baryonic number of \(\chi\) is transferred into the baryonic number of quarks in a B-conserving process. AD baryogenesis could lead to a baryon asymmetry of order unity, much larger than the observed asymmetry \(\sim 10^{-9}\).

If \(m\neq 0\) and the flat directions of the quadratic and quartic valleys are different, the angular momentum, B, would be generated by the "rotation" induced by the motion of \(\chi\) from the quartic flat direction to the quadratic one. In other words, the field \(\chi\) would acquire a non-zero, generically very large, baryonic number. If the CP-odd phase \(\alpha\) is non-vanishing, both baryonic and antibaryonic domains might be formed, with possible dominance of either of them. Matter and antimatter objects may then coexist, while the global baryon number \(B\) may remain non-zero.
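A toy numerical integration of the equation of motion (14.3) illustrates how a non-zero baryonic charge (14.4) is generated. The sketch below assumes, purely for illustration, the quadratic potential (14.2) with the gradient taken as \(\partial U/\partial\chi^{*}=|m|^{2}(\chi-e^{-2i\alpha}\chi^{*})\), a matter-dominated Hubble rate \(H=2/(3t)\), and arbitrary units and initial data; none of these choices come from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, alpha = 1.0, 0.3   # |m| and the CP-odd phase of Eq. (14.2), arbitrary units

def rhs(t, y):
    # y = [Re chi, Im chi, Re dchi/dt, Im dchi/dt]
    chi, dchi = y[0] + 1j * y[1], y[2] + 1j * y[3]
    H = 2.0 / (3.0 * t)                       # matter-dominated expansion
    # Eq. (14.3) with U' = |m|^2 (chi - exp(-2i*alpha) * conj(chi))
    ddchi = -3 * H * dchi - m**2 * (chi - np.exp(-2j * alpha) * np.conj(chi))
    return [dchi.real, dchi.imag, ddchi.real, ddchi.imag]

# chi starts far from the origin (large post-inflationary amplitude), at rest;
# the nonzero phase alpha then torques it, generating "angular momentum" B.
sol = solve_ivp(rhs, (1.0, 100.0), [10.0, 0.0, 0.0, 0.0],
                rtol=1e-9, atol=1e-12)
chi = sol.y[0] + 1j * sol.y[1]
dchi = sol.y[2] + 1j * sol.y[3]
B = np.imag(np.conj(chi) * dchi)              # baryonic charge, Eq. (14.4)
print("B at the final time:", B[-1])          # nonzero for alpha != 0
```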
An essential development proposed in works [9; 10] was the introduction of a new interaction between the Affleck-Dine field and the inflaton \(\Phi\) (the first term in the equation below):

\[U=g|\chi|^{2}(\Phi-\Phi_{1})^{2}+\lambda|\chi|^{4}\,\ln\left(\frac{|\chi|^{2}}{\sigma^{2}}\right)+\lambda_{1}(\chi^{4}+h.c.)+(m^{2}\chi^{2}+h.c.). \tag{14.5}\]

This coupling between \(\chi\) and the inflaton is the general renormalizable interaction of two scalar fields. The only tuning is the assumption that \(\Phi\) reaches the value \(\Phi_{1}\) during inflation significantly before it ends, with the remaining number of e-foldings about 30-40. The window to the flat directions is open only near \(\Phi=\Phi_{1}\). During that period the field \(\chi\) could rise to large values, according to the quantum diffusion equation derived by Starobinsky, generalised to a complex field \(\chi\).

If the window to the flat direction, when \(\Phi\approx\Phi_{1}\), is open only during a short period, cosmologically small but possibly astronomically large bubbles with a high baryon-to-photon ratio \(\beta\) could be created, occupying a tiny fraction of the total universe volume, while the rest of the universe has the observed \(\beta\approx 6\cdot 10^{-10}\), created by the normal small \(\chi\).

The foundation for PBH creation was thus laid at inflation, in the form of large isocurvature fluctuations at relatively small scales, with practically vanishing density perturbations. The initial isocurvature perturbations are contained in the large baryonic number of massless quarks in rather small bubbles; we call them high baryonic bubbles, HBBs. Density perturbations were generated rather late, after the QCD phase transition, at temperatures around 100 MeV, when massless quarks turned into massive baryons. The resulting high density contrast could lead to the creation of PBHs. The mechanism is very different from any other model of PBH formation described in the literature. The emerging universe looks like a piece of Swiss cheese, where the holes are high-baryonic-density objects occupying a minor fraction of the universe volume.

Outcome of the DS/DKK mechanism -- confirmed by the observations!

* Compact stellar-like objects, similar to the cores of red giants.
* Disperse hydrogen and helium clouds with (much) higher than average \(n_{B}\) density.
* Strange stars with unusual chemistry and velocity.
* \(\beta\) may be negative, leading to the creation of (compact?) antistars which could survive annihilation despite being submerged in the homogeneous baryonic background.
* Extremely old stars could exist, and indeed they are observed; even an "older than the universe" star has been found -- its prehistoric age is mimicked by its unusual initial chemistry.

The mechanism of PBH creation is pretty well verified by the data on the BH mass spectrum and by the existence of antimatter, especially antistars, in the Galaxy. So we may expect that it indeed solves the problems created by the HST and JWST observations. Thus we may conclude that canonical \(\Lambda\)CDM cosmology is saved by PBHs. Antimatter in our backyard is predicted and found.
# Which differential equations correspond to the Lindblad equation?

###### Abstract

The Lindblad master equation can always be transformed into a first-order linear ordinary differential equation (1ODE) for the coherence vector. We pose the inverse problem: given a finite-dimensional, non-homogeneous 1ODE, does a corresponding Lindblad equation exist? If so, what are the corresponding Hamiltonian and Lindblad operators? We provide a general solution to this problem, including a complete positivity test in terms of the parameters of the 1ODE. We also derive a host of properties relating the two representations (master equation and 1ODE), which are of independent interest.

## I Introduction

The Gorini-Kossakowski-Lindblad-Sudarshan (GKLS) master equation [1; 2] is widely used in modeling the evolution of open quantum systems subject to Markovian dynamics [3; 4; 5; 6; 7]. It can be written in the following form:

\[\dot{\rho}=\mathcal{L}\rho\;,\quad\mathcal{L}=\mathcal{L}_{H}+\mathcal{L}_{a} \tag{1a}\]
\[\mathcal{L}_{H}[\cdot]=-i[H,\cdot] \tag{1b}\]
\[\mathcal{L}_{a}[\cdot]=\sum_{i,j=1}^{d^{2}-1}a_{ij}\big{(}F_{i}\cdot F_{j}-\frac{1}{2}\{F_{j}F_{i},\cdot\}\big{)}\;, \tag{1c}\]

commonly known simply as the Lindblad equation.1 Here, the dot denotes a time-derivative, \(\rho\) is the state (density matrix) of the open system whose Hilbert space is \(d\)-dimensional, \(\mathcal{L}\) is the Lindbladian, \(H=H^{\dagger}\) is the system Hamiltonian, \(\{F_{i}\}\) is a Hermitian operator basis, and the constants \(a_{ij}\in\mathbb{C}\) are the elements of a positive semidefinite matrix of rates \(a\) (we provide complete definitions below). The positivity property of \(a\) is crucial, since it is necessary and sufficient for the superoperator \(\mathcal{L}\) to generate a completely positive (CP) map. This is known as the GKLS theorem.2

Footnote 1: Sadly, Prof. Lindblad passed away while this work was in preparation. We dedicate this work to his memory.

Footnote 2: This is actually a pair of theorems that were discovered independently and nearly simultaneously for the finite-dimensional case by Gorini, Kossakowski, and Sudarshan [1], and the infinite-dimensional case by Lindblad [2]; for a detailed account of this history see [8].

When \(a\ngeq 0\), Eq. (1) still describes a time-independent Markovian quantum master equation but no longer generates a CP map. This situation arises, e.g., when the initial system-bath state is sufficiently correlated [9; 10; 11; 12; 13; 14].

The Lindblad equation is a first-order linear differential equation, and it can easily be transformed to make this explicit. By expanding the density matrix \(\rho\) in the operator basis \(\{F_{i}\}\) one straightforwardly derives the equivalent nonhomogeneous form

\[\dot{\tilde{v}}=(Q+R)\tilde{v}+\tilde{c}\;, \tag{2}\]

where the "coherence vector" \(\tilde{v}\) collects the real-valued expansion coefficients of \(\rho\), \(Q=-Q^{T}\) is an antisymmetric matrix determined by \(H\), while \(R\) and \(\tilde{c}\) are determined by \(a\) [3].

The problem we address in this work is the inverse question: _When does a general, nonhomogeneous linear first-order differential equation_

\[\dot{\tilde{v}}=G\tilde{v}+\tilde{c} \tag{3}\]

_describe a Markovian quantum master equation, and in particular a Lindblad equation?_ An elementary necessary condition is that \(G\) must have eigenvalues whose real parts are non-positive (otherwise \(\lim_{t\to\infty}\left|\tilde{v}(t)\right|\) is unbounded).
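For a quick numerical version of this necessary condition one can simply inspect the spectrum of \(G\). The helper below is a sketch of that check (the function name and tolerance are arbitrary choices of ours, not part of the formal development).

```python
import numpy as np

def may_be_lindblad(G, tol=1e-12):
    """Necessary (but not sufficient) test: every eigenvalue of G must have
    a non-positive real part, otherwise |v(t)| grows without bound and no
    Lindblad equation can correspond to G."""
    return bool(np.all(np.linalg.eigvals(G).real <= tol))

# Toy 3x3 examples (J = 3, i.e., a single qubit):
print(may_be_lindblad(np.diag([-1.0, -1.0, -2.0])))   # True: pure damping
print(may_be_lindblad(np.array([[0.2, 0.0, 0.0],
                                [0.0, -1.0, 0.0],
                                [0.0, 0.0, -1.0]])))  # False: a growing mode
```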
Here we go well beyond this observation and give a complete solution to the problem of inverting both \(H\) and \(a\) from \(G\) and \(\tilde{c}\). We also reformulate the condition for complete positivity in terms of \(G\) and \(\tilde{c}\). Our results apply in the finite-dimensional setting, and we leave the inverse problem for the infinite-dimensional or unbounded settings open.

We start by stating our main result - the solution of the inverse problem along with the complete positivity condition - in Section II. We devote the rest of this work to providing a general background, a derivation and proof of the solution, and examples. In more detail, the background is given in Section III, where we introduce the representation of general superoperators over a "nice" operator basis such as the Gell-Mann matrices, use it to express the Markovian quantum master equation over real vector spaces, and derive a variety of general properties of \(Q\), \(R\), and \(\tilde{c}\). In Section IV, we derive the solution of the inverse problem and provide its proof. In Section V, we illustrate a few aspects of the general theory in terms of examples. Given the question we pose in the title of this work, it is natural to ask about the probability that a randomly selected pair \((G,\tilde{c})\) will give rise to a valid Lindblad equation. We provide a partial answer in Section VI, where we point out that Lindbladians are extremely rare in the space of random matrices. In Section VII we discuss the relationship between our work and previous results. We conclude in Section VIII and provide additional supporting material in the appendices, including the analytical solution of Eq. (3) in Appendix A.

## II Main result

Consider a finite-dimensional Hilbert space \(\mathcal{H}\), with \(d=\dim(\mathcal{H})<\infty\), and the Banach space (a complete, normed vector space) \(\mathcal{B}(\mathcal{H})\) of operators acting on \(\mathcal{H}\), equipped with the Hilbert-Schmidt inner product \(\langle A,B\rangle\equiv\operatorname{Tr}(A^{\dagger}B)\). Throughout this work, we use the following definitions.

**Definition 1**.: _A "nice operator basis" for \(\mathcal{H}\) is a set \(\{F_{j}\}_{j=0}^{J}\subset\mathcal{B}(\mathcal{H})\), \(J\equiv d^{2}-1\), such that \(F_{0}=\frac{1}{\sqrt{d}}I\) (\(I\) denotes the identity operator in \(\mathcal{B}(\mathcal{H})\)), \(F_{j}=F_{j}^{\dagger}\) and \(\operatorname{Tr}F_{j}=0\) for \(1\leq j\leq J\), and [15]:_

\[\langle F_{j},F_{k}\rangle=\operatorname{Tr}\bigl{(}F_{j}F_{k}\bigr{)}=\delta_{jk}\;,\quad 0\leq j,k\leq J\;. \tag{4}\]

**Definition 2**.: _Any equation of the form Eq. (1), where \(H=H^{\dagger}\) is a Hermitian operator in \(\mathcal{B}(\mathcal{H})\) and \(a=a^{\dagger}\) is a Hermitian \(J\times J\) matrix, is a "Markovian quantum master equation". In this case, the superoperator \(\mathcal{L}\) in Eq. (1) is called a "Liouvillian"._

_When, in addition, \(a\geq 0\), Eq. (1) is a "Lindblad master equation", "Lindblad equation", or "completely positive Markovian quantum master equation". In this case, the superoperator \(\mathcal{L}\) is called a "Lindbladian" or "completely positive Liouvillian"._

_The superoperator \(\mathcal{L}_{a}\) is called the dissipator._

Whenever we refer to Eq. (1) or the superoperator \(\mathcal{L}\) without explicitly specifying complete positivity, we assume the former case, i.e., we only assume the conditions \(H=H^{\dagger}\), \(a=a^{\dagger}\).
Our main result is the following:

**Theorem 1**.: _For any \(J\times J\) matrix \(G\) and any vector \(\bar{c}\) of length \(J\) with real coefficients there is a pair \((H,a)\) describing a Markovian quantum master equation Eq. (1) which transforms to Eq. (3) with these \(G\) and \(\bar{c}\). If, in addition to the above, we require \(H\) to be traceless, then such a pair \((H,a)\) is unique and can be computed as follows:_

\[H=\frac{1}{2id}\sum_{m,n=1}^{J}G_{nm}\left[F_{m},F_{n}\right], \tag{5a}\]
\[a_{mn}=\sum_{i=1}^{J}\operatorname{Tr}\left[\left(\sum_{j=1}^{J}G_{ij}F_{j}+c_{i}I\right)F_{m}F_{i}F_{n}\right]. \tag{5b}\]

_Moreover, the matrix \(a=\{a_{mn}\}\) is positive semidefinite [i.e., Eq. (1) is a Lindblad master equation] if and only if (iff) for all traceless \(B\in\mathcal{B}(\mathcal{H})\):_

\[\sum_{i=1}^{J}\operatorname{Tr}\left[\left(\sum_{j=1}^{J}G_{ij}F_{j}+c_{i}I\right)B^{\dagger}F_{i}B\right]\geq 0\;. \tag{6}\]

Condition (6) is equivalent to the positive semidefiniteness of \(a\), which is simpler to check. Thus, in practice, it is preferable to first compute \(a\) using Eq. (5b) and check the sign of its smallest eigenvalue, rather than to work directly with Eq. (6). Note that adding a constant, i.e., a term of the form \(cI\) with \(c\in\mathbb{R}\), to \(H\) does not change the Liouvillian \(\mathcal{L}\). Therefore, if the requirement that \(H\) is traceless is not imposed, \(H\) can only be recovered from \(G\) up to an additive constant.

## III Background

Let \(M(d,F)\) denote the vector space of \(d\times d\) matrices with coefficients in \(F\), where \(F\in\{\mathbb{R},\mathbb{C}\}\). For our purposes it suffices to identify \(\mathcal{B}(\mathcal{H})\) with \(M(d,\mathbb{C})\). Quantum states are represented by density operators \(\rho\in\mathcal{B}_{+}(\mathcal{H})\) (positive trace-class operators acting on \(\mathcal{H}\)) with unit trace: \(\operatorname{Tr}\rho=1\). Elements of \(\mathcal{B}[\mathcal{B}(\mathcal{H})]\), i.e., linear transformations \(\mathcal{E}:\mathcal{B}(\mathcal{H})\to\mathcal{B}(\mathcal{H})\), are called superoperators, or maps. Complete positivity of a superoperator \(\mathcal{E}\) is equivalent to the statement that \(\mathcal{E}\) has a Kraus representation [16]: \(\forall X\in\mathcal{B}(\mathcal{H})\),

\[\mathcal{E}(X)=\sum_{i}K_{i}XK_{i}^{\dagger}\;, \tag{7}\]

where the \(\{K_{i}\}\) are called Kraus operators. When they satisfy \(\sum_{i}K_{i}^{\dagger}K_{i}=I\), the map \(\mathcal{E}\) is trace preserving. A quantum dynamical semigroup is defined as a one-parameter family of strongly continuous CP maps, satisfying \(\Lambda_{0}=I\) (the identity operator), and the Markov property \(\Lambda_{s}\Lambda_{t}=\Lambda_{s+t}\) for all \(s,t\geq 0\).3

Footnote 3: Strong continuity means that \(\forall X\in\mathcal{B}(\mathcal{H})\), \(\|\Lambda_{t}X-X\|\to 0\) as \(t\searrow 0\), i.e., the map \(\Lambda_{t}\) is continuous in the strong operator topology.

We can more formally state the GKLS theorem(s) [1; 2] as follows: a superoperator \(\mathcal{L}:\mathcal{B}(\mathcal{H})\to\mathcal{B}(\mathcal{H})\) is the generator of a quantum dynamical semigroup iff it is of the form given in Eq. (1), with \(a\geq 0\). I.e., when \(\mathcal{L}\) is in the form of Eq. (1), the solution of \(\dot{\rho}=\mathcal{L}\rho\) is \(\rho(t)=\Lambda_{t}\rho(0)\), where \(\Lambda_{t}=e^{\mathcal{L}t}\) is an element of a quantum dynamical semigroup.
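As a concrete illustration of Theorem 1, the following sketch implements Eqs. (5a)-(5b) numerically for a single qubit and then tests complete positivity by checking \(a\geq 0\), as suggested above. The basis is the normalized Pauli set (the \(d=2\) instance of the normalized Gell-Mann matrices used throughout); the dephasing-plus-precession pair \((G,\tilde{c})\) used for the check, and the expected form of \(G\), are our own textbook-style example rather than one taken from this paper.

```python
import numpy as np

d = 2
J = d**2 - 1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [s / np.sqrt(2) for s in (sx, sy, sz)]   # nice basis: Tr(Fj Fk) = delta_jk

def invert_lindblad(G, c):
    """Recover (H, a) from (G, c) via Eqs. (5a)-(5b) of Theorem 1."""
    H = sum(G[n, m] * (F[m] @ F[n] - F[n] @ F[m])
            for m in range(J) for n in range(J)) / (2j * d)
    a = np.empty((J, J), dtype=complex)
    for m in range(J):
        for n in range(J):
            a[m, n] = sum(np.trace(
                (sum(G[i, j] * F[j] for j in range(J)) + c[i] * np.eye(d))
                @ F[m] @ F[i] @ F[n]) for i in range(J))
    return H, a

# Assumed example: dephasing along z at rate g plus precession at frequency w,
# for which the coherence-vector generator is G = [[-g,-w,0],[w,-g,0],[0,0,0]],
# c = 0 (our own check, not from the paper).
g, w = 0.5, 1.0
G = np.array([[-g, -w, 0.0], [w, -g, 0.0], [0.0, 0.0, 0.0]])
H, a = invert_lindblad(G, np.zeros(J))
print(np.round(H, 6))                        # ~ (w/2) * sigma_z
print(np.linalg.eigvalsh(a) >= -1e-12)       # all True: a >= 0, i.e., CP
```

For this example the recovered Hamiltonian is \(\approx(\omega/2)\sigma^{z}\) and the recovered \(a\) is positive semidefinite, confirming that the chosen \((G,\tilde{c})\) corresponds to a completely positive Liouvillian.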
### General superoperators over a nice operator basis

Given a nice operator basis, let us "coordinatize" the operator \(X\in\mathcal{B}(\mathcal{H})\) in the nice operator basis as

\[X=\sum_{j=0}^{J}\mathbf{X}_{j}F_{j}, \tag{8}\]

where we use the notation \(\mathbf{X}=\{\mathbf{X}_{j}\}\) for the vector of coordinates of the operator \(X\) (we interchangeably use the \(\tilde{X}\) and \(\mathbf{X}\) notations to denote a vector). I.e., for any \(X\in\mathcal{B}(\mathcal{H})\):

\[\mathbf{X}_{i}=\langle F_{i},X\rangle=\operatorname{Tr}(F_{i}X)\;. \tag{9}\]

In particular, applying Eqs. (8) and (9) to \(|k\rangle\!\langle l|\) we obtain

\[|k\rangle\!\langle l|=\sum_{i=0}^{J}(F_{i})_{lk}F_{i}, \tag{10}\]

or

\[\sum_{i=0}^{J}(F_{i})_{lk}(F_{i})_{k^{\prime}l^{\prime}}=\delta_{ll^{\prime}}\delta_{kk^{\prime}}. \tag{11}\]

**Proposition 1**.: _The operator \(X\in\mathcal{B}(\mathcal{H})\) is Hermitian iff \(\mathbf{X}\in\mathbb{R}^{d^{2}}\) in a nice operator basis._

Proof.: If \(X\) is Hermitian, then \(\mathbf{X}_{i}^{*}=\langle F_{i},X\rangle^{*}=\langle X,F_{i}\rangle=\mathrm{Tr}(XF_{i})=\mathrm{Tr}(F_{i}X)=\mathbf{X}_{i}\). Vice versa, if all the coefficients \(\mathbf{X}_{i}\) are real, then the operator \(X=\sum_{j=0}^{J}\mathbf{X}_{j}F_{j}\) is Hermitian because it is a sum of Hermitian operators \(F_{j}\) with real coefficients. 

More generally, for Hermitian matrices the inner product becomes \(\langle A,B\rangle=\mathrm{Tr}(AB)\). Such matrices have \(d^{2}=J+1\) real parameters [\(d\) real diagonal elements plus \((d^{2}-d)/2\) independent complex off-diagonal elements], so they can be represented in terms of vectors in \(\mathbb{R}^{d^{2}}\). Moreover, if \(A\) and \(B\) are both Hermitian operators then

\[\langle A,B\rangle=\sum_{j,k=0}^{J}\mathbf{A}_{j}\mathbf{B}_{k}\,\mathrm{Tr}(F_{j}F_{k})=\sum_{j,k=0}^{J}\mathbf{A}_{j}\mathbf{B}_{k}\delta_{jk}=\mathbf{A}^{t}\mathbf{B}\,, \tag{12}\]

so that in these coordinates the inner product \(\langle A,B\rangle\) corresponds to the standard inner product in \(\mathbb{R}^{d^{2}}\).

The matrix elements of a superoperator \(\mathcal{E}:\mathcal{B}(\mathcal{H})\rightarrow\mathcal{B}(\mathcal{H})\) are given by

\[\mathbf{\mathcal{E}}_{ij}=\langle F_{i},\mathcal{E}(F_{j})\rangle=\mathrm{Tr}[F_{i}\mathcal{E}(F_{j})]. \tag{13}\]

The action of a general (not necessarily CP) superoperator \(\mathcal{E}\in\mathcal{B}[\mathcal{B}(\mathcal{H})]\) can always be represented for any \(A\in\mathcal{B}(\mathcal{H})\) as

\[\mathcal{E}(A)=\sum_{i,j=0}^{J}c_{ij}F_{i}AF_{j}, \tag{14}\]

where \(c_{ij}\in\mathbb{C}\) (see Appendix D). Thus the matrix \(c=\{c_{ij}\}\) specifies \(\mathcal{E}\) in the given basis \(\{F_{i}\}\). CP maps are a special case of Eq. (14), where \(c\) is positive semidefinite. For superoperators we denote the Hilbert-Schmidt adjoint of \(\mathcal{E}\) by \(\mathcal{E}^{\dagger}\), which is defined via

\[\langle\mathcal{E}^{\dagger}(A),B\rangle\equiv\langle A,\mathcal{E}(B)\rangle\quad\forall A,B\in\mathcal{B}(\mathcal{H}). \tag{15}\]

**Definition 3**.: \(\mathcal{E}\) _is Hermiticity-preserving iff:_

\[[\mathcal{E}(A^{\dagger})]^{\dagger}=\mathcal{E}(A)\quad\forall A\in\mathcal{B}(\mathcal{H}). \tag{16}\]

\(\mathcal{E}\) _is_ Hermitian _iff_

\[\mathcal{E}^{\dagger}(A)=\mathcal{E}(A)\quad\forall A\in\mathcal{B}(\mathcal{H}). \tag{17}\]

Let us find explicit formulas for \([\mathcal{E}(A^{\dagger})]^{\dagger}\) and \(\mathcal{E}^{\dagger}(A)\) in terms of the general representation of a superoperator given by Eq. (14). First, from Eq.
(14) we have

\[[\mathcal{E}(A^{\dagger})]^{\dagger}=\left(\sum_{ij}c_{ij}F_{i}A^{\dagger}F_{j}\right)^{\dagger}=\sum_{ij}c_{ij}^{*}F_{j}AF_{i}. \tag{18}\]

Second, substituting Eq. (14) into the complex conjugate of Eq. (15) yields:

\[\left\langle B,\mathcal{E}^{\dagger}(A)\right\rangle=\langle\mathcal{E}(B),A\rangle=\mathrm{Tr}\left(\sum_{ij}(c_{ij}F_{i}BF_{j})^{\dagger}A\right) \tag{19a}\]
\[=\mathrm{Tr}\left(\sum_{ij}c_{ij}^{*}F_{j}B^{\dagger}F_{i}A\right)=\mathrm{Tr}\left(B^{\dagger}\sum_{ij}c_{ij}^{*}F_{i}AF_{j}\right) \tag{19b}\]
\[=\langle B,\sum_{ij}c_{ij}^{*}F_{i}AF_{j}\rangle. \tag{19c}\]

Since this equality holds for every \(B\), we have:

\[\mathcal{E}^{\dagger}(A)=\sum_{ij}c_{ij}^{*}F_{i}AF_{j}. \tag{20}\]

It will turn out to be useful to have another representation of \(\mathcal{E}^{\dagger}(A)\). Consider two sets of arbitrary operators \(\{L_{p}\}\) and \(\{M_{q}\}\), which we can expand in the nice operator basis as \(L_{p}=\sum_{i=0}^{J}l_{ip}F_{i}\) and \(M_{q}=\sum_{i=0}^{J}m_{iq}F_{i}\). In addition, let \(b\) be some arbitrary matrix in \(M(d^{2},\mathbb{C})\) and let \(c=lbm^{\dagger}\). Then, using Eq. (14):

\[\mathcal{E}(A)=\sum_{i,j=0}^{J}\Big{[}\sum_{p,q}l_{ip}b_{pq}m_{jq}^{*}\Big{]}F_{i}AF_{j} \tag{21a}\]
\[=\sum_{p,q}b_{pq}\left(\sum_{i=0}^{J}l_{ip}F_{i}\right)A\left(\sum_{j=0}^{J}m_{jq}^{*}F_{j}\right) \tag{21b}\]
\[=\sum_{p,q}b_{pq}L_{p}AM_{q}^{\dagger}\,, \tag{21c}\]

and, using Eq. (20):

\[\mathcal{E}^{\dagger}(A)=\sum_{i,j=0}^{J}\Big{[}\sum_{p,q}l_{ip}b_{pq}m_{jq}^{*}\Big{]}^{*}F_{i}AF_{j} \tag{22a}\]
\[=\sum_{p,q}b_{pq}^{*}\left(\sum_{i=0}^{J}l_{ip}^{*}F_{i}\right)A\left(\sum_{j=0}^{J}m_{jq}F_{j}\right) \tag{22b}\]
\[=\sum_{p,q}b_{pq}^{*}L_{p}^{\dagger}AM_{q}. \tag{22c}\]

**Proposition 2**.: \(\mathcal{E}\) _is simultaneously Hermiticity-preserving and Hermitian iff \(c\) in Eq. (14) is real-symmetric, i.e., \(c_{ij}=c_{ji}=c_{ij}^{*}\ \forall i,j\)._

Proof.: This follows immediately by equating the right-hand sides of Eqs. (18) and (20), both of which are then equal to \(\mathcal{E}(A)\) by Definition 3. 

**Proposition 3**.: \(\mathcal{E}\) _is Hermiticity-preserving iff \(\mathbf{\mathcal{E}}\in M(d^{2},\mathbb{R})\)._

Proof.: Using Eq. (13), we obtain:

\[\mathbf{\mathcal{E}}^{*}_{ij}=\operatorname{Tr}\left(\left[\mathcal{E}\left(F_{j}\right)\right]^{\dagger}F_{i}^{\dagger}\right)=\operatorname{Tr}\left(\mathcal{E}\left(F_{j}\right)F_{i}\right)=\mathbf{\mathcal{E}}_{ij}\;. \tag{23}\]

In the other direction, assume \(\mathbf{\mathcal{E}}\in M(d^{2},\mathbb{R})\). \(\mathcal{E}(A)\) is represented in a nice operator basis as

\[\mathcal{E}(A)=\sum_{i,j=0}^{J}\mathbf{\mathcal{E}}_{ij}\mathbf{A}_{j}F_{i}\;, \tag{24}\]

so that

\[\left[\mathcal{E}(A)\right]^{\dagger}=\left(\sum_{i,j=0}^{J}\mathbf{\mathcal{E}}_{ij}\mathbf{A}_{j}F_{i}\right)^{\dagger}=\sum_{i,j=0}^{J}\mathbf{\mathcal{E}}^{*}_{ij}\mathbf{A}^{*}_{j}F_{i}^{\dagger} \tag{25a}\]
\[=\sum_{i,j=0}^{J}\mathbf{\mathcal{E}}_{ij}\mathbf{A}^{*}_{j}F_{i}=\mathcal{E}(A^{\dagger})\;. \tag{25b}\]

Thus, after coordinatization, a Hermiticity-preserving superoperator \(\mathcal{E}\) can be seen as a real-valued \(d^{2}\times d^{2}\)-dimensional matrix \(\mathbf{\mathcal{E}}\).

### General properties of a Liouvillian

Since \(a\) in Eq. (1) is Hermitian, it can be written as \(a=u^{\dagger}\tilde{\gamma}u\), where \(u\) is unitary and \(\tilde{\gamma}\) is diagonal with the eigenvalues \(\{\gamma_{\alpha}\}_{\alpha=1}^{J}\) of \(a\) on its diagonal. The \(\gamma_{\alpha}\)'s are always real when Eq. (1) is a Markovian quantum master equation.
When \(a\geq 0\), i.e., when Eq. (1) is a Lindblad equation, they are non-negative. Defining

\[L_{\alpha}=\sum_{j=1}^{J}u^{*}_{\alpha j}F_{j}\;, \tag{26}\]

we have \(F_{i}=\sum_{\alpha=1}^{J}u_{\alpha i}L_{\alpha}\). Then:

\[\sum_{i,j=1}^{J}a_{ij}F_{i}\rho F_{j}^{\dagger}=\sum_{\alpha,\beta=1}^{J}L_{\alpha}\rho L_{\beta}^{\dagger}\sum_{i,j=1}^{J}u_{\alpha i}a_{ij}u^{*}_{\beta j}=\sum_{\alpha=1}^{J}\gamma_{\alpha}L_{\alpha}\rho L_{\alpha}^{\dagger}\;, \tag{27a}\]
\[\sum_{i,j=1}^{J}a_{ij}F_{j}^{\dagger}F_{i}=\sum_{\alpha,\beta=1}^{J}L_{\beta}^{\dagger}L_{\alpha}\sum_{i,j=1}^{J}u_{\alpha i}a_{ij}u^{*}_{\beta j}=\sum_{\alpha=1}^{J}\gamma_{\alpha}L_{\alpha}^{\dagger}L_{\alpha}\;. \tag{27b}\]

Therefore the Markovian quantum master equation [Eq. (1)] becomes:

\[\dot{\rho}=\mathcal{L}\rho=\mathcal{L}_{H}\rho+\mathcal{L}_{a}\rho \tag{28a}\]
\[\mathcal{L}_{H}[\cdot]=-i[H,\cdot]\;, \tag{28b}\]
\[\mathcal{L}_{a}[\cdot]=\sum_{\alpha=1}^{J}\gamma_{\alpha}\left(L_{\alpha}\cdot L_{\alpha}^{\dagger}-\frac{1}{2}\{L_{\alpha}^{\dagger}L_{\alpha},\cdot\}\right)\;. \tag{28c}\]

The \(L_{\alpha}\in\mathcal{B}(\mathcal{H})\) are called _Lindblad operators_. We remark that the representation of the dissipator \(\mathcal{L}_{a}\) in the form (28c) need not be unique: for example, if \(a=I\) (the identity matrix), it can be written as \(u^{\dagger}Iu\) for any unitary \(u\), but [as can be seen from Eq. (26)] different choices of \(u\) lead to different Lindblad operators. More generally, \(\tilde{\gamma}\) is unique only up to permutations, and any permutation redefines the Lindblad operators by modifying the diagonalizing unitary matrix \(u\).

**Proposition 4**.: _The Liouvillian is Hermiticity-preserving._

Proof.: To show this we can directly compare \([\mathcal{L}(X)]^{\dagger}\) with \(\mathcal{L}(X^{\dagger})\) using Eqs. (28b) and (28c). For the Hamiltonian part, we have the following:

\[[\mathcal{L}_{H}(X)]^{\dagger}=i(HX-XH)^{\dagger}=i(X^{\dagger}H^{\dagger}-H^{\dagger}X^{\dagger}) \tag{29a}\]
\[=-i(HX^{\dagger}-X^{\dagger}H)=\mathcal{L}_{H}(X^{\dagger})\;, \tag{29b}\]

while for the dissipative part,

\[[\mathcal{L}_{a}(X)]^{\dagger}=\sum_{\alpha=1}^{J}\gamma_{\alpha}\left((L_{\alpha}XL_{\alpha}^{\dagger})^{\dagger}-\frac{1}{2}\{L_{\alpha}^{\dagger}L_{\alpha},X\}^{\dagger}\right) \tag{30a}\]
\[=\sum_{\alpha=1}^{J}\gamma_{\alpha}\left((L_{\alpha}X^{\dagger}L_{\alpha}^{\dagger})-\frac{1}{2}\{L_{\alpha}^{\dagger}L_{\alpha},X^{\dagger}\}\right) \tag{30b}\]
\[=\mathcal{L}_{a}(X^{\dagger})\,. \tag{30c}\]

Thus,

\[\mathcal{L}\left(X^{\dagger}\right)=[\mathcal{L}\left(X\right)]^{\dagger}\;. \tag{31}\]

**Corollary 1**.: _The Markovian quantum master equation \(\dot{\rho}=\mathcal{L}\rho\) in a nice operator basis is a real-valued linear ordinary differential equation (ODE) for the vector \(\mathbf{\rho}\) whose coordinates are \(\mathbf{\rho}_{j}=\operatorname{Tr}(\rho F_{j})\):_

\[\dot{\mathbf{\rho}}=\mathbf{\mathcal{L}}\mathbf{\rho}\;,\quad\mathbf{\rho}\in\mathbb{R}^{d^{2}}\;,\quad\mathbf{\mathcal{L}}\in M(d^{2},\mathbb{R})\;. \tag{32}\]

Proof.: This follows directly from Propositions 1, 3 and 4 with \(X=\rho\) and \(\mathcal{E}=\mathcal{L}\). 

**Proposition 5**.: _The Liouvillian maps any operator \(X\) to a traceless operator. That is, for any operator \(X\)_

\[\operatorname{Tr}[\mathcal{L}(X)]=0. \tag{33}\]

Proof.: From Eq.
(1) we have

\[\operatorname{Tr}[\mathcal{L}_{H}(X)]=-i\operatorname{Tr}([H,X])=-i[\operatorname{Tr}(HX)-\operatorname{Tr}(XH)]=0, \tag{34}\]

and, using \(\operatorname{Tr}(AB)=\operatorname{Tr}(BA)\):

\[\operatorname{Tr}(\mathcal{L}_{a}(X))=\sum_{i,j=1}^{J}a_{ij}\operatorname{Tr}\left(F_{i}XF_{j}-\frac{1}{2}\left\{F_{j}F_{i},X\right\}\right)=\sum_{i,j=1}^{J}a_{ij}\operatorname{Tr}\left(F_{j}F_{i}X-\frac{1}{2}F_{j}F_{i}X-\frac{1}{2}F_{j}F_{i}X\right)=0. \tag{35}\]

**Proposition 6**.: _Suppose the dissipator \(\mathcal{L}_{a}\) is of the form (1c) with a Hermitian matrix \(a\). Then the following conditions are equivalent._

1. \(\mathcal{L}_{a}\) _is Hermitian._
2. \(\mathcal{L}_{a}\) _can be written in the form (28c) s.t._ \(\forall\alpha\ L_{\alpha}=L_{\alpha}^{\dagger}\)_._
3. \(a\) _is symmetric, i.e.,_ \(a=a^{T}\)_._
4. _All the matrix elements_ \(a_{ij}\) _are real._

Proof.: \(2\implies 1\). If all the Lindblad operators \(L_{\alpha}\) are Hermitian then \(\mathcal{L}_{a}^{\dagger}=\mathcal{L}_{a}\). Indeed, from Eqs. (22c) and (28c), for any operator \(A\) we have

\[\mathcal{L}_{a}^{\dagger}(A)=\sum_{\alpha=1}^{J}\gamma_{\alpha}\left(L_{\alpha}^{\dagger}AL_{\alpha}-\frac{1}{2}\{L_{\alpha}^{\dagger}L_{\alpha},A\}\right)=\mathcal{L}_{a}(A). \tag{36}\]

\(3\Leftrightarrow 4\). Conditions 3 and 4 mean that for each \(i,j\) we have \(a_{ji}=a_{ij}\) and \(a_{ji}=a_{ji}^{*}\), respectively. Since \(a=a^{\dagger}\), the right-hand sides of these two equations are equal for all \(i,j\).

\(3,4\implies 2\). We know that \(a\) is real-symmetric; hence it can be diagonalized by an orthogonal matrix. Therefore, \(u\) in Eq. (26) can be chosen to have real matrix elements. For that choice, we have \(\forall\alpha\ L_{\alpha}=L_{\alpha}^{\dagger}\).

\(1\implies 4\). According to Eq. (22c), \(\mathcal{L}_{a}^{\dagger}=\mathcal{L}_{a^{*}}\). From \(\mathcal{L}_{a}=\mathcal{L}_{a}^{\dagger}\) we have

\[\mathcal{L}_{a}=(\mathcal{L}_{a}+\mathcal{L}_{a^{*}})/2=\mathcal{L}_{\tilde{a}}, \tag{37}\]

where \(\tilde{a}=(a+a^{*})/2\) -- a matrix with real matrix elements. Later, in Theorem 2, we will show that \(a\) is uniquely determined by \(\mathcal{L}_{a}\); hence \(a=\tilde{a}\) and \(a=a^{*}\).

\(2\implies 3\). If \(L_{\alpha}=L_{\alpha}^{\dagger}\) then it can be expanded in the given basis \(\{F_{j}\}\) with real coefficients: \(L_{\alpha}=\sum_{j=1}^{J}w_{\alpha j}F_{j}\), with \(w_{\alpha j}\in\mathbb{R}\). Substituting this into Eq. (28c) we obtain Eq. (1c) with \(\tilde{a}=w^{T}\tilde{\gamma}w\) in place of \(a\) and \(\tilde{\gamma}=\text{diag}(\gamma_{1},\dots,\gamma_{J})\). Thus \(\tilde{a}=\tilde{a}^{T}\). Later, in Theorem 2 (see also Proposition 18 and Corollary 2), we will show that \(a\) is uniquely determined by \(\mathcal{L}_{a}\); hence, \(a=\tilde{a}\) and \(a=a^{T}\). 

By definition, the generator \(\mathcal{L}_{a}\) is unital if \(\mathcal{L}_{a}(I)=0\) [17] [a CPTP map is unital if it maps \(I\) to itself: \(\mathcal{E}(I)=I\)]. Another general property of the Liouvillian is the following:

**Proposition 7**.: _The Liouvillian is unital if any of the conditions in Proposition 6 are satisfied._

Proof.: It suffices to assume that \(a\) is symmetric; then using Eq. (1c):

\[\mathcal{L}_{a}(I)=\sum_{i,j=1}^{J}a_{ij}\big{(}F_{i}F_{j}-F_{j}F_{i}\big{)} \tag{38a}\]
\[=\sum_{i,j=1}^{J}a_{ij}F_{i}F_{j}-\sum_{i,j=1}^{J}a_{ji}F_{j}F_{i}=0. \tag{38b}\]

Note that the converse is false: unitality does not imply that \(a\) must be symmetric (or any of the other conditions in Proposition 6).
As a counterexample, consider any nice operator basis containing two or more commuting operators [e.g., the last two matrices in Eq. (130) below]. Then, if \(a\) contains a non-zero matrix element only between these two operators [e.g., \(a_{78}=1\), all the rest zero, for the nice operator basis in Eq. (130)], Eq. (38a) gives \(\mathcal{L}_{a}(I)=0\), but \(a\) is not symmetric.

### Markovian quantum master equation for the coherence vector: from \(H\) and \(a\) to \(G=Q+R\) and \(\tilde{c}\)

By Corollary 1, we can expand the density matrix in the nice operator basis as:

\[\rho=\vec{F}^{\prime}\cdot\mathbf{\rho}=\frac{1}{d}I+\vec{F}\cdot\vec{v}\,, \tag{39}\]

where \(\vec{F}^{\prime}=(F_{0},F_{1},\dots,F_{J})\), \(\mathbf{\rho}=(1/\sqrt{d},v_{1},\dots,v_{J})^{T}\), \(\vec{F}=(F_{1},\dots,F_{J})\) collects the traceless basis operators into a vector, and the corresponding coordinate vector \(\tilde{v}=(v_{1},\dots,v_{J})^{T}\in\mathbb{R}^{J}\) is called the _coherence vector_. We have:

\[v_{j}=\operatorname{Tr}(\rho F_{j})=\mathbf{\rho}_{j}. \tag{40}\]

We note that the conditions guaranteeing that a given vector represents a valid (i.e., non-negative) density matrix are non-trivial. A simple necessary condition follows from the condition that the purity \(\operatorname{Tr}(\rho^{2})\leq 1\):

\[\operatorname{Tr}(\rho^{2})=\operatorname{Tr}\left[\left(\frac{1}{d}I+\vec{F}\cdot\vec{v}\right)^{2}\right] \tag{41a}\]
\[=\frac{1}{d}+\sum_{i,j=1}^{J}\operatorname{Tr}(F_{i}F_{j})v_{i}v_{j}=\frac{1}{d}+\left\|\tilde{v}\right\|^{2}\,, \tag{41b}\]

i.e.,

\[\left\|\tilde{v}\right\|\leq\left(1-\frac{1}{d}\right)^{1/2}\,, \tag{42}\]

which is saturated for pure states. Consequently, as mentioned in Section I, \(G\)'s eigenvalues are constrained to have non-positive real parts. This is a necessary condition any candidate Eq. (3) must satisfy to represent a Lindblad equation. Additional inequalities have been derived in Ref. [18].

**Proposition 8**.: _The coherence vector satisfies Eq. (3). Moreover, the decomposition of \(\mathcal{L}\) as \(\mathcal{L}=\mathcal{L}_{H}+\mathcal{L}_{a}\) with \(\mathcal{L}_{H}\) and \(\mathcal{L}_{a}\) given by Eqs. (1b) and (1c) induces the decomposition of \(G\) in Eq. (3) into \(G=Q+R\), where \(\mathcal{L}_{H}[\rho]\sim Q\tilde{v}\), \(\mathcal{L}_{a}[\rho]\sim R\tilde{v}+\tilde{c}\), \(\tilde{c}\in\mathbb{R}^{J}\), \(R,Q\in M(J,\mathbb{R})\), and \(Q=-Q^{T}\) (antisymmetric)._

Proof.: After separating out \(v_{0}=\mathbf{\rho}_{0}=1/\sqrt{d}\), Eq. (32) becomes

\[\dot{v}_{i}=\sum_{j=1}^{J}\mathbf{\mathcal{L}}_{ij}v_{j}+\frac{1}{\sqrt{d}}\mathbf{\mathcal{L}}_{i0}\,,\quad 1\leq i\leq J. \tag{43}\]

This already establishes that \(\tilde{v}\) satisfies Eq. (2), with

\[Q_{ij}+R_{ij}\equiv\mathcal{L}_{ij}=\operatorname{Tr}[F_{i}\mathcal{L}(F_{j})]\equiv G_{ij} \tag{44a}\]
\[c_{j}\equiv\frac{1}{\sqrt{d}}\mathcal{L}_{j0}=\frac{1}{\sqrt{d}}(Q_{j0}+R_{j0})\,, \tag{44b}\]

for \(1\leq i,j\leq J\). To prove the remaining claims, recall that \(\mathcal{L}_{ij}=\mathcal{L}_{ij}^{*}\) by Corollary 1. Therefore, by linearity, for \(1\leq i,j\leq J\):

\[Q_{ij}\equiv\operatorname{Tr}[F_{i}\mathcal{L}_{H}(F_{j})]=Q_{ij}^{*} \tag{45a}\]
\[R_{ij}\equiv\operatorname{Tr}[F_{i}\mathcal{L}_{a}(F_{j})]=R_{ij}^{*}\,, \tag{45b}\]

i.e., \(R,Q\in M(J,\mathbb{R})\).
In addition,

\[Q_{ji}=\operatorname{Tr}[F_{j}\mathcal{L}_{H}(F_{i})]=-i\operatorname{Tr}\left(F_{j}[H,F_{i}]\right) \tag{46a}\]
\[=i\operatorname{Tr}\left(F_{i}[H,F_{j}]\right)=-\operatorname{Tr}[F_{i}\mathcal{L}_{H}(F_{j})]=-Q_{ij}\,, \tag{46b}\]

i.e., \(Q\) is antisymmetric. Moreover, by expanding \(H\) in the nice operator basis as

\[H=\sum_{m=0}^{J}h_{m}F_{m}\,, \tag{47}\]

we can write \(Q\)'s matrix elements as:

\[Q_{ij}=-i\operatorname{Tr}\left(F_{i}[H,F_{j}]\right) \tag{48a}\]
\[=-i\sum_{m=1}^{J}h_{m}\operatorname{Tr}\left(F_{i}[F_{m},F_{j}]\right). \tag{48b}\]

As for \(\bar{c}\), since \(F_{0}=I/\sqrt{d}\), we have

\[Q_{j0}=\frac{-i}{\sqrt{d}}\operatorname{Tr}\left(F_{j}[H,I]\right)=0, \tag{49}\]

so that for \(1\leq j\leq J\), Eq. (44b) is replaced by:

\[c_{j}=\frac{1}{d}\operatorname{Tr}[F_{j}\mathcal{L}_{a}(I)]=\frac{1}{\sqrt{d}}R_{j0}=c_{j}^{*}\,, \tag{50}\]

i.e., \(\bar{c}\in\mathbb{R}^{J}\). 

Let \(\operatorname{spec}(A)\) denote the spectrum of the operator \(A\), i.e., the set of its eigenvalues.

**Proposition 9**.:

\[\operatorname{spec}(\mathcal{L})=\{0\}\cup\operatorname{spec}(G). \tag{51}\]

Proof.: Note that, as follows from Proposition 5,

\[\mathcal{L}_{0j}=0\quad\text{i.e.,}\quad\mathcal{L}_{0}=\bar{0}. \tag{52}\]

Combining this with Eq. (45), we see that in the basis \(\{F_{j}\}_{j=0}^{J}\):

\[\mathcal{L}=\left(\begin{array}{cc}0&0\cdots 0\\ \sqrt{d}\bar{c}&G\end{array}\right). \tag{53}\]

Computing the spectrum of the superoperator \(\mathcal{L}\) is equivalent to finding the eigenvalues of \(\mathcal{L}\), which are the solutions of the characteristic equation

\[\det\left(\lambda I_{J+1}-\mathcal{L}\right)=\lambda\det\left(\lambda I_{J}-G\right)=0, \tag{54}\]

where \(I_{n}\) denotes the \(n\times n\) identity matrix. These solutions are \(\lambda=0\) and \(\operatorname{spec}(G)\). 

### When does \(\bar{c}\) vanish?

**Proposition 10**.: \(\bar{c}=\bar{0}\) _iff \(\mathcal{L}_{a}\) is unital._

Proof.: The proof is immediate from Eq. (53), which shows that \(\bar{c}\) is the vector representing the matrix \(\mathcal{L}_{a}(I)/d\). Thus \(\bar{c}=\bar{0}\) iff \(\mathcal{L}_{a}(I)=0\). A more explicit proof is the following calculation. First, if \(\mathcal{L}_{a}\) is unital then, using Eqs. (45b) and (50), we have:

\[c_{k}=\frac{1}{d}\operatorname{Tr}[F_{k}\mathcal{L}_{a}(I)]=0\quad\left(\mathcal{L}_{a}\text{ unital}\right). \tag{55}\]

On the other hand, if \(\bar{c}=0\),

\[\operatorname{Tr}[F_{k}\mathcal{L}_{a}(I)]=dc_{k}=0, \tag{56}\]

hence \(\mathcal{L}_{a}(I)=0\). 

Note that using Eq. (1c) we can write down the general formula for \(\bar{c}\)'s elements in a given nice operator basis:

\[c_{k}=\frac{1}{d}\sum_{i,j=1}^{J}a_{ij}\operatorname{Tr}\bigl{(}\bigl{[}F_{i},F_{j}\bigr{]}F_{k}\bigr{)}. \tag{57}\]

### When is \(R\) symmetric?

Here by \(R\) we mean the \(J\times J\) matrix with elements \(\{R_{kl}\}_{k,l=1}^{J}\). Using Eq. (45b):

\[R_{kl}=\operatorname{Tr}\bigl{[}F_{k}\mathcal{L}_{a}(F_{l})\bigr{]} \tag{58a}\]
\[=\sum_{i,j=1}^{J}a_{ij}\operatorname{Tr}\bigl{[}F_{k}\bigl{(}F_{i}F_{l}F_{j}-\frac{1}{2}\left\{F_{j}F_{i},F_{l}\right\}\bigr{)}\bigr{]}. \tag{58b}\]

Note that this is the general formula for \(R\)'s elements in a given nice operator basis. \(R\) is not symmetric in general. However, we have two special cases presented in the following Propositions.
**Proposition 11**.: \(R\) _is symmetric in the single qubit (\(d=2\)) case._

Proof.: By direct calculation, it follows that if we choose the traceless elements of the nice operator basis as the normalized Pauli matrices \(\{\sigma^{x},\sigma^{y},\sigma^{z}\}/\sqrt{2}\), then \(R_{kl}=R_{lk}\) for arbitrary \(a\). An alternative way to see this is from Corollary 2 below: the dimension of the space of possible antisymmetric components of \(R\) is \(J(J-3)/2\), which is \(0\) iff \(J\in\{0,3\}\), i.e., when \(d\in\{1,2\}\).

**Proposition 12**.: _The following statements are equivalent:_

1. \(R\) _is symmetric and_ \(\bar{c}=\bar{0}\)_;_
2. _Any of the conditions in Proposition 6 are satisfied._

Proof.: If \(\mathcal{L}_{a}^{\dagger}=\mathcal{L}_{a}\) (one of the four equivalent conditions in Proposition 6) then, using Eq. (45b):

\[R_{lk}=\langle F_{l},\mathcal{L}_{a}(F_{k})\rangle=\langle\mathcal{L}_{a}(F_{l}),F_{k}\rangle \tag{59a}\]
\[=\langle F_{k},\mathcal{L}_{a}(F_{l})\rangle^{*}=R_{kl}^{*}=R_{kl}\;, \tag{59b}\]

where we also used that the matrix elements of \(R\) are real (Proposition 8). This proves that \(2\implies 1\). Similarly,

\[c_{k}=\frac{1}{d}\left\langle F_{k},\mathcal{L}_{a}(I)\right\rangle=\frac{1}{d}\left\langle\mathcal{L}_{a}(F_{k}),I\right\rangle \tag{60a}\]
\[=\frac{1}{d}\operatorname{Tr}[\mathcal{L}_{a}(F_{k})]^{*}=0, \tag{60b}\]

where in the last equality we used Proposition 5. The proof that \(1\implies 2\) is almost identical:

\[\langle F_{l},\mathcal{L}_{a}(F_{k})\rangle=R_{lk}=R_{kl}=R_{kl}^{*}=\langle F_{k},\mathcal{L}_{a}(F_{l})\rangle^{*} \tag{61a}\]
\[=\langle\mathcal{L}_{a}(F_{l}),F_{k}\rangle\;, \tag{61b}\]

and

\[\langle F_{k},\mathcal{L}_{a}(F_{0})\rangle=\sqrt{d}c_{k}=0=\langle\mathcal{L}_{a}(F_{k}),F_{0}\rangle\;, \tag{62}\]

which means \(\mathcal{L}_{a}^{\dagger}=\mathcal{L}_{a}\).

In particular, \(a=a^{T}\) (i.e., \(a\) real-symmetric, since \(a\) is Hermitian) implies that \(R\) is symmetric. Note that, consistent with the comment following Proposition 7, the converse is false, i.e., \(R\) being symmetric does not imply that \(a\) is real-symmetric: any \(a\) yielding a symmetric \(R\) and non-zero \(\bar{c}\) is a counterexample, since, as we describe in Theorem 2, from any such pair \((R,\bar{c})\) one can recover the corresponding \(a\) via Eq. (5b). For a specific counterexample, see Section V.1.2.

According to Corollary 2 below, the dimension of the space of possible antisymmetric components of \(R\) is \(J(J-3)/2\), which is non-zero if and only if \(d\geq 3\). Thus, for \(d\geq 3\) the matrix \(R\) is not always symmetric. An explicit example of this for \(d=3\) is the following. Consider a qutrit subject to amplitude damping (spontaneous emission) involving just two of the three levels:

\[a=\left(\begin{array}{ccc}1&-i\\ i&1\\ &&0_{6\times 6}\end{array}\right)\;, \tag{63}\]

which is a non-negative matrix with eigenvalues \(0\) (7-fold degenerate) and \(2\). Note that in all the examples given in this work, we use a specific choice of a nice operator basis \(\{F_{i}\}\): the generalized Gell-Mann matrices normalized to satisfy the normalization condition Eq. (4); see Section III.7 and Ref. [19].
By Eq. (58) we have

\[R=\left(\begin{array}{cccccccc}-\frac{1}{4}&0&0&0&-\frac{1}{4}&0&0&0\\ 0&-\frac{1}{4}&0&\frac{1}{4}&0&0&0&0\\ 0&0&-\frac{1}{2}&0&0&0&0&0\\ 0&-\frac{3}{4}&0&-\frac{5}{4}&0&0&0&0\\ \frac{3}{4}&0&0&0&-\frac{5}{4}&0&0&0\\ 0&0&0&0&0&-\frac{1}{2}&\frac{1}{4}&\frac{1}{4\sqrt{3}}\\ 0&0&0&0&0&-\frac{3}{4}&-\frac{5}{4}&-\frac{\sqrt{3}}{4}\\ 0&0&0&0&0&-\frac{\sqrt{3}}{4}&-\frac{\sqrt{3}}{4}&-\frac{3}{4}\end{array}\right)\, \tag{64}\]

which is not symmetric. We revisit this example in Section V.2.1.

### Properties of the linear map \(a\mapsto(R,\vec{c})\)

Consider a map \(\mathcal{F}:a\mapsto(R,\vec{c})\) defined by Eqs. (57) and (58). This is a linear map of real vector spaces \(\mathcal{F}:M_{\text{sa}}(J,\mathbb{C})\to M(J,\mathbb{R})\oplus\mathbb{R}^{J}\), where \(M_{\text{sa}}(J,\mathbb{C})\) is the \(\mathbb{R}\)-vector space of Hermitian (or self-adjoint; hence the "sa" subscript) \(J\times J\) matrices over \(\mathbb{C}\). Then, as will follow later from Theorem 2 and Proposition 14, this map is injective, i.e., \(a\neq a^{\prime}\implies\mathcal{F}(a)\neq\mathcal{F}(a^{\prime})\). In other words, if the pairs \((R,\vec{c})\) and \((R^{\prime},\vec{c}^{\prime})\) are equal, then the corresponding rate matrices \(a\) and \(a^{\prime}\) are also equal. A direct way to prove the injectivity of \(\mathcal{F}\), which does not rely on Theorem 2 (but in part uses similar ideas), is presented in Appendix B.

The following Lemma, together with Corollary 2 below, describes the image of \(\mathcal{F}\).

**Lemma 1**.: _For every real symmetric matrix \(R_{\text{sym}}\): \((R_{\text{sym}},\vec{0})\in\text{Im}(\mathcal{F})\). In particular, \((R_{\text{sym}},\vec{0})\) is the image of some real-valued symmetric \(a\)._

Proof.: The subspace of real-valued symmetric \(a\) in \(M_{\text{sa}}(J,\mathbb{C})\) is of dimension \(J(J+1)/2\). We know by Proposition 12 that \(\mathcal{F}\) maps real-symmetric \(a\) to \(\tilde{c}=\vec{0}\) and real-symmetric \(R\). The subspace \(V=\{(R_{\text{sym}},\vec{0})\}\) has dimension \(J(J+1)/2\) in the codomain of \(\mathcal{F}\). Since, as explained above, \(\mathcal{F}\) is injective, the image of the set of all real-symmetric \(a\) is the subspace \(V\).

### Properties of a nice operator basis

In this subsection, we define structure constants corresponding to a nice operator basis and provide alternative forms of Eqs. (48b) and (57) using these structure constants. We note that the structure constants are often explicitly defined for the case of (normalized and generalized) Gell-Mann matrices [20]. As mentioned above, this is our choice in all the examples in this work. However, the theory presented here applies to any choice of a nice operator basis \(\{F_{i}\}_{i=0}^{J}\) (see Definition 1). For any such choice, the elements \(\{F_{i}\}_{i=1}^{J}\) form a generator set of the Lie algebra \(\mathfrak{su}(d)\).

Footnote 4: Not every generator set of a Lie algebra is a nice operator basis. For example, the set \(\{\sigma^{x},\sigma^{y},\sigma^{z}\}\) is a generator set of \(\mathfrak{su}(2)\) but it violates the normalization conditions Eq. (4); indeed, with this choice we have \(\{F_{i},F_{j}\}=2\delta_{ij}I\).

We can define structure constants \(f_{ijk}\) via

\[[F_{i},F_{j}]=i\sum_{k=1}^{J}f_{ijk}F_{k}=if_{ijk}F_{k}\;, \tag{65}\]

where in the second equality we used the Einstein summation convention of summing over repeated indices, which we use henceforth when convenient.
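As a quick numerical illustration of this definition (a sketch of ours, assuming NumPy; the basis and its ordering match the qutrit basis of Eq. (130) below), the structure constants can be extracted by tracing Eq. (65) against \(F_{k}\), i.e., \(f_{ijk}=-i\operatorname{Tr}([F_{i},F_{j}]F_{k})\):

```python
import numpy as np
from itertools import product

# Normalized Gell-Mann matrices for d = 3, ordered as in Eq. (130):
# three symmetric, three antisymmetric, two diagonal.
d = 3
def E(i, j):
    m = np.zeros((d, d), dtype=complex); m[i, j] = 1; return m

F = [(E(0,1)+E(1,0))/np.sqrt(2), (E(0,2)+E(2,0))/np.sqrt(2),
     (E(1,2)+E(2,1))/np.sqrt(2),
     (-1j*E(0,1)+1j*E(1,0))/np.sqrt(2), (-1j*E(0,2)+1j*E(2,0))/np.sqrt(2),
     (-1j*E(1,2)+1j*E(2,1))/np.sqrt(2),
     (E(0,0)-E(1,1))/np.sqrt(2), (E(0,0)+E(1,1)-2*E(2,2))/np.sqrt(6)]
J = len(F)  # J = d^2 - 1 = 8

# f_ijk = -i Tr([F_i, F_j] F_k); analytically real for a Hermitian basis.
f = np.zeros((J, J, J))
for i, j, k in product(range(J), repeat=3):
    comm = F[i] @ F[j] - F[j] @ F[i]
    f[i, j, k] = np.real(-1j * np.trace(comm @ F[k]))

# Consistency check of Eq. (65): [F_i, F_j] == i sum_k f_ijk F_k.
ok = all(np.allclose(F[i] @ F[j] - F[j] @ F[i],
                     1j * sum(f[i, j, k] * F[k] for k in range(J)))
         for i, j in product(range(J), repeat=2))
print(ok)  # True
```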
The structure constants are totally antisymmetric, i.e., \(f_{jkl}=-f_{kjl}=f_{klj}\), and themselves satisfy a type of orthogonality relation [21; 22; 23]:

\[f_{jkl}f_{jkm}=2d\delta_{lm}. \tag{66}\]

Using Eqs. (4) and (65) we can then further simplify Eq. (57):

\[c_{k}=\frac{1}{d}ia_{ij}f_{ijk}. \tag{67}\]

We may also derive an explicit expression relating the coordinates \(h_{m}\) of the Hamiltonian in the expansion (47) to the matrix elements of \(Q\). Namely, inserting Eq. (65) into Eq. (48), we have:

\[Q_{jk}=-ih_{l}\operatorname{Tr}\bigl{(}F_{j}[F_{l},F_{k}]\bigr{)}=-ih_{l}if_{lkj}=-f_{jkl}h_{l}. \tag{68}\]

On the other hand, using

\[f_{jkm}Q_{jk}=-f_{jkl}f_{jkm}h_{l}=-2d\delta_{lm}h_{l}=-2dh_{m}\, \tag{69}\]

we have

\[h_{m}=-\frac{1}{2d}f_{jkm}Q_{jk}. \tag{70}\]

This implies that \(H\) can be computed given \(Q\) as

\[H=-\frac{1}{2d}f_{jkm}Q_{jk}F_{m}. \tag{71}\]

We discuss the consistency between this expression and Eq. (5a) in Proposition 15.

## IV Solution of the inverse problem

We now set up the necessary mathematical framework to solve the inverse problem in full generality. Since some of the results after Proposition 5 used Theorem 2 and its corollaries, in this section we only use the results up to (and including) Proposition 5, to avoid any circular references.

### Forward and inverse transformations

For a field \(F\in\{\mathbb{R},\mathbb{C}\}\) let \(M_{0}(d,F)\) denote the subspace of traceless matrices. Let \(M_{\text{sa}}(d,F)\) denote the \(\mathbb{R}\)-subspace of Hermitian matrices. E.g., \(M_{\text{sa}}(d,\mathbb{R})\) is a vector space of real-valued \(d\times d\) symmetric matrices. Finally, let \(M_{0,\text{sa}}(d,F)\) be the \(\mathbb{R}\)-subspace of matrices which are both traceless and Hermitian.

**Theorem 2**.: _For any nice operator basis \(\{F_{n}\}_{n=1}^{J}\) there is a linear bijective correspondence between the following objects._

1. _Pairs_ \((H,a)\) _where_ \(H\in M_{0,\text{sa}}(d,\mathbb{C})\)_,_ \(a\in M_{\text{sa}}(J,\mathbb{C})\)_._
2. \(d\times d\times d\times d\) _tensors_ \(x=\{x_{ijkl}\}_{i,j,k,l=1..d}\) _over_ \(\mathbb{C}\) _satisfying_ \[x_{ijkl}=x_{lkji}^{*},\quad\sum_{k=1}^{d}(x_{ijkk}+x_{kkij})=0\.\] (72)
3. \(d\times d\times d\times d\) _tensors_ \(\tilde{x}=\{\tilde{x}_{ijkl}\}_{i,j,k,l=1..d}\) _over_ \(\mathbb{C}\) _satisfying_ \[\tilde{x}_{ijkl}=\tilde{x}_{lkji}^{*}\,\quad\sum_{i=1}^{d}\tilde{x}_{ijki}=0\.\] (73)
4. _Linear Hermiticity-preserving superoperators_ \(\mathcal{L}\) _acting on_ \(M(d,\mathbb{C})\) _satisfying_ \[\mathcal{L}\left(X^{\dagger}\right)=\left[\mathcal{L}(X)\right]^{\dagger}\,\quad\operatorname{Tr}[\mathcal{L}(X)]=0\] (74) _for each_ \(X\in M(d,\mathbb{C})\)_._
5. _Linear superoperators_ \[\mathcal{L}:M_{\text{sa}}(d,\mathbb{C})\to M_{0,\text{sa}}(d,\mathbb{C})\.\] (75)
6.
_Pairs_ \((G,\vec{c})\) _where_ \(G\in M(J,\mathbb{R})\)_,_ \(c\in\mathbb{R}^{J}\)_._ _The corresponding transformations are given as follows:_ * \(1\to 2\)_:_ \[x_{ijkl}=-iH_{ij}\delta_{kl}+i\delta_{ij}H_{kl}+\sum_{m,n=1}^{J}a_{mn} (F_{m})_{ij}(F_{n})_{kl}\.\] (76) * \(2\to 1\)_:_ \[H_{ij}=\frac{1}{2id}\sum_{k=1}^{d}(x_{kkij}-x_{ijkk})\,\] (77a) \[a_{mn}=\sum_{i,j,k,l=1}^{d}(F_{m})_{ji}x_{ijkl}(F_{n})_{lk}\.\] (77b) * \(2\to 4,5\)_:_ \[[\mathcal{L}(X)]_{il}=\sum_{j,k=1}^{d}(x_{ijkl}X_{jk}-\frac{1}{2}x_{jkij}X_{ kl}-\frac{1}{2}X_{ij}x_{kljk})\.\] (78) * \(4\to 5\) _is simply the restriction to Hermitian operators_ \(X\)_._ * \(5\to 4\)_:_ \[\mathcal{L}(X)=\mathcal{L}[\operatorname{Re}(X)]+i\mathcal{L}[ \operatorname{Im}(X)]\,\] (79) _where_ \[\operatorname{Re}(X)=\frac{X+X^{\dagger}}{2}\,\qquad\operatorname{Im}(X)= \operatorname{Re}(X/i)=\frac{X-X^{\dagger}}{2i}\.\] (80) * \(1\to 4\) _is given by Eqs. (_1_)-(_1_)._ * \(5\to 6\): \[G_{nm} =\operatorname{Tr}[F_{n}\mathcal{L}(F_{m})]\;,\] (81a) \[c_{n} =\frac{1}{\sqrt{d}}\operatorname{Tr}[F_{n}\mathcal{L}(F_{0})]\;.\] (81b) * \(6\to 4,5\): \[\mathcal{L}(X)=\sum_{n=1}^{J}\left(\sum_{m=1}^{J}G_{nm}\operatorname{Tr}(F_{m}X) +c_{n}\operatorname{Tr}(X)\right)F_{n}\;.\] (82) * \(4\to 3\): \[\tilde{x}_{ijkl}=\left[\mathcal{L}([j\rangle\!\langle k|])\right]_{il}\;.\] (83) * \(3\to 1\) is given by Eq. (77a) and Eq. (77b) with \(\tilde{x}\) in place of \(x\). * \(3\to 4\) is given by Eq. (78) with \(\tilde{x}\) in place of \(x\). * \(3\to 2\) is given by \[x_{ijkl}=\tilde{x}_{ijkl}-b_{ij}\delta_{kl}-\delta_{ij}b_{kl}\;,\] (84) where \[b_{ij}=\frac{1}{2d}\sum_{k=1}^{d}\left(\tilde{x}_{ijkk}+\tilde{x}_{kkij}- \frac{1}{d}\sum_{l=1}^{d}\tilde{x}_{llkk}\delta_{ij}\right)\;.\] (85) We denote the spaces in Theorem 2 as \(\mathcal{V}_{1},\ldots,\mathcal{V}_{6}\) and the corresponding maps as \(\varphi_{ij}:\mathcal{V}_{j}\rightarrow\mathcal{V}_{i}\). Theorem 2 and Lemma 2 below imply that the diagram in Fig. 1 is commutative and all its arrows are \(\mathbb{R}\)-linear bijections (given by the formulas in Theorem 2 and Lemma 2). Once the theorem is proven, we will use the maps \(\varphi_{ij}\) for any indices \(i,j=1,\ldots,6\). Such maps are defined as the compositions of the maps in Fig. 1; any such composition will provide the same result due to the commutativity of Fig. 1. Proof.: First, note that the dimension of each of the \(\mathbb{R}\)-vector spaces \(\mathcal{V}_{i}\) is \((d^{2}-1)d^{2}=J(J+1)\). We would like to prove that all maps are well defined and that the diagram is commutative, i.e., if we start at any object in any node and apply the maps in the diagram until we end up in the same node, we will have obtained the same object as the one we started with. For \(\varphi_{21}\):\(\mathcal{V}_{1}\rightarrow\mathcal{V}_{2}\) we need to show that the right-hand side of Eq. (76) satisfies Eq. (72). We check this by direct computation: \[x^{*}_{lkji} =iH^{*}_{lk}\delta_{ji}-i\delta_{lk}H^{*}_{ji}+\sum_{m,n=1}^{J}a ^{*}_{mn}(F_{m})^{*}_{lk}(F_{n})^{*}_{ji}\] \[=-iH_{ij}\delta_{kl}+i\delta_{ij}H_{kl}+\sum_{m,n=1}^{J}a_{mn}(F_ {m})_{ij}(F_{n})_{kl}\] \[=x_{ijkl}\;. \tag{86}\] Here we used that \(H,a,\{F_{n}\}\) are Hermitian, swapped the order of two terms with \(H\), and renamed the indices \(n\leftrightarrow m\) in the summation. To check the second property of \(x\) we note that the \(\{F_{n}\}\) are traceless. Hence, the second term in Eq. 
(76) does not contribute and \[\sum_{k=1}^{d}(x_{ijkk}+x_{kkij})= \tag{87}\] \[\sum_{k=1}^{d}\Bigl{(}-iH_{ij}\delta_{kk}+i\delta_{ij}H_{kk}-iH_{ kk}\delta_{ij}+i\delta_{kk}H_{ij}\Bigr{)}=0\;.\] For \(\varphi_{12}\):\(\mathcal{V}_{2}\rightarrow\mathcal{V}_{1}\) the fact that \(H\) and \(a\) are Hermitian follows from the first property of \(x\) in Eq. (72) and Eqs. (77a) and (77b) for \(H\) and \(a\). The trace of the right-hand side of Eq. (77a) evaluates to \(0\). If we start with \((H,a)\) and apply the maps \(\mathcal{V}_{1}\rightarrow\mathcal{V}_{2}\rightarrow\mathcal{V}_{1}\) given by Eqs. (76), (77a) and (77b) we obtain some \((H^{\prime},a^{\prime})=\varphi_{12}(\varphi_{21}((H,a)))\). Let us prove that these match the original \((H,a)\), i.e., that \(\varphi_{12}\circ\varphi_{21}=\operatorname{id}_{\mathcal{V}_{1}}\) Since \(F_{m}\) and \(F_{n}\) are traceless, the term corresponding to \(a\) in the expression for \(H^{\prime}\) is zero. Using the fact that \(\operatorname{Tr}(H)=0\) and \(\operatorname{Tr}(I)=d\) we obtain \[H^{\prime}_{ij} =\frac{1}{2id}\sum_{k=1}^{d}(x_{kkij}-x_{ijkk}) \tag{88a}\] \[=\frac{1}{2id}\Bigl{(}-i\operatorname{Tr}(H)\delta_{ij}+i \operatorname{Tr}(I)H_{ij}+iH_{ij}\operatorname{Tr}(I)\] \[\qquad-i\delta_{ij}\operatorname{Tr}(H)\Bigr{)}\] (88b) \[=\frac{1}{2id}\left(idH_{ij}+iH_{ij}d\right)=H_{ij}\;. \tag{88c}\] Similarly, we can observe that the term with \(H\) from Eq. (76) does not contribute to \(a^{\prime}\) since the \(\{F_{n}\}\) are traceless. Using the fact that the \(\{F_{m}\}\) are Hermitian and form an orthonormal basis we obtain \[a^{\prime}_{mn}=\sum_{i,j=1}^{J}a_{ij}\operatorname{Tr}(F_{m}F_{i}) \operatorname{Tr}(F_{n}F_{j})=a_{mn}\;. \tag{89}\] Figure 1: A commutative diagram representing the transformations between the various spaces described in Theorem 2 (solid lines) and Lemma 2 (dotted lines). These transformations capture the equivalence between the algebraic objects appearing in Markovian quantum master equations and first-order differential equations. The maps to \(\mathcal{V}_{5}\) from \(\mathcal{V}_{2}\) and \(\mathcal{V}_{6}\) are represented by dashed lines as they are represented by the same formulas (78) and (82) as the corresponding maps to \(\mathcal{V}_{4}\): indeed, these maps to \(\mathcal{V}_{5}\) trivially coincide with the composition of the corresponding maps to \(\mathcal{V}_{4}\) with the restriction \(\mathcal{V}_{4}\rightarrow\mathcal{V}_{5}\). It follows that \(\varphi_{21}\circ\varphi_{12}=\mathrm{id}_{\mathcal{V}_{2}}\). Indeed, it is a general fact that a left inverse of a linear map between 2 vector spaces of the same dimension is necessarily its right inverse as well [24, Proposition 1.20]. We also use this argument for other loops, proving the commutativity of each loop in the diagram for only a single starting point. By direct comparison of Eqs. (1a)-(1c) with Eq. (76) and Eq. (78) one can conclude that they yield the same operator \(\mathcal{L}\). To check that \(\varphi_{41}\) is well defined, one would need to check that the operator \(\mathcal{L}\) defined by Eqs. (1a)-(1c) satisfies Eq. (74). This was already done in Proposition 4 and Proposition 5. Therefore, \(\varphi_{42}\circ\varphi_{21}=\varphi_{41}\). Note, that \(\varphi_{42}\) is well defined because for any \(x\in\mathcal{V}_{2}\) we have \[\varphi_{42}(x)=(\varphi_{42}\circ\varphi_{21}\circ\varphi_{12})(x)=\varphi_{4 1}(\varphi_{12}(x)). 
\tag{90}\]

The maps \(\varphi_{45},\,\varphi_{54}:\mathcal{V}_{5}\leftrightarrow\mathcal{V}_{4}\) are clearly well-defined. Also \(\varphi_{54}\circ\varphi_{45}=\mathrm{id}_{\mathcal{V}_{5}}\) and, hence, \(\varphi_{45}\circ\varphi_{54}=\mathrm{id}_{\mathcal{V}_{4}}\).

The fact that the elements of \(G\) and \(\tilde{c}\) are real follows from Eq. (81), the fact that the \(\{F_{n}\}\) form an orthonormal basis, and the fact that \(\mathcal{L}\) maps Hermitian operators to Hermitian operators. Let us check that the composition \(\varphi_{46}\circ\varphi_{65}\circ\varphi_{54}=\mathrm{id}_{\mathcal{V}_{4}}\). Denote the resulting operator given by Eq. (82) as \(\mathcal{L}^{\prime}=(\varphi_{46}\circ\varphi_{65}\circ\varphi_{54})(\mathcal{L})\). Since \(\{F_{n}\}_{n=0,\ldots,J}\) form a basis in \(M(d,\mathbb{C})\) and both \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) are \(\mathbb{C}\)-linear, it is sufficient to check that \(\mathcal{L}(F_{l})=\mathcal{L}^{\prime}(F_{l})\) for \(l=0,\ldots,J\). For \(l=0\) we have, using Eq. (82):

\[\mathcal{L}^{\prime}(F_{0})=\sum_{n=1}^{J}c_{n}\operatorname{Tr}(I/\sqrt{d})F_{n}=\sum_{n=1}^{J}c_{n}\sqrt{d}F_{n} \tag{91a}\]
\[=\sum_{n=1}^{J}\operatorname{Tr}[F_{n}\mathcal{L}(F_{0})]F_{n}=\mathcal{L}(F_{0}). \tag{91b}\]

And for \(l>0\) we have:

\[\mathcal{L}^{\prime}(F_{l})=\sum_{n,m=1}^{J}G_{nm}\operatorname{Tr}(F_{m}F_{l})F_{n} \tag{92a}\]
\[=\sum_{n=1}^{J}G_{nl}F_{n}=\sum_{n=1}^{J}\operatorname{Tr}[F_{n}\mathcal{L}(F_{l})]F_{n} \tag{92b}\]
\[=\mathcal{L}(F_{l}). \tag{92c}\]

Here we again used that \(\{F_{n}\}_{n=0,\ldots,J}\) are Hermitian and form an orthonormal basis, and that \(\operatorname{Tr}[F_{0}\mathcal{L}(X)]=0\) for any \(X\) [Eq. (74)].

Now consider the map \(\varphi_{34}:\mathcal{L}\mapsto\tilde{x}\). Eq. (73) follows from Eq. (74) applied to \(X=|j\rangle\!\langle k|\). Let us prove that \(\varphi_{23}\circ\varphi_{34}\circ\varphi_{42}=\mathrm{id}_{\mathcal{V}_{2}}\). Start with \(x\in\mathcal{V}_{2}\) and apply the above maps to obtain \(\mathcal{L}=\varphi_{42}(x)\in\mathcal{V}_{4}\), \(\tilde{x}=\varphi_{34}(\mathcal{L})\in\mathcal{V}_{3}\), \(x^{\prime}=\varphi_{23}(\tilde{x})\in\mathcal{V}_{2}\). From Eqs. (78) and (83) we have:

\[\tilde{x}_{ijkl}=x_{ijkl}-\frac{1}{2}\sum_{n=1}^{d}(x_{njin}\delta_{lk}+x_{nlkn}\delta_{ij}) \tag{93a}\]
\[=x_{ijkl}+\tilde{b}_{ij}\delta_{kl}+\delta_{ij}\tilde{b}_{kl}\, \tag{93b}\]

where

\[\tilde{b}_{ij}=-\frac{1}{2}\sum_{n=1}^{d}x_{njin}. \tag{94}\]

In order to compute \(x^{\prime}=\varphi_{23}(\tilde{x})\) we compute \(b\) given by Eq. (85) by substituting Eq. (93b) into Eq. (85) and using Eq. (72):

\[b_{ij}=\frac{1}{2d}\sum_{k=1}^{d}\left(x_{ijkk}+\tilde{b}_{ij}\delta_{kk}+\delta_{ij}\tilde{b}_{kk}+x_{kkij}+\tilde{b}_{kk}\delta_{ij}+\delta_{kk}\tilde{b}_{ij}-\frac{1}{d}\sum_{l=1}^{d}(x_{kkll}+2\tilde{b}_{kk}\delta_{ll})\delta_{ij}\right) \tag{95a}\]
\[=\tilde{b}_{ij}+(1-1)\frac{1}{d}\delta_{ij}\sum_{k=1}^{d}\tilde{b}_{kk}=\tilde{b}_{ij}. \tag{95b}\]

Therefore, Eq. (84) subtracts the same term as the one added in Eq. (93). Hence, \(x=x^{\prime}\). Note that the computation we performed also implies that \(\varphi_{23}\) is well-defined, by a similar argument as used for \(\varphi_{42}\) above.

Finally, concerning \(\varphi_{13}\) and \(\varphi_{43}\), one can check that the term of the form

\[\tilde{b}_{ij}\delta_{kl}+\delta_{ij}\tilde{b}_{kl}=b_{ij}\delta_{kl}+\delta_{ij}b_{kl}\, \tag{96}\]

which appears in Eq. (84), evaluates to \(0\) when substituted into Eqs.
(77a), (77b), and (78). Specifically, \[H_{ij} =\frac{1}{2id}\sum_{k=1}^{d}\left[(\tilde{x}_{kkij}-b_{kk}\delta_ {ij}-b_{ij}\delta_{kk})\right.\] \[\quad-(\tilde{x}_{ijkk}-b_{ij}\delta_{kk}-\delta_{ij}b_{kk})\right] \tag{97a}\] \[=\frac{1}{2id}\sum_{k=1}^{d}(\tilde{x}_{kkij}-\tilde{x}_{ijkk}). \tag{97b}\] Substituting just the term \(b_{ij}\delta_{kl}+\delta_{ij}b_{kl}\) into Eq. (77b), we have: \[\sum_{i,j,k,l=1}^{d}(F_{m})_{ji}(b_{ij}\delta_{kl}+\delta_{ij}b_{ kl})(F_{n})_{lk} \tag{98a}\] \[=\sum_{i,j=1}^{d}b_{ij}\big{[}(F_{m})_{ji}\operatorname{Tr}(F_{n} )+(F_{n})_{ji}\operatorname{Tr}(F_{m})\big{]}\] (98b) \[=0\, \tag{98c}\] which means that \[a_{mn}=\sum_{i,j,k,l=1}^{d}(F_{m})_{ji}\tilde{x}_{ijkl}(F_{n})_{lk}. \tag{99}\] Likewise, substituting the same Eq. (96) term into Eq. (78), we have: \[\sum_{j,k=1}^{d}\left[b_{ij}\delta_{kl}X_{jk}+\delta_{ij}b_{kl}X_{jk}\right.\] \[\quad-\frac{1}{2}(b_{jk}\delta_{ij}X_{kl}+\delta_{jk}b_{ij}X_{kl})\] \[\quad-\frac{1}{2}(X_{ij}b_{kl}\delta_{jk}+X_{ij}\delta_{kl}b_{ jk})\big{]} \tag{100a}\] \[=\frac{1}{2}\sum_{j=1}^{d}(2b_{ij}X_{jl}+2b_{jl}X_{ij}-b_{ij}X_{jl}-b_{ ij}X_{jl}\] (100b) \[\quad-X_{ij}b_{jl}-X_{ij}b_{jl})\] \[=0\, \tag{100c}\] which means that \[[\mathcal{L}(X)]_{il}=\sum_{j,k=1}^{d}(\tilde{x}_{ijkl}X_{jk}-\frac{1}{2}\tilde{x }_{jkij}X_{kl}-\frac{1}{2}X_{ij}\tilde{x}_{kljk})\;. \tag{101}\] This proves the remaining statements \(\varphi_{43}\circ\varphi_{34}=\mathrm{id}_{\mathcal{V}_{4}}\) and \(\varphi_{13}\circ\varphi_{34}\circ\varphi_{41}=\mathrm{id}_{\mathcal{V}_{1}}\). ### Explicit formulas for \(H\) and \(a\) from \(G\) and \(\bar{c}\) We are now finally prepared to complete the solution of the inverse problem and provide explicit formulas for \(H\) and \(a\) given \(G\) and \(\bar{c}\). We first solve the problem more generally: assume we have \((G,\bar{c})\in\mathcal{V}_{6}\) and would like to know the corresponding object from one of \(\mathcal{V}_{1},\ldots,\mathcal{V}_{5}\). If we are interested in \(\mathcal{L}\in\mathcal{V}_{4}\) or \(\mathcal{L}\in\mathcal{V}_{5}\) we could directly apply Eq. (82) from Theorem 2. For \(\mathcal{V}_{1}\), \(\mathcal{V}_{2}\), or \(\mathcal{V}_{3}\) we would, however, have to compose multiple maps from that theorem. Here we provide explicit formulas for these compositions. **Lemma 2**.: _Suppose we have \((G,\bar{c})\in\mathcal{V}_{6}\). Then the objects from \(\mathcal{V}_{1},\mathcal{V}_{2},\mathcal{V}_{3}\) corresponding to \((G,\bar{c})\) in the sense of Theorem 2 are given by the following formulas:_ * \((H,a)\in\mathcal{V}_{1}\) _are given by Eqs. (_5a_) and (_5b_)._ * \(\tilde{x}\in\mathcal{V}_{3}\) _is given by_ \[\tilde{x}_{ijkl}=\sum_{n=1}^{J}\left(\sum_{m=1}^{J}G_{nm}(F_{m})_{kj}+c_{n} \delta_{kj}\right)(F_{n})_{il}.\] (102) * \(x\in\mathcal{V}_{2}\) _is given by Eq. (_84_) with_ \(b\) _given by_ \[b=\frac{1}{2d}\sum_{n=1}^{J}\left(\sum_{m=1}^{J}G_{nm}(\{F_{m},F_{n}\}- \delta_{mn}I/d)+2c_{n}F_{n}\right).\] (103) Proof.: To simplify the computation, we introduce \[\tilde{G}_{n}=\sum_{m=1}^{J}G_{nm}F_{m}+c_{n}I\;. \tag{104}\] Eq. (102) is obtained by substituting Eq. (82) into Eq. (83). Eq. (103) is obtained by substituting Eq. (102) into Eq. 
(85): \[b_{ij} =\frac{1}{2d}\sum_{k=1}^{d}(\tilde{x}_{ijkk}+\tilde{x}_{kkij}) \tag{105a}\] \[=\frac{1}{2d}\sum_{k=1}^{d}\left[\sum_{n=1}^{J}(\tilde{G}_{n})_{ kj}(F_{n})_{ik}+(\tilde{G}_{n})_{ik}(F_{n})_{kj}\right.\] \[\qquad-\left.\frac{1}{d}\delta_{ij}(\tilde{G}_{n})_{lk}(F_{n})_{ kl}\right]\] (105b) \[=\frac{1}{2d}\Bigg{[}\sum_{m,n=1}^{J}G_{nm}\Big{[}(F_{n}F_{m})_{ ij}+(F_{m}F_{n})_{ij}\] \[\qquad-\left.\mathrm{Tr}(F_{m}F_{n})_{\delta_{ij}}/{d}\right]+ \sum_{n=1}^{J}2c_{n}(F_{n})_{ij}\Bigg{]}\,. \tag{105c}\] Eqs. (5a) and (5b) are obtained by substituting \(\tilde{x}\) from Eq. (102) into Eq. (97b) and Eq. (99) respectively: \[H_{ij} =\frac{1}{2id}\sum_{k=1}^{d}\sum_{n=1}^{J}\left[(\tilde{G}_{n})_{ ik}(F_{n})_{kj}-(\tilde{G}_{n})_{kj}(F_{n})_{ik}\right] \tag{106a}\] \[=\frac{1}{2id}\sum_{n=1}^{J}([\tilde{G}_{n},F_{n}])_{ij}=\frac{1}{2 id}\sum_{m,n=1}^{J}G_{nm}([F_{m},F_{n}])_{ij}, \tag{106b}\] and \[a_{mn} =\sum_{i,j,k,l=1}^{d}\left(F_{m}\right)_{ji}\left(\sum_{n^{\prime }=1}^{J}(\tilde{G}_{n^{\prime}})_{kj}(F_{n^{\prime}})_{il}\right)(F_{n})_{lk} \tag{107a}\] \[=\sum_{n^{\prime}=1}^{J}\mathrm{Tr}(\tilde{G}_{n^{\prime}}F_{m}F _{n^{\prime}}F_{n})\] (107b) \[=\sum_{i=1}^{J}\mathrm{Tr}\left[\left(\sum_{j=1}^{J}G_{ij}F_{j}+ c_{i}I\right)F_{m}F_{i}F_{n}\right]\,. \tag{107c}\] Eqs. (5a) and (5b) of Theorem 1 are now a special case (the first item) of Lemma 2. Once \(a\) is computed from Eq. (5b), one can check if the given ODE generates a Lindblad equation (and not just a Markovian quantum master equation) by testing whether \(a\) is positive semi-definite. What remains is to formulate a complete-positivity condition directly in terms of \(G\) and \(\bar{c}\); we do this next. ### Complete positivity We now complete the proof of Theorem 1. **Lemma 3** (Positive-semidefiniteness condition of Theorem 1).: _A pair \((G,\bar{c})\) generates a (completely positive) Lindblad master equation iff Eq. (6) holds for all traceless \(B\in\mathcal{B}(\mathcal{H})\)._ Proof.: The condition \(a\geq 0\) is equivalent to the condition that for any vector \(b=\{b_{m}\}_{m=1}^{J}\) one has \[\sum_{m,n=1}^{J}b_{m}^{\star}a_{mn}b_{n}\geq 0\;. \tag{108}\] Such vectors are in one-to-one correspondence with \(B\in M_{0}(d,\mathbb{C})\) [i.e., with traceless \(B\in B(\mathcal{H})\)] via expansion in the basis \(\{F_{m}\}\): \[B=\sum_{m=1}^{J}b_{m}F_{m},\qquad b_{m}=\mathrm{Tr}(F_{m}B). \tag{109}\] Substituting Eq. (5b) into the right hand side of Eq. (108) and simplifying it using Eq. (109) we obtain the right-hand side of Eq. (6): \[\sum_{m,n=1}^{J}b_{m}^{*}a_{mn}b_{n}=\] \[\sum_{m,n=1}^{J}b_{m}^{*}\sum_{i=1}^{J}\operatorname{Tr}\left[ \left(\sum_{j=1}^{J}G_{ij}F_{j}+c_{i}I\right)F_{m}F_{i}F_{n}\right]\!b_{n}=\] \[\sum_{i=1}^{J}\operatorname{Tr}\left[\left(\sum_{j=1}^{J}G_{ij}F_ {j}+c_{i}I\right)B^{\dagger}F_{i}B\right]. \tag{110}\] ### Complete positivity and convex geometry There is a fruitful complementary description of our complete positivity results using convex geometry, which we explain in this subsection. For the necessary background in convex geometry, see Appendix C. The last part of Theorem 1, i.e., the just-proven Lemma 3, describes the set of pairs \((G,\bar{c})\) corresponding to positive semidefinite \(a\) (or, equivalently, to Lindblad master equations). We denote this set by \(\mathcal{V}_{6+}\) (a subset of \(\mathcal{V}_{6}\)). 
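As a numerical aside before developing the convex-geometric picture: membership in \(\mathcal{V}_{6+}\) can be probed directly through Eq. (6) by sampling traceless matrices \(B\). The sketch below (our illustration, assuming NumPy and the normalized Pauli basis; random sampling can certify a violation but can never certify positivity) applies this test to the non-CP example of Section V.1.3:

```python
import numpy as np

# Sampling check of the complete-positivity condition Eq. (6): for a candidate
# (G, c), evaluate sum_i Tr[(sum_j G_ij F_j + c_i I) B^dag F_i B] for many
# random traceless B, and flag any negative value.
d = 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [sx / np.sqrt(2), sy / np.sqrt(2), sz / np.sqrt(2)]
J, I = len(F), np.eye(d, dtype=complex)

def eq6_violated(G, c, rng, samples=2000):
    Gtil = [sum(G[i, m] * F[m] for m in range(J)) + c[i] * I for i in range(J)]
    for _ in range(samples):
        B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        B -= np.trace(B) / d * I                      # make B traceless
        val = sum(np.trace(Gtil[i] @ B.conj().T @ F[i] @ B) for i in range(J))
        if val.real < -1e-12:
            return True
    return False

# The non-CP example of Section V.1.3: G_12 = 1, all else zero, c = 0.
G = np.zeros((J, J)); G[0, 1] = 1.0
print(eq6_violated(G, np.zeros(J), np.random.default_rng(3)))  # True
```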
Moreover, we introduce \(\mathcal{V}_{i+}\) (\(i=1,\ldots,6\)) to be images of \(\mathcal{V}_{6+}\) under the maps \(\varphi_{i6}\) described by the commutative diagram Fig. 1. \(\mathcal{V}_{1+}\) consists of pairs \((H,a)\) where \(H\) is an arbitrary Hermitian matrix and \(a\) is positive semidefinite. The set \(M_{*}(d,\mathbb{C})\) of positive semidefinite matrices is a convex cone with compact support (see Lemma 6 in Appendix C). The complete positivity part of Theorem 1 uses the description of the cone \(M_{*}(d,\mathbb{C})\) along with a set of linear inequalities Eq. (108), i.e., the set of supporting hyperplanes. An alternative is to note that \(M_{*}(d,\mathbb{C})\) is a convex hull of its extreme rays (Lemma 8). The extreme rays of \(M_{*}(J,\mathbb{C})\) are the rays generated by rank 1 matrices (Lemma 7), i.e., matrices \(a\) of the form \(bb^{\dagger}\), where \(b\in\mathbb{C}^{J}\). Using the map \(\varphi_{61}\) this provides a description for \(\mathcal{V}_{6+}\) given by the following: **Proposition 13**.: _The cone \(\mathcal{V}_{6+}\) of pairs \((G,\bar{c})\) generating a (completely positive) Lindblad master equation coincides with the convex hull of the following elements:_ * _Elements of the form_ \((Q,\bar{0})\)_, where_ \(Q\) _is given by Eq. (_48_) for_ \(H\in M_{0,\text{aa}}(d,\mathbb{C})\)_;_ * _Elements of the form_ \((R,\bar{c})\)_, where_ \[R_{kl} =\operatorname{Tr}\left[F_{k}\big{(}BF_{l}B^{\dagger}-\frac{1}{2} \left\{B^{\dagger}B,F_{l}\right\}\big{)}\right]\] (111a) \[c_{k} =\frac{1}{d}\operatorname{Tr}\bigl{(}\bigl{[}B,B^{\dagger} \bigr{]}F_{k}\bigr{)}\] (111b) \[B \in M_{0}(d,\mathbb{C})\.\] (111c) Proof.: According to the discussion above, \(\mathcal{V}_{6+}\) is generated by elements of the form \(\varphi_{61}((H,a=0))\) and \(\varphi_{61}((H=0,a=bb^{\dagger}))\). The former are given by Eq. (48). The image of the latter can be computed using Eqs. (57) and (58b), after identifying \(b\in\mathbb{C}^{J}\) with elements \(B\in M_{0}(d,\mathbb{C})\) via Eq. (109): \[R_{kl} =\sum_{i,j=1}^{J}b_{i}b_{j}^{*}\operatorname{Tr}\left[F_{k}\big{(} F_{i}F_{l}F_{j}-\frac{1}{2}\left\{F_{j}F_{i},F_{l}\right\}\big{)}\right]\] \[=\operatorname{Tr}\left[F_{k}\big{(}BF_{l}B^{\dagger}-\frac{1}{2} \left\{B^{\dagger}B,F_{l}\right\}\big{)}\right] \tag{112a}\] \[c_{k} =\frac{1}{d}\sum_{i,j=1}^{J}b_{i}b_{j}^{*}\operatorname{Tr}\bigl{(} \bigl{[}F_{i},F_{j}\bigr{]}F_{k}\bigr{)}=\frac{1}{d}\operatorname{Tr}\bigl{(} \bigl{[}B,B^{\dagger}\bigr{]}F_{k}\bigr{)}. \tag{112b}\] ### Consistency By construction, the forward maps defined in Theorem 2 are consistent with our previously defined maps. More explicitly, this is stated in the following proposition. **Proposition 14**.: _Let \(H^{\prime}\) be a Hermitian operator in \(\mathcal{B}(\mathcal{H})\) and let \(a\) be a Hermitian \(J\times J\) matrix. Let \(Q,R,\bar{c}\) be obtained from \((H^{\prime},a)\) using Eqs. (44b), (45a) and (45b), and \(G=Q+R\). Let \(H=H^{\prime}-\frac{1}{d}\operatorname{Tr}(H^{\prime})I\) be the traceless component of \(H^{\prime}\), and let \(\varphi_{ij}\) be the maps from Theorem 2. Then_ \[(Q,0) =\varphi_{61}((H,0)), \tag{113a}\] \[(R,c) =\varphi_{61}((0,a)),\] (113b) \[(G,c) =\varphi_{61}((H,a)). \tag{113c}\] Proof.: First, note that Eq. (1b) yields the same operator \(\mathcal{L}\) if we change \(H\) by a constant. Hence, we could apply Eq. (45a) to \(H\) instead of \(H^{\prime}\) and get the same \(Q\). 
One of the equivalent ways to describe \(\varphi_{61}\) is \(\varphi_{61}=\varphi_{65}\circ\varphi_{54}\circ\varphi_{41}\). Note that

\[(\varphi_{54}\circ\varphi_{41})((H,0))=\mathcal{L}_{H}+\left.\mathcal{L}_{a}\right|_{a=0}=\mathcal{L}_{H}. \tag{114}\]

Thus, Eq. (113a) follows from comparison of Eq. (45a) with Eq. (81). Similarly,

\[(\varphi_{54}\circ\varphi_{41})((0,a))=\mathcal{L}_{a}, \tag{115}\]

and Eq. (113b) follows from comparison of Eqs. (44b) and (45b) with Eq. (81). Finally, Eq. (113c) is the sum of Eqs. (113a) and (113b).

In the following Proposition, we show that Eqs. (5a) and (71) are consistent, i.e., yield the same result when used to recover \(H\).

**Proposition 15**.: _For any \(J\times J\) matrix \(G\), the r.h.s. of Eq. (5a) and Eq. (71) (when \(Q\) is replaced with \(G\)) are equal to each other._

Proof.: Comparing Eq. (5a) to Eq. (71) one can see that for a given \(G\) the claim is equivalent to

\[\frac{1}{2id}G_{nm}[F_{m},F_{n}]=-\frac{1}{2d}f_{nml}G_{nm}F_{l}\, \tag{116}\]

where we use the Einstein summation notation. Eq. (116) follows directly from the definition of the structure constants Eq. (65).

Next, we show that the inverse maps [not only Eqs. (5a) and (71)] also yield the same result when used to recover \(H\), and in fact one may interchange \(G\) and \(Q\) when using these maps, or subtract any matrix from \(G\) which can be obtained using the formula for \(R\) (even from a different \(\mathcal{L}_{a}\); e.g., any real symmetric matrix).

**Proposition 16**.: _Let \(H,a,Q,R,G,\tilde{c}\) be the same as in Proposition 14. Then all of the following methods recover the same \(H\):_

1. _Applying Eq. (5a) to \(G\) (as is);_
2. _Applying Eq. (5a) to \(Q\) in place of \(G\);_
3. _Computing \(Q^{\prime}\) to be the antisymmetric part of \(G\) and applying Eq. (5a) to \(Q^{\prime}\) in place of \(G\);_
4. _Computing \(Q^{\prime\prime}=G-R^{\prime}\) where \((R^{\prime},\tilde{c}^{\prime})=\varphi_{61}((0,a^{\prime}))\) for any Hermitian \(J\times J\) matrix \(a^{\prime}\) (possibly different from \(a\)) and applying Eq. (5a) to \(Q^{\prime\prime}\) in place of \(G\)._

Proof.: According to Theorem 2, the diagram Fig. 1 is commutative. Since Eq. (5) is the formula for \(\varphi_{16}\) and \(\varphi_{61}((H,a))=(G,c)\), applying Eq. (5a) to \(G\) recovers the original \(H\) (statement #1). Since \(\varphi_{61}((H,0))=(Q,\tilde{0})\), applying Eq. (5a) to \(Q\) also recovers the original \(H\) (statement #2). Since Eq. (5a) contracts \(G_{nm}\) with a commutator, which is antisymmetric with respect to the order of indices, the result of that application is independent of adding any symmetric matrix to \(G\) (statement #3). The final statement follows from \(\varphi_{61}((H,a-a^{\prime}))=(Q^{\prime\prime},c-c^{\prime})\).

### Decomposing \(G\) into \(G=Q+R\), and the space of possible matrices \(R\)

**Proposition 17**.: _The decomposition of \(G\) into \(Q\) and \(R\) can be obtained by the following formulas:_

\[Q_{ij}=-\frac{1}{2d}\sum_{m,n=1}^{J}G_{nm}\operatorname{Tr}\left([F_{i},F_{j}][F_{n},F_{m}]\right) \tag{117a}\]
\[=\frac{1}{2d}\sum_{m,n=1}^{J}G_{nm}f_{knm}f_{kij}\, \tag{117b}\]
\[R_{ij}=G_{ij}-Q_{ij}. \tag{117c}\]

Proof.: Substituting Eq. (5a) into Eq. (48a), we obtain:

\[Q_{ij}=\frac{1}{2d}\sum_{m,n=1}^{J}G_{nm}\operatorname{Tr}\left(F_{i}\left[F_{j},[F_{m},F_{n}]\right]\right), \tag{118}\]

which is equivalent to Eq. (117a). Eq. (117b) is then obtained using the definition of the structure constants in Eq. (65).
Finally, Eq. (117c) follows from \(G=R+Q\).

Note that an alternative way to obtain the formula for \(R\) is to substitute Eq. (5b) into Eq. (58b) (which results in a significantly more complex computation giving the same final result).

Naively, one might be tempted to set \(Q\) to be the antisymmetric part of \(G\), or \(R\) to be the symmetric part of \(G\), instead of using the expressions given in Proposition 17. This, however, only works for \(d\leq 2\). This follows from the dimension counting done in Corollary 2 below: the space of antisymmetric matrices \(R\) has dimension \(J(J-3)/2\), which is non-vanishing unless \(d\in\{1,2\}\) (recall that \(J=d^{2}-1\)). We give an example illustrating this for \(d=3\) in Section V.2. However, as stated in Propositions 15 and 16 above, taking the antisymmetric part of \(G\) would still result in the correct \(H\) being recovered.

**Proposition 18**.: _For any \(R\in M(J,\mathbb{R})\), \(\tilde{c}\in\mathbb{R}^{J}\), the pair \((R,\tilde{c})\) can be obtained from some Hermitian matrix \(a\) if and only if_

\[\sum_{m,n=1}^{J}R_{mn}[F_{n},F_{m}]=0. \tag{119}\]

_In particular, this condition is independent of \(\tilde{c}\), i.e., for any \(\tilde{c}^{\prime}\in\mathbb{R}^{J}\) the pair \((R,\tilde{c})\) can be obtained from some Hermitian matrix \(a\) if and only if \((R,\tilde{c}^{\prime})\) can (from a possibly different Hermitian matrix \(a^{\prime}\))._

Proof.: According to Proposition 14, a pair \((R,\tilde{c})\) can be obtained from some Hermitian matrix \(a\) if and only if \((R,\tilde{c})\in\varphi_{61}(\{(0,a):a\in M_{\text{sa}}(J,\mathbb{C})\})\). According to Theorem 2 this is equivalent to

\[H=0\quad\text{ where }\quad(H,a)=\varphi_{16}((R,\tilde{c})). \tag{120}\]

From Lemma 2, this is equivalent to Eq. (5a) evaluating to 0 when \(R\) is used instead of \(G\), i.e., when Eq. (119) holds.

**Corollary 2**.: _The space of matrices \(R\) which can be obtained from some Hermitian matrix \(a\) is an \(\mathbb{R}\)-linear subspace of \(M(J,\mathbb{R})\) of dimension \(J^{2}-J\) and includes the space of symmetric matrices._

_The space of antisymmetric matrices \(R\) which can be obtained from some Hermitian matrix \(a\) has dimension \(J(J-3)/2\)._

Proof.: As explained in the proof of Proposition 18, the space of possible pairs \((R,c)\) is a bijective image of a \(J^{2}\)-dimensional space. According to Proposition 18 it contains all possible pairs of the form \((0,\tilde{c})\) -- a \(J\)-dimensional subspace -- hence the space of possible values for \(R\) has dimension \(J^{2}-J\). Furthermore, any symmetric matrix satisfies Eq. (119) due to contraction with an antisymmetric commutator there. The final statement follows from the direct calculation:

\[J^{2}-J-J(J+1)/2=J(J-3)/2\, \tag{121}\]

where \(J(J+1)/2\) is the dimension of the space of symmetric matrices.

## V Examples

This section provides several examples to illustrate the general theory developed above. Various supporting calculations can be found in Ref. [19].

### A qubit

Consider a single qubit, i.e., \(\mathcal{H}=\mathbb{C}^{2}\). As the nice operator basis, we choose the normalized Pauli matrices \(\frac{1}{\sqrt{2}}\{I,\sigma^{x},\sigma^{y},\sigma^{z}\}\).

#### V.1.1 A qubit with pure dephasing

Consider the Lindblad equation for pure dephasing: \(\dot{\rho}=\mathcal{L}_{a}(\rho)=\gamma(\sigma^{z}\rho\sigma^{z}-\rho)\), for which

\[a=\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&2\gamma\end{bmatrix}\,. \tag{122}\]
Here \(Q=0\) (since \(H=0\)) and, by Propositions 6, 7 and 10, \(\tilde{c}=0\), since \(a\) is real-symmetric. Using Eq. (58) we find

\[G=R=\begin{bmatrix}-2\gamma&0&0\\ 0&-2\gamma&0\\ 0&0&0\end{bmatrix}\,. \tag{123}\]

The inverse problem retrieves \(H\) from \(G\) and \(a\) from \((G,\tilde{c})\). Using Eqs. (5a) and (5b), we indeed find \(H=0\) and \(a\) as given in Eq. (122).

#### V.1.2 A qubit with amplitude damping

Now consider the Lindblad equation for spontaneous emission:

\[\dot{\rho}=\mathcal{L}_{H}(\rho)+\mathcal{L}_{a}(\rho) \tag{124a}\]
\[\mathcal{L}_{H}(\rho)=-i\omega[\sigma^{z},\rho] \tag{124b}\]
\[\mathcal{L}_{a}(\rho)=\gamma(\sigma^{-}\rho\sigma^{+}-\frac{1}{2}\{\sigma^{+}\sigma^{-},\rho\})\;, \tag{124c}\]

where \(\sigma^{\pm}=\frac{1}{2}(\sigma^{x}\mp i\sigma^{y})\). This yields

\[a=\begin{bmatrix}\gamma&-i\gamma&0\\ i\gamma&\gamma&0\\ 0&0&0\end{bmatrix}\,, \tag{125}\]

i.e., \(a\) is not real-symmetric, and the Lindbladian is non-unital (in particular, \(\tilde{c}\) is non-zero). Then, using Eqs. (48) and (58) we find

\[Q=\begin{bmatrix}0&-2\omega&0\\ 2\omega&0&0\\ 0&0&0\end{bmatrix} \tag{126a}\]
\[R=\begin{bmatrix}-\gamma&0&0\\ 0&-\gamma&0\\ 0&0&-2\gamma\end{bmatrix}\,, \tag{126b}\]

and using Eq. (57)

\[\tilde{c}=(0,0,\sqrt{2}\gamma)^{T}\;. \tag{127}\]

Using Eqs. (5a) and (5b) with \(G=Q+R\) we then indeed find \(H=\omega\sigma^{z}\) and \(a\) as given in Eq. (125).

#### V.1.3 Example of \(G\) giving rise to a non-CP map

Let \(\tilde{c}=\vec{0}\) and

\[G=\begin{bmatrix}0&1&0\\ 0&0&0\\ 0&0&0\end{bmatrix}\,. \tag{128}\]

Using Eq. (5b) this yields:

\[a=\begin{bmatrix}0&\frac{1}{2}&0\\ \frac{1}{2}&0&0\\ 0&0&0\end{bmatrix}\,, \tag{129}\]

whose eigenvalues are \(\{-1/2,1/2,0\}\). I.e., \(a\) is not positive semidefinite, which means that the corresponding Markovian quantum master equation does not generate a CP map. Hence, the pair \((G,\tilde{c})\) with matrix \(G\) given in Eq. (128) and \(\tilde{c}=0\) does not correspond to a Lindblad equation, only to a Markovian quantum master equation.

### A qutrit

As the nice operator basis for this \(d=3\)-dimensional case, we choose the \(8\) normalized Gell-Mann matrices, including the normalized identity matrix \(F_{0}=\frac{1}{\sqrt{3}}I\):

\[\{F_{m}\}_{m=1}^{J}=\left\{\frac{1}{\sqrt{2}}\begin{pmatrix}0&1&0\\ 1&0&0\\ 0&0&0\end{pmatrix},\;\frac{1}{\sqrt{2}}\begin{pmatrix}0&0&1\\ 0&0&0\\ 1&0&0\end{pmatrix},\;\frac{1}{\sqrt{2}}\begin{pmatrix}0&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix},\right.\]
\[\frac{1}{\sqrt{2}}\begin{pmatrix}0&-i&0\\ i&0&0\\ 0&0&0\end{pmatrix},\;\frac{1}{\sqrt{2}}\begin{pmatrix}0&0&-i\\ 0&0&0\\ i&0&0\end{pmatrix},\;\frac{1}{\sqrt{2}}\begin{pmatrix}0&0&0\\ 0&0&-i\\ 0&i&0\end{pmatrix},\]
\[\left.\frac{1}{\sqrt{2}}\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&0\end{pmatrix},\;\frac{1}{\sqrt{6}}\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&-2\end{pmatrix}\right\}. \tag{130}\]

#### V.2.1 A qutrit with amplitude damping

Consider the example of \(a\) given in Eq. (63), i.e., a qutrit undergoing amplitude damping between its two lowest levels (similar to the qubit example given in Section V.1.2). The corresponding \(R\) matrix was given in Eq. (64); assuming \(H=0\) we have \(G=R\). In addition, we find

\[\tilde{c}=(0,0,0,0,0,\tfrac{\sqrt{2}}{3},0,0)\;. \tag{131}\]
Computing \(a\) using Eq. (5b), we find that it is indeed identical to Eq. (63).

#### V.2.2 Illustration of Propositions 16 and 17

Note that for symmetric \(G\), the decomposition is trivial: \(Q=0\), \(R=G\). Therefore, to illustrate Proposition 17, let us choose an arbitrary antisymmetric \(G\); for example

\[G=\left(\begin{array}{cccccccc}0&0&0&0&-\frac{1}{4}&0&0&0\\ 0&0&0&\frac{1}{4}&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&-\frac{1}{4}&0&0&0&0&0&0\\ \frac{1}{4}&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&\frac{1}{4}&-\frac{\sqrt{3}}{4}\\ 0&0&0&0&0&-\frac{1}{4}&0&\frac{\sqrt{3}}{4}\\ 0&0&0&0&0&\frac{\sqrt{3}}{4}&-\frac{\sqrt{3}}{4}&0\end{array}\right). \tag{132}\]

Computing the Hamiltonian \(H\) that arises from this \(G\) using Eq. (71) [or, equivalently, Eq. (5a)] yields:

\[H=\left(\begin{array}{ccc}0&0&0\\ 0&0&\frac{1}{6}\\ 0&\frac{1}{6}&0\end{array}\right)\,. \tag{133}\]

When computing \(Q\) from this Hamiltonian using Eq. (48), we observe that \(Q\) does not equal the antisymmetric part of \(G\), as anticipated by Corollary 2 and the comment after Proposition 17. Instead, we find:

\[Q=\left(\begin{array}{cccccccc}0&0&0&0&\frac{1}{6}&0&0&0\\ 0&0&0&\frac{1}{6}&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&-\frac{1}{6}&0&0&0&0&0&0\\ -\frac{1}{6}&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&\frac{1}{6}&-\frac{1}{2\sqrt{3}}\\ 0&0&0&0&0&-\frac{1}{6}&0&0\\ 0&0&0&0&0&\frac{1}{2\sqrt{3}}&0&0\end{array}\right). \tag{134}\]

As expected, if we then use Eq. (117a) to decompose \(G\) into \(Q\) and \(R\), we get the same \(Q\) as the one in Eq. (134). Also, as guaranteed by Proposition 16, if we use that \(Q\) to compute \(H\) again, we will get the same \(H\) as in Eq. (133).

## VI The rarity of Lindbladians

As posed in the title, the question that motivated this work can be interpreted as asking for the probability that a given pair \((G,\vec{c})\) will give rise to a Lindbladian (given a natural choice of a distribution over the pairs \((G,\vec{c})\)). Alternatively, one can use a natural choice of a distribution for \((H,a)\) in order to induce the distribution on the pairs \((G,\vec{c})\), and compute the probability that a given pair \((G,\vec{c})\) with that distribution gives rise to a Lindbladian. In the remainder of this section, we provide partial answers to these questions and describe the difference between these two distributions.

### Natural distribution on the pairs \((G,\vec{c})\)

Suppose all elements of \(G\) and \(\sqrt{d}\vec{c}\) are picked independently from a standard normal distribution. Such a distribution is known as the Ginibre Orthogonal Ensemble (GinOE) [25]. One might be interested in the following question: "How likely is it that these \(G\) and \(\vec{c}\) generate a Lindbladian, i.e., that \(a\geq 0\)?". Let us denote this probability as \(\tilde{P}_{J}^{\text{GinOE}}\). While we do not know the asymptotics of \(\tilde{P}_{J}^{\text{GinOE}}\), here we provide a non-rigorous attempt at deriving the asymptotics of an upper bound.

Recall that a necessary condition for \(a\geq 0\) is that all eigenvalues of \(G\) have a non-positive real part. Denoting the probability of the latter by \(P_{J}^{\text{GinOE}}\), this implies

\[\tilde{P}_{J}^{\text{GinOE}}\leq P_{J}^{\text{GinOE}}. \tag{135}\]

The distribution of eigenvalues in the GinOE is known: see, e.g., [25, Eq. (1.7)]. One can use either the methods of [26] or some rigorous alternative to those methods to estimate the asymptotics of \(P_{J}^{\text{GinOE}}\), which would be an upper bound for \(\tilde{P}_{J}^{\text{GinOE}}\) due to Eq. (135).
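For small \(J\), \(P_{J}^{\text{GinOE}}\) can also be estimated directly by Monte Carlo sampling; the sketch below (ours, assuming NumPy, and using a batched eigenvalue computation) simply draws GinOE matrices and records how often all eigenvalues have non-positive real part:

```python
import numpy as np

# Monte Carlo estimate of P_J^GinOE: the fraction of real Gaussian (GinOE)
# matrices G whose eigenvalues all have non-positive real part. Feasible only
# for small J, since the probability decays rapidly with J.
rng = np.random.default_rng(42)
for J in (3, 8):                      # d = 2 and d = 3
    trials = 100_000
    G = rng.normal(size=(trials, J, J))
    hits = (np.linalg.eigvals(G).real.max(axis=1) <= 0).mean()
    print(f"J = {J}: P_J^GinOE ~ {hits}")
```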
More specifically, assuming the same (non-rigorous) argument as in Ref. [26] applies to the GinOE, we can state that

\[P_{J}^{\text{GinOE}}=\exp(-\theta^{\text{GinOE}}J^{2}+O(J)). \tag{136}\]

We discuss the estimation of the positive constant \(\theta^{\text{GinOE}}\) in Appendix E. The important point about Eq. (136) is that the probability decays rapidly in the Hilbert space dimension \(d\) (recall that \(J=d^{2}-1\)).

### Natural distribution on \(a\)

In this subsection, we examine the prevalence of Lindbladians within the set of all Liouvillians. Specifically, we attempt to answer the following question: "Given a random \(H\) and \(a\), what is the probability that \(\mathcal{L}_{H,a}\) is a Lindbladian?" Since the condition \(a\geq 0\) that guarantees a Liouvillian is a Lindbladian depends only on \(a\) and not on \(H\), defining the distribution in the space of Hermitian matrices \(a\) is sufficient to answer this question. Additionally, the scale of \(a\) is irrelevant; changing \(a\) to \(\alpha a\) (where \(\alpha>0\) can be randomly selected from a distribution which may depend on \(a\)) does not alter the answer to \(a\geq 0\).

One natural choice for the distribution is the Gaussian Unitary Ensemble (GUE), also known as the \(\beta=2\) Gaussian Ensemble. In this ensemble, the real and imaginary parts of each component of a \(J\times J\) matrix \(A\) are independently selected from a normal distribution with a mean of \(0\) and a variance of \(1/\beta=1/2\). The matrix \(a\) is then computed as \(a=(A+A^{\dagger})/2\). The condition \(a\geq 0\) is equivalent to \(\lambda_{j}\geq 0\) for \(j=1,\ldots,J\), where \(\lambda=\{\lambda_{j}\}_{j=1}^{J}\) is the vector of eigenvalues of \(a\). Let us denote the probability of this event as \(P_{J}^{\text{GUE}}\).

The joint probability density function (PDF) of eigenvalues of a matrix from the GUE is well known to be:

\[p(\lambda)=\frac{1}{G_{\beta,J}}e^{-\frac{\beta}{2}\sum_{j=1}^{J}\lambda_{j}^{2}}\prod_{1\leq j<k\leq J}\lvert\lambda_{k}-\lambda_{j}\rvert^{\beta} \tag{137a}\]
\[=\frac{1}{G_{\beta,J}}e^{-\beta E(\lambda)}\, \tag{137b}\]

where \(E(\lambda)=\frac{1}{2}\sum_{j=1}^{J}\lambda_{j}^{2}-\sum_{1\leq j<k\leq J}\ln\lvert\lambda_{j}-\lambda_{k}\rvert\) and \(G_{\beta,J}\) is the normalization factor (partition function); see [27, Eq. (1.4)] for further details. Thus, the distribution of \(\lambda\) can be interpreted as the canonical ensemble of a two-dimensional gas (in \(\mathbb{C}\)) with \(J\) charged particles (each of unit charge with the same sign), constrained to the real line in a harmonic potential at temperature \(1/\beta=1/2\), also known as a log-gas. The probability \(P_{J}^{\text{GUE}}\) in this interpretation is the probability that, by random chance, all particles happen to be to the right of the origin.

\(P_{J}^{\text{GUE}}\) is known as the probability of atypically large fluctuations of extreme value statistics of the GUE, and was estimated in Ref. [26] as:

\[P_{J}^{\text{GUE}}=3^{-J^{2}/2+O(J)}. \tag{138}\]

The estimate uses non-rigorous techniques from statistical mechanics, including the replacement of particle density (sum of delta functions) with a smooth function and the use of functional integrals and functional Fourier transforms. While rigorously proving this estimate is left to future research, Eq. (138) shows that the probability of a randomly picked Liouvillian being a Lindbladian decays rapidly with \(d\), just like Eq. (136) for the GinOE.
For example, already for \(d=4\), the estimate gives \(P_{J}^{\text{GUE}}\sim 10^{-54}\), demonstrating that it is extremely unlikely to find a Lindbladian for \(d=4\) by picking a random \(a\) from the GUE. ### Comparison between these distributions Given the distribution on the pairs \((G,\vec{c})\) described in Section VI.1, one can derive a distribution for \(a\) or a matrix related to the tensor \(x\) from Theorem 2. \(\bar{P}^{\text{GinOE}}\) is then equal to the probability that \(a\geq 0\) (or, equivalently, \(x\geq 0\)) given that distribution. Due to the linearity of \(\varphi_{16}\), the components of \(a\) also follow a multivariate Gaussian distribution with mean 0. Using Eq. (5b) one can compute the covariances: \[\mathbb{E}(a_{mn}a_{m^{\prime}n^{\prime}})=\delta_{mn^{\prime}}\delta_{nm^{ \prime}}-\frac{1}{d}\operatorname{Tr}(F_{m^{\prime}}F_{n^{\prime}}F_{m}F_{n}). \tag{139}\] This distribution does not match the GUE investigated in Section VI.2: for comparison covariances in the GUE are given by \[\mathbb{E}(a_{mn}a_{m^{\prime}n^{\prime}})=\frac{1}{2}\delta_{mn^{\prime}} \delta_{nm^{\prime}}. \tag{140}\] The normalization difference (\(1\)_vs_\(1/2\)) is due to a difference in conventions and is irrelevant to the question of whether \(a\geq 0\). The presence of the second term in Eq. (139), however, is significant and, for \(d>2\), introduces a dependence of the probability distribution given by Eq. (139) on the choice of the nice operator basis. Nevertheless, the distribution of \(x\) computed from \((H=0,a)\) is independent of both the nice operator basis and the choice of the orthonormal basis in the Hilbert space \(\mathcal{H}\): \[\mathbb{E}(x_{ijkl}x_{i^{\prime}j^{\prime}k^{\prime}l^{\prime}})= \delta_{jk^{\prime}}\delta_{j^{\prime}k}\delta_{li^{\prime}}\delta_{li^{ \prime}i}\] \[+d^{-1}(-\delta_{jk^{\prime}}\delta_{j^{\prime}i^{\prime}}\delta_ {lk}\delta_{l^{\prime}i}-\delta_{jk^{\prime}}\delta_{j^{\prime}k}\delta_{li} \delta_{l^{\prime}i^{\prime}}-\delta_{ji}\delta_{j^{\prime}k}\delta_{li^{ \prime}i}\delta_{l^{\prime}k^{\prime}})\] \[+d^{-2}(\delta_{jk^{\prime}}\delta_{j^{\prime}i}\delta_{lk}\delta _{l^{\prime}i^{\prime}}+\delta_{ji}\delta_{j^{\prime}k}\delta_{lk^{\prime}} \delta_{l^{\prime}i^{\prime}}\] \[+\delta_{jk^{\prime}}\delta_{j^{\prime}i^{\prime}}\delta_{li} \delta_{l^{\prime}k}+\delta_{ji^{\prime}}\delta_{j^{\prime}k}\delta_{li}\delta_{ l^{\prime}k^{\prime}})\] \[+d^{-3}(-\delta_{jk^{\prime}}\delta_{j^{\prime}i^{\prime}}\delta_ {lk}\delta_{l^{\prime}i}-\delta_{ji}\delta_{j^{\prime}k^{\prime}}\delta_{lk} \delta_{l^{\prime}i^{\prime}}-\delta_{ji}\delta_{j^{\prime}i^{\prime}}\delta_{lk ^{\prime}k}\delta_{l^{\prime}k}\] \[-\delta_{jk}\delta_{j^{\prime}i^{\prime}}\delta_{li}\delta_{l^{ \prime}k^{\prime}}-\delta_{ji}\delta_{j^{\prime}k}\delta_{li^{\prime}i^{\prime}} \delta_{l^{\prime}k^{\prime}}-\delta_{ji^{\prime}}\delta_{j^{\prime}i^{\prime}} \delta_{lk}\delta_{l^{\prime}k^{\prime}})\] \[+4d^{-4}\delta_{ji}\delta_{j^{\prime}i^{\prime}}\delta_{lk}\delta_ {l^{\prime}k^{\prime}}+Jd^{-4}\delta_{ji}\delta_{j^{\prime}i^{\prime}}\delta_{lk }\delta_{l^{\prime}k^{\prime}}. \tag{141}\] This \(x\) can be interpreted as a matrix with indices \((ij)\) and \((lk)\): \(x_{(ij)(lk)}=x_{ijkl}\). With this interpretation, such \(x\) is positive semidefinite if and only if \(a\) is. While beyond the scope of this work, one can use Eq. (139) or Eq. (141) to attempt to estimate \(\bar{P}_{J}^{\text{GinOE}}\). 
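Independently of such analytical derivations, \(\bar{P}_{J}^{\text{GinOE}}\) can be estimated for small \(J\) by direct sampling: draw \((G,\vec{c})\) from the GinOE, reconstruct \(a\) via Eq. (5b), and test \(a\geq 0\). The following sketch (ours, assuming NumPy; \(d=2\) with the normalized Pauli basis, and Eq. (5b) evaluated in the explicit form of Eq. (107c)) does exactly this:

```python
import numpy as np
from itertools import product

# Brute-force estimate of P-bar_J^GinOE for a qubit (d = 2, J = 3): sample
# (G, c) from the GinOE, reconstruct a via Eq. (5b)/(107c), test a >= 0.
d = 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [sx / np.sqrt(2), sy / np.sqrt(2), sz / np.sqrt(2)]
J, I = len(F), np.eye(d, dtype=complex)

def a_from_Gc(G, c):
    """a_mn = sum_i Tr[(sum_j G_ij F_j + c_i I) F_m F_i F_n], Eq. (107c)."""
    Gtil = [sum(G[i, m] * F[m] for m in range(J)) + c[i] * I for i in range(J)]
    a = np.zeros((J, J), dtype=complex)
    for m, n in product(range(J), repeat=2):
        a[m, n] = sum(np.trace(Gtil[i] @ F[m] @ F[i] @ F[n]) for i in range(J))
    return (a + a.conj().T) / 2          # symmetrize against round-off

rng = np.random.default_rng(1)
trials, hits = 20_000, 0
for _ in range(trials):
    G = rng.normal(size=(J, J))
    c = rng.normal(size=J) / np.sqrt(d)  # sqrt(d)*c is standard normal
    if np.linalg.eigvalsh(a_from_Gc(G, c)).min() >= -1e-12:  # a >= 0
        hits += 1
print(hits / trials)   # empirical estimate of P-bar_J^GinOE for J = 3
```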
One approach might be to first compute the joint distribution of eigenvalues of \(a\) or of \(x\) similarly to how they are computed for the GUE (see, e.g., Ref. [27]), i.e. by integrating out unitary symmetry. One may attempt to integrate out the unitaries in \(U(\mathcal{H})\) first, and then integrate over the quotient space \(U(B_{0}(\mathcal{H}))/U(\mathcal{H})\), which consists of the unitaries in \(B_{0}(\mathcal{H})\) modulo those in \(U(\mathcal{H})\), where \(U(\mathcal{H})\) denotes the manifold of unitaries acting on a finite-dimensional Hilbert space \(\mathcal{H}\). That second integration appears to be non-trivial because while Eq. (141) is symmetric with respect to \(U(\mathcal{H})\), it is no longer symmetric with respect to \(U(B_{0}(\mathcal{H}))/U(\mathcal{H})\). Once the joint distribution of eigenvalues is computed (or estimated), one can attempt to use it to estimate the probability that all of them are non-negative. ## VII Overlap with prior work After this work was completed, it came to our attention that our main results could be reconstructed by combining several earlier results, in particular using Refs. [5, Section 3.2.2], [28, Section 7.1.2], and [29]. An aspect that is common to these three works and distinguishes them from ours is that they do not represent \(\mathcal{L}\) by its action on the basis elements \(F_{i}\), i.e., they do not have \((G,\vec{c})\). In this sense, their motivation is different from ours, given that our starting point is the title question "which (nonhomogeneous linear first-order) differential equations correspond to the Lindblad equation?", formulated in terms of \((G,\vec{c})\) [Eq. (3)]. Nevertheless, it is possible to use these earlier results to prove Theorem 1: one can show that the map \(\varphi_{64}\) given by Eq. (81) is a bijective linear map with inverse given by Eq. (82). With this established, one can substitute Eq. (82) instead of \(\mathcal{L}\) to apply the above results to obtain Theorem 1. The overlap is described in more detail below. We first summarize it in the following list, mentioning the parts of the Theorem 1 and the overlaps we have found. 1. Bijectivity of the map \(\varphi_{61}\). * Surjectivity follows from [5, Section 3.2.2]. * Injectivity follows from [28, Section 7.1.2]. 2. Explicit formulas [Eq. (5)] for the inverse (\(\varphi_{16}\)). * Follows from [5, Section 3.2.2]. 3. Complete positivity condition Eq. (6) in terms of \(G\) and \(\tilde{c}\). * Follows from [29]. ### Overlap with [5, Section 3.2.2] Item 2 and surjectivity in Item 1 follow from [5, Section 3.2.2]. To see this, one needs to take \(V(t)\) given by \[V(t)\rho=\rho+t\mathcal{L}(\rho)+O(t^{2}) \tag{142}\] with \(\mathcal{L}\) given by Eq. (82), and write it in the form [5, Eq. (3.50)] (see, e.g., Appendix D on how to do that). Then one can follow [5, Eqs. (3.54)-(3.63)] to recover \(a\) and \(H\) and write \(\mathcal{L}\) in the form Eq. (1). After simplification, this would provide explicit formulas Eq. (5) for \(a\) and \(H\) representing that \(\mathcal{L}\). Since the procedure works for any \(\mathcal{L}\) from \(\mathcal{V}_{4}\) (i.e. any \((G,\tilde{c})\)), this shows the surjectivity of \(\varphi_{61}\) in Item 1. ### Overlap with [28, Section 7.1.2] Reference [28, Theorem 7.1] asserts that a linear map is a Lindbladian if and only if it can be represented in any of the four forms described in [28, Eqs. (7.20)-(7.23)]. [28, Eq. (7.23)] matches Eq. 
(1), where [28] uses the matrix \(C=a^{2}/2\), referred to as the "Kossakowski matrix". Reference [28, Proposition 7.4] establishes the uniqueness of the decomposition of the Lindbladian \(\mathcal{L}\) into \(\mathcal{L}_{H}\) and \(\mathcal{L}_{a}\), with \(H\) being unique up to an additive constant. The uniqueness of \(a\) can be deduced from the second part of this proposition.

The complete positivity condition, Eq. (6), can be obtained from [28] in two steps. First, by comparing [28, Eq. (7.20)] with [28, Eq. (7.14)], one may note that the condition we need is called "conditional complete positivity" in [28]. Second, one may transform [28, Proposition 7.2, Eq. (7.15)] into Eq. (6). It should be noted that the expression of the complete positivity condition in Ref. [28] differs from the expression used in our paper, and thus, some work would be required to derive one condition from the other.

### Overlap with [29]

The blog post [29] provides a complete positivity condition in a form more similar to the one used in our work. Ref. [29] defines \(\mathcal{L}^{\text{PT}}\) as follows.

Footnote 6: "PT" stands for "partial transpose".

First, introduce a matrix notation for a superoperator \(S\) acting on \(M_{d}(\mathbb{C})\), where \(S\) is described by a matrix with elements \(S_{(nn^{\prime})(mm^{\prime})}\) (matrix rows and columns indexed by elements of \(\{1,\ldots,d\}^{2}\)) such that:

\[(S(A))_{nn^{\prime}}=\sum_{m,m^{\prime}=1}^{d}S_{(nn^{\prime})(mm^{\prime})}A_{mm^{\prime}}. \tag{143}\]

Then, define \(S^{\text{PT}}\) using

\[S^{\text{PT}}_{(nm)(n^{\prime}m^{\prime})}=S_{(nn^{\prime})(mm^{\prime})}. \tag{144}\]

An alternative way to describe \(S^{\text{PT}}\) is

\[\operatorname{Tr}(AS^{\text{PT}}(B))=\sum_{j,k=1}^{d}\left(AS(|j\rangle\!\langle k|)B\right)_{jk}. \tag{145}\]

With this notation, the complete positivity condition becomes

\[\forall B\in M_{0}(d,\mathbb{C})\quad\operatorname{Tr}(B^{\dagger}\mathcal{L}^{\text{PT}}(B))\geq 0\;. \tag{146}\]

Footnote 7: In the blog post, the condition used a "superprojector" removing the trace. The equivalent alternative used here is to require that \(B\) is traceless.

Using Eq. (145), this can be rewritten as

\[\forall B\in M_{0}(d,\mathbb{C})\quad\sum_{j,k=1}^{d}\left(B^{\dagger}\mathcal{L}(|j\rangle\!\langle k|)B\right)_{jk}\geq 0. \tag{147}\]

This is the same as requiring Eq. (6) to hold for all traceless \(B\in\mathcal{B}(\mathcal{H})\), as in Theorem 1.

## VIII Summary and Outlook

A standard approach to solving a (time-independent) Markovian quantum master equation is to vectorize it and solve the corresponding nonhomogeneous first-order linear ODE (1ODE) for the coherence vector. Here we posed the inverse problem: when does such a 1ODE of the form \(\dot{\tilde{v}}=G\tilde{v}+\tilde{c}\) correspond to a Markovian quantum master equation? When does it correspond to a completely positive Markovian quantum master equation, i.e., a Lindblad equation? What are the parameters, i.e., \(a\) and \(H\), of such master equations in terms of the parameters \(G\) and \(\tilde{c}\) of the 1ODE? Finally, how ubiquitous are Lindbladians?

We have shown that the answer to the first question is "always". We also expressed the parameters of such Markovian quantum master equations using an expansion in a nice operator basis, which yields explicit expressions for the Hamiltonian and the matrix \(a\) of coefficients of the dissipator in terms of the parameters \((G,\bar{c})\) of the 1ODE; see Theorem 1.
In essence, Theorem 1 means that every 1ODE of the form \(\dot{\vec{v}}=G\vec{v}+\vec{c}\) is directly representable as a Markovian quantum master equation, Eq. (1). However, complete positivity (i.e., whether the result is a Lindblad equation) is not guaranteed and must be checked on a case-by-case basis. Toward this end, we have also formulated the complete positivity condition directly in terms of \((G,\vec{c})\); see Eq. (6). This condition is equivalent to the positive semidefiniteness of \(a\), which is simpler to check in practice after \(a\) has been computed from \(G\) and \(\vec{c}\) using Eq. (5b). Our work assumed the setting of finite-dimensional Hilbert spaces. We left open for future research the problem of connecting 1ODEs to quantum master equations in the infinite-dimensional setting. We also left open the complete answer to the question of the ubiquity of Lindbladians: namely, while we have argued that the condition of positivity of \(a\) makes Lindbladians extremely rare when viewed from the perspective of random matrix theory, the question of the probability that \(a\geq 0\) for a random real \(G\) (sampled from the Ginibre Orthogonal Ensemble (GinOE) [25]) conditioned on \(G\) having non-positive eigenvalues is still open. Answering this question will quantify the probability that a randomly selected 1ODE results in a Lindblad master equation. ###### Acknowledgements. We thank Dr. Evgeny Mozgunov for his valuable input and helpful discussions. We also extend our appreciation to Dr. Frederik vom Ende for pointing out that Eq. (14) was proven in [1] and for asking us to provide a simple proof (see Proposition 19), and especially to Dr. Jess Riedel for pointing out the connections to, and overlap with [28; 29], which led us to add Section VII to the final version of this work. This research was sponsored by the Army Research Office and was accomplished under Grant Number W911NF-20-1-0075. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes, notwithstanding any copyright notation herein. ## Appendix A Solution of Eq. (3) for the coherence vector Eq. (3) is a linear, first-order, nonhomogeneous differential equation (1ODE). We provide the solution in two parts, first for diagonalizable and invertible \(G\), then for general \(G\). ### Solution for diagonalizable and invertible \(G\) Let us first assume that \(G\) is diagonalizable over \(\mathbb{R}^{J}\) and also invertible. We look for a solution in the form \[\vec{v}(t)=\vec{v}^{(0)}(t)+\vec{v}^{(\infty)}\, \tag{10}\] where \(\vec{v}^{(0)}(t)\) is the homogeneous part and \(\vec{v}^{(\infty)}\) is the nonhomogeneous (time-independent) part. Let \(\vec{x}^{(k)}\) and \(\lambda_{k}\) represent the eigenvectors and (possibly degenerate and complex) eigenvalues of \(G\), i.e., \[G\vec{x}^{(k)}=\lambda_{k}\vec{x}^{(k)}\,\qquad k=1,\ldots,J. \tag{11}\] It is then straightforward to check by direct differentiation and substitution that \[\vec{v}^{(0)}(t) =\sum_{k=1}^{J}s_{k}e^{\lambda_{k}t}\vec{x}^{(k)} \tag{12a}\] \[\vec{v}^{(\infty)} =-G^{-1}\vec{c} \tag{12b}\] solve Eq. (3). 
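As a concrete illustration of this closed form, here is a minimal NumPy sketch (ours, not from the paper) that assembles the solution from Eqs. (10)-(12), using the coefficient formula Eq. (14) derived immediately below, and cross-checks it against the equivalent matrix-exponential form \(\vec{v}(t)=e^{Gt}(\vec{v}(0)-\vec{v}^{(\infty)})+\vec{v}^{(\infty)}\):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
J = 4

# A random G; the shift by -3I makes it invertible and stable in practice,
# and a generic random matrix is diagonalizable with probability one.
G = rng.standard_normal((J, J)) - 3.0 * np.eye(J)
c = rng.standard_normal(J)
v0 = rng.standard_normal(J)

lam, X = np.linalg.eig(G)              # eigenpairs of G, Eq. (11)
v_inf = -np.linalg.solve(G, c)         # nonhomogeneous part, Eq. (12b)
s = np.linalg.solve(X, v0 - v_inf)     # coefficients, Eq. (14) below

def v(t):
    """v(t) = sum_k s_k e^{lam_k t} x^{(k)} + v_inf, per Eqs. (10) and (12a)."""
    return (X @ (s * np.exp(lam * t))).real + v_inf

# Cross-check against the matrix-exponential form of the same solution.
t = 0.7
v_ref = expm(t * G) @ (v0 - v_inf) + v_inf
print(np.allclose(v(t), v_ref))  # True
```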
The coefficients \(s_{k}\) are determined by the initial condition \(\vec{v}(0)\): \[\vec{v}^{(0)}(0)=\sum_{k=1}^{J}s_{k}\vec{x}^{(k)}=X\vec{s}\,\qquad\text{col}_{k}(X)=\vec{x}^{(k)}\, \tag{13}\] i.e., \(X\) is the matrix whose columns are the eigenvectors of \(G\). Also, \(\vec{v}^{(0)}(0)=\vec{v}(0)-\vec{v}^{(\infty)}\). Thus \[\vec{s}=X^{-1}(\vec{v}(0)+G^{-1}\vec{c}). \tag{14}\] The eigenvalues can be decomposed as \(\lambda_{k}=\mathrm{Re}(\lambda_{k})+i\,\mathrm{Im}(\lambda_{k})\). The imaginary part describes a rotation of \(\vec{v}\). The real part is constrained by complete positivity and trace preservation to be non-positive, or else the norm of the coherence vector would not be bounded [recall Eq. (42)]. ### Solution for general \(G\) The general case is where \(G\) is not diagonalizable over \(\mathbb{R}^{J}\) and may not be invertible. In this case, we can still use a similarity transformation \(S\) to transform \(G\) into Jordan canonical form: \[G_{J}=SGS^{-1}=\left(\begin{array}{ccc}J_{1}&&\\ &\ddots&\\ &&J_{q}\end{array}\right)\, \tag{15}\] where the \(q\) Jordan blocks have the form \[J_{j}=\left(\begin{array}{ccccc}\mu_{j}&1&&&\\ &\mu_{j}&\ddots&&\\ &&\ddots&1&\\ &&&\mu_{j}&1\\ &&&&\mu_{j}\end{array}\right)=\mu_{j}I+K_{j}. \tag{16}\] The \(\mu_{j}\)'s are the (possibly degenerate, complex) eigenvalues, and \(K_{j}\) are nilpotent matrices: \(K_{j}^{d_{j}}=0\), where \(d_{j}\) is the dimension of \(J_{j}\). When all \(d_{j}=1\), \(G\) is diagonalizable, and \(G_{J}\) reduces to the diagonalized form of \(G\). When one or more of the \(d_{j}>1\), then \(G\) is not diagonalizable, meaning that a similarity transformation does not exist that transforms \(G\) into a diagonal matrix. The eigenvalues are the solutions of \(\det\left(G-\mu I\right)=0\). Applying \(S\) from the left to Eq. (3) yields \[S\dot{\vec{v}}=SGS^{-1}S\vec{v}+S\vec{c}\quad\implies\quad\dot{\vec{w}}=G_{J}\vec{w}+\vec{c}^{\prime}\;, \tag{100}\] where \(\vec{w}=S\vec{v}\) and we defined \(\vec{c}^{\prime}=S\vec{c}\). We again look for a solution in the form \[\vec{w}(t)=\vec{w}^{(0)}(t)+\vec{w}^{(\infty)}\;, \tag{101}\] where now \(\vec{w}^{(0)}(t)\) is the homogeneous part and \(\vec{w}^{(\infty)}\) is the nonhomogeneous (time-independent) part. First, let us solve for the homogeneous part. Since the Jordan blocks are decoupled, the general solution of the homogeneous part is \[\vec{w}^{(0)}(t)=\bigoplus_{j=1}^{q}\vec{w}_{j}^{(0)}(t)\;, \tag{102}\] where we use the direct sum notation to denote that the summands must be stacked into a single-column vector. Each of the \(q\) summands satisfies a differential equation of the form \[\dot{\vec{w}}_{j}^{(0)}=J_{j}\vec{w}_{j}^{(0)}\;. \tag{103}\] **Claim 1**.: _The solution for a general \(d_{j}\) dimensional Jordan block is a vector_ \[\vec{w}_{j}^{(0)}=(w_{j,1}^{(0)},\ldots,w_{j,k}^{(0)},\ldots,w_{j,d_{j}}^{(0)})^{T} \tag{104}\] _with components:_ \[w_{j,k}^{(0)}(t) =e^{\mu_{j}t}p_{d_{j}-k}(t) \tag{105a}\] \[p_{d_{j}-k}(t) =\sum_{n=k}^{d_{j}}w_{j,n}^{(0)}(0)\frac{t^{n-k}}{(n-k)!}\;,\quad k=1,\ldots,d_{j}\;. \tag{105b}\] Here \(p_{d_{j}-k}(t)\) denotes a polynomial of degree \(d_{j}-k\). The highest degree of such polynomials is \(d_{j}-1\), of the first component \(w_{j,1}^{(0)}\). Proof.: From the structure of the \(j\)'th Jordan block \(J_{j}\), it is clear that \[\dot{w}_{j,d_{j}}^{(0)} =\mu_{j}w_{j,d_{j}}^{(0)} \tag{106a}\] \[\dot{w}_{j,k}^{(0)} =\mu_{j}w_{j,k}^{(0)}+w_{j,k+1}^{(0)}\;,\qquad k\in\{1,\ldots d_{j}-1\}\;, \tag{106b}\] and if we differentiate Eq. 
(105) we obtain, first for \(k=d_{j}\): \[\dot{w}_{j,d_{j}}^{(0)}=\mu_{j}e^{\mu_{j}t}p_{0}=\mu_{j}w_{j,d_{j}}^{(0)}\;, \tag{107}\] and second, for \(k<d_{j}\): \[\dot{w}_{j,k}^{(0)} =\mu_{j}w_{j,k}^{(0)}+\sum_{n=k+1}^{d_{j}}w_{j,n}^{(0)}(0)\frac{t^{n-k-1}}{(n-k-1)!} \tag{108a}\] \[=\mu_{j}w_{j,k}^{(0)}+w_{j,k+1}^{(0)}\;,\qquad k\in\{1,\ldots d_{j}-1\}\;. \tag{108b}\] Note that we can be certain that for all \(d_{j}>1\), the corresponding \(\operatorname{Re}(\mu_{j})<0\), since a positive or zero real part would violate Eq. (42). Next, let us solve for the nonhomogeneous part. As before, it is determined by the solution form (101) subject to the boundary conditions \[\vec{w}(0) =\vec{w}^{(0)}(0)+\vec{w}^{(\infty)} \tag{109a}\] \[\vec{w}(\infty) =\vec{w}^{(\infty)}\;. \tag{109b}\] On the one hand, \(\dot{\vec{w}}=\dot{\vec{w}}^{(0)}\), and on the other hand \(\dot{\vec{w}}=G_{J}(\vec{w}^{(0)}(t)+\vec{w}^{(\infty)})+\vec{c}^{\prime}=G_{J}\vec{w}^{(0)}(t)+G_{J}\vec{w}^{(\infty)}+\vec{c}^{\prime}\). Since only the homogeneous part is time-dependent, we must have \(\dot{\vec{w}}^{(0)}=G_{J}\vec{w}^{(0)}(t)\), so that the particular solution satisfies \[G_{J}\vec{w}^{(\infty)}=-\vec{c}^{\prime}\;. \tag{110}\] This linear system has a solution for \(\vec{w}^{(\infty)}\) iff \(\operatorname{rank}(G_{J})=\operatorname{rank}(G_{J}|\vec{c}^{\prime})\) (the augmented matrix). The solution is unique only if, in addition, \(G_{J}\) is full rank (equal to \(J\)); otherwise, the solution for \(\vec{w}^{(\infty)}\) is underdetermined. Since \(G_{J}\) is in Jordan form, it is in full-rank row-echelon form unless there are Jordan blocks with \(\mu_{j}=0\). However, as a consequence of the norm upper bound Eq. (42), any such Jordan blocks must have a dimension \(d_{j}=1\), since otherwise the polynomials in Eq. (105) would grow unboundedly. Therefore, the kernel dimension \(\dim(G_{J})-\operatorname{rank}(G_{J})\) must be the number of Jordan blocks with \(\mu_{j}=0\) and \(d_{j}=1\), and in these blocks, Eq. (100) becomes the trivial scalar differential equation \(\dot{w}_{j}=c^{\prime}_{j}\), whose solution is \(w_{j}(t)=c^{\prime}_{j}t+w_{j}(0)\). But this means that \(c^{\prime}_{j}=0\) since otherwise \(w_{j}(t)\) is unbounded. Thus, in each block with \(\mu_{j}=0\) we have \(w_{j}(t)=w_{j}(0)\). Such Jordan blocks represent frozen degrees of freedom that do not evolve in time. ## Appendix B Injectivity of the map \(a\mapsto(R,\vec{c})\) Here we provide an alternative proof of injectivity of the linear map \(\mathcal{F}\colon a\mapsto(R,\vec{c})\) introduced in Section III.6, which repurposes the ideas of Theorem 2 to focus on injectivity alone. But first, we need a Lemma separate from Theorem 2: **Lemma 4**.: _If \(\mathcal{F}\) maps matrices \(a_{1}\) and \(a_{2}\) to the same \((R,\vec{c})\), then \(\mathcal{L}_{a_{1}}=\mathcal{L}_{a_{2}}\) where \(\mathcal{L}_{a}\) is the dissipative Liouvillian of Eq. (1c) acting on arbitrary \(d\times d\) operators \(X\), not just positive \(\rho\) with \(\operatorname{Tr}(\rho)=1\)._ Proof.: Proposition 8 trivially holds for arbitrary \(d\times d\) operators \(X\) and vectors \(\vec{v}\) with complex-valued components computed using Eq. (40), with \(\rho\) replaced by \(X\). It shows that after coordinatization the Markovian quantum master equation is equivalent to a linear ordinary differential equation for such vectors, so the solution to \(\dot{X}=\mathcal{L}_{a}(X)\) is unique up to initial condition for any \(X\in\mathcal{B}(\mathcal{H})\). 
Suppose \(a_{1}\) and \(a_{2}\) both map to the same \(\vec{c}\) and \(R\) by Eqs. (57) and (58). Let \(X_{1}(t)\) be the solution to \(\dot{X}_{1}(t)=\mathcal{L}_{a_{1}}[X_{1}(t)]\) and \(X_{2}(t)\) be the solution to \(\dot{X}_{2}(t)=\mathcal{L}_{a_{2}}[X_{2}(t)]\). Furthermore, let them have the same initial conditions: \(X(0)=X_{1}(0)=X_{2}(0)\). Let \(\vec{v}_{1}(t)\) and \(\vec{v}_{2}(t)\) be the vectors associated with \(X_{1}(t)\) and \(X_{2}(t)\) via Eq. (40). Then we have \[\dot{\vec{v}}_{1}(t) =R\vec{v}_{1}(t)+\vec{c} \tag{12a}\] \[\dot{\vec{v}}_{2}(t) =R\vec{v}_{2}(t)+\vec{c}. \tag{12b}\] By uniqueness of the solution to \(\dot{\vec{v}}=R\vec{v}+\vec{c}\) and the fact that \(\vec{v}_{1}(0)=\vec{v}_{2}(0)\) since \(X_{1}(0)=X_{2}(0)\), we have \(\vec{v}_{1}(t)=\vec{v}_{2}(t)\) for all \(t\), which implies \(X_{1}(t)=X_{2}(t)\) for all \(t\). Thus \(\dot{X}=\mathcal{L}_{a_{1}}(X)\) and \(\dot{X}=\mathcal{L}_{a_{2}}(X)\) have the same unique solution up to initial conditions. For any operator \(X(0)\), let \(X(t)\) be the unique solution to both \(\dot{X}=\mathcal{L}_{a_{1}}(X)\) and \(\dot{X}=\mathcal{L}_{a_{2}}(X)\) with initial condition \(X(0)\). Then \(\dot{X}(t)|_{0}=\mathcal{L}_{a_{1}}[X(0)]=\mathcal{L}_{a_{2}}[X(0)]\) for arbitrary \(X(0)\). Thus \(\mathcal{L}_{a_{1}}=\mathcal{L}_{a_{2}}\). **Lemma 5**.: \(\mathcal{F}\) _is injective (one-to-one)._ Proof.: To prove injectivity we need to show that \(\mathcal{F}(a_{1})=\mathcal{F}(a_{2})\implies a_{1}=a_{2}\), but since \(\mathcal{F}\) is linear it suffices to show that \(\mathcal{F}(a)=(0,\vec{0})\implies a=0\). Let us take a matrix \(a\) such that \(\mathcal{F}(a)=(0,\vec{0})\). From Lemma 4: \[\mathcal{L}_{a}=\mathcal{L}_{0}=0, \tag{13}\] where \(\mathcal{L}_{a}\) is the dissipative Liouvillian in Eq. (1). Let \(\ket{j}\) be the vector with the \(j\)-th component equal to \(1\) and all other components equal to \(0\). Then, from Eq. (13), \(\forall j,k=1,\ldots,d\) we have \(\mathcal{L}_{a}(\ket{j}\!\bra{k})=0\). In particular, using Eq. (1c): \[0 =\sum_{k=1}^{d}\bra{i}\mathcal{L}_{a}(\ket{j}\!\bra{k})\ket{k} \tag{14a}\] \[=\sum_{m,n=1}^{J}a_{mn}\Big{[}\bra{i}F_{m}\ket{j}\sum_{k=1}^{d}\bra{k}F_{n}\ket{k}-\frac{1}{2}\bra{i}F_{n}F_{m}\ket{j}\sum_{k=1}^{d}\langle k|k\rangle-\frac{1}{2}\langle i|j\rangle\sum_{k=1}^{d}\bra{k}F_{n}F_{m}\ket{k}\Big{]}\] (14b) \[=-\frac{1}{2}\delta_{ij}\operatorname{Tr}(a)-\frac{d}{2}\sum_{m,n=1}^{J}a_{mn}\bra{i}F_{n}F_{m}\ket{j}\, \tag{14c}\] where we used \(\operatorname{Tr}(F_{n})=0\) and \(\operatorname{Tr}(F_{n}F_{m})=\delta_{nm}\) (Definition 1). For the same reason, \(\sum_{i=1}^{d}\bra{i}F_{n}F_{m}\ket{i}=\operatorname{Tr}(F_{n}F_{m})=\delta_{nm}\), so that after summing Eq. (14c) over \(i=j=1,\ldots,d\) we obtain \[0 =-\frac{1}{2}\sum_{i=1}^{d}\operatorname{Tr}(a)-\frac{d}{2}\sum_{m,n=1}^{J}a_{mn}\delta_{nm}=-d\operatorname{Tr}(a)\, \tag{15}\] i.e., \(\operatorname{Tr}(a)=0\). From Eq. (14c) it follows that \(\bra{i}A\ket{j}=0\)\(\forall i,j\), where \(A\equiv\sum_{m,n=1}^{J}a_{mn}F_{n}F_{m}\), i.e., \(A=0\). Hence, we have shown that \(\forall X\): \[\mathcal{L}_{a}(X)=\sum_{m,n=1}^{J}a_{mn}F_{m}XF_{n}. 
\tag{16}\] We can now express the matrix elements \(a_{mn}\) in terms of \(\mathcal{L}_{a}\), as follows: \[a_{mn} =\sum_{m^{\prime},n^{\prime}=1}^{J}a_{m^{\prime}n^{\prime}}\operatorname{Tr}(F_{m}F_{m^{\prime}})\operatorname{Tr}(F_{n^{\prime}}F_{n}) \tag{17a}\] \[=\sum_{j,k=1}^{d}\bra{j}F_{m}\big{[}\sum_{m^{\prime},n^{\prime}=1}^{J}a_{m^{\prime}n^{\prime}}F_{m^{\prime}}\ket{j}\!\bra{k}F_{n^{\prime}}\big{]}F_{n}\ket{k}\] (17b) \[=\sum_{j,k=1}^{d}\bra{j}F_{m}\mathcal{L}_{a}(\ket{j}\!\bra{k})F_{n}\ket{k}=0\, \tag{17c}\] since \(\mathcal{L}_{a}=0\). That is, \(a=0\). ## Appendix C Facts from convex geometry In this section, we summarize the facts from convex geometry needed for this work. Let \(M_{+}(J,\mathbb{C})\) be the set of positive semidefinite \(J\times J\) matrices with complex coefficients: \[M_{+}(J,\mathbb{C})=\left\{a\in M_{\text{sa}}(J,\mathbb{C}):\forall b\in\mathbb{C}^{J}\ b^{\dagger}ab\geq 0\right\}. \tag{18}\] This equation describes \(M_{+}(J,\mathbb{C})\) as the intersection of infinitely many half-spaces in \(M_{\text{sa}}(J,\mathbb{C})\), whose boundaries all pass through the origin \(a=0\). Therefore, \(M_{+}(J,\mathbb{C})\) is a closed convex cone in \(M_{\text{sa}}(J,\mathbb{C})\). Let \[M_{+,1}(J,\mathbb{C})=\{a\in M_{+}(J,\mathbb{C})\text{: }\operatorname{Tr}(a)=1\}, \tag{19}\] i.e., the set of unit-trace, positive-semidefinite matrices. **Lemma 6**.: \(M_{+}(J,\mathbb{C})\) _is a closed convex cone with a compact base, which can be chosen to be \(M_{+,1}(J,\mathbb{C})\)._ Proof.: As noted above, \(M_{+}(J,\mathbb{C})\) is a closed convex cone. By definition, \(M_{+,1}(J,\mathbb{C})\) is an intersection of that cone and a hyperplane not passing through the origin. Thus, it remains to prove that 1. any of the rays in \(M_{+}(J,\mathbb{C})\) passes through \(M_{+,1}(J,\mathbb{C})\), i.e., every non-zero element of \(M_{+}(J,\mathbb{C})\) is proportional to an element of \(M_{+,1}(J,\mathbb{C})\) with a positive coefficient; 2. \(M_{+,1}(J,\mathbb{C})\) is compact. To prove the first statement, let \(a\neq 0,a\in M_{+}(J,\mathbb{C})\). Since \(a\) is Hermitian it can be diagonalized. Eq. (18) implies that all its eigenvalues are non-negative, hence \(\operatorname{Tr}(a)\geq 0\). In fact, at least one of the eigenvalues has to be positive, otherwise \(a=0\), hence \(\operatorname{Tr}(a)>0\). Therefore \(a=\operatorname{Tr}(a)(a/\operatorname{Tr}(a))\) where the coefficient \(\operatorname{Tr}(a)>0\) and the element \(a/\operatorname{Tr}(a)\in M_{+,1}(J,\mathbb{C})\). Since \(M_{+,1}(J,\mathbb{C})\) is an intersection of closed sets, it is closed. Thus, it remains to prove it is bounded, e.g.: \[\forall a\in M_{+,1}(J,\mathbb{C})\quad\sum_{i,j=1}^{J}|a_{ij}|^{2}\leq 1. \tag{103}\] Let \(\{\lambda_{i}\}_{i=1}^{J}\) be the eigenvalues of such an \(a\). Then \(\sum_{i=1}^{J}\lambda_{i}=1\) and \(0\leq\lambda_{i}\leq 1\). Therefore, indeed, \[\sum_{i,j=1}^{J}|a_{ij}|^{2}=\operatorname{Tr}(a^{\dagger}a)=\sum_{i=1}^{J}\lambda_{i}^{2}\leq\sum_{i=1}^{J}\lambda_{i}=1. \tag{104}\] **Lemma 7**.: _The extreme rays of \(M_{+}(J,\mathbb{C})\) are the rays generated by rank \(1\) matrices._ Note: this is [30, Exercise 7.5.15]. Here we provide a proof for completeness. Proof.: Let \(\{\alpha a:\alpha\in\mathbb{R}_{+}\}\) be an extreme ray in \(M_{+}(J,\mathbb{C})\). Here \(a\in M_{+}(J,\mathbb{C})\setminus\{0\}\). 
Since \(a\) is Hermitian, we can diagonalize it by a unitary, hence write it in the form \[a=\sum_{i=1}^{k}b_{i}b_{i}^{\dagger} \tag{105}\] for some \(b_{i}\in\mathbb{C}^{J}\) orthogonal to each other, where \(k=\operatorname{rank}(a)\). Since \(a\neq 0\), \(k\geq 1\). If \(k\geq 2\) then Eq. (105) represents \(a\) as a sum of non-proportional elements of \(M_{+}(J,\mathbb{C})\), contradicting the assumption that \(a\) lies on an extreme ray in \(M_{+}(J,\mathbb{C})\). Thus, \(k=1\). Vice-versa, if \(\operatorname{rank}(a)=1\), \(a\in M_{+}(J,\mathbb{C})\), we want to prove that \(a\) lies on an extreme ray of \(M_{+}(J,\mathbb{C})\). Assume to the contrary that \[a=a_{1}+a_{2} \tag{106}\] where \(a_{1},a_{2}\in M_{+}(J,\mathbb{C})\) and \(a_{1}\) is not proportional to \(a\). We can write \(a=\alpha bb^{\dagger}\) for some \(\alpha\in\mathbb{R}_{+}\), \(b\in\mathbb{C}^{J}\) with \(\left|b\right|=1\), \(\alpha>0\). Then for \(i=1,2\), we can decompose \[a_{i}/\alpha=\beta_{i}bb^{\dagger}+\tilde{a}_{i} \tag{107}\] where \(\beta_{i}=b^{\dagger}a_{i}b/\alpha\), \(b^{\dagger}\tilde{a}_{i}b=0\). Eq. (106) implies that \(\beta_{1}+\beta_{2}=1\), \(\tilde{a}_{1}+\tilde{a}_{2}=0\). By our assumption \(\tilde{a}_{1}\neq 0\) (otherwise \(a_{1}\) would be proportional to \(a\)). Therefore, there is a vector \(c\) s.t. \(c^{\dagger}\tilde{a}_{1}c\neq 0\). Let \(c=\gamma b+c_{\perp}\), where \(b^{\dagger}c_{\perp}=0\). Note that \(c_{\perp}^{\dagger}\tilde{a}_{1}c_{\perp}=c_{\perp}^{\dagger}a_{1}c_{\perp}/\alpha\), hence \(c_{\perp}^{\dagger}\tilde{a}_{1}c_{\perp}\geq 0\). Similarly, \(c_{\perp}^{\dagger}\tilde{a}_{1}c_{\perp}=-c_{\perp}^{\dagger}a_{2}c_{\perp}/\alpha\), hence \(c_{\perp}^{\dagger}\tilde{a}_{1}c_{\perp}\leq 0\). Thus, \(c_{\perp}^{\dagger}\tilde{a}_{1}c_{\perp}=0\). For any \(\delta\in\mathbb{C}\) we have \[\big{(}b+\delta c_{\perp}\big{)}^{\dagger}a_{1}(b+\delta c_{\perp})\geq 0\, \tag{108}\] and we have just shown that the quadratic term in this expression is zero. Since the inequality Eq. (108) has to hold for all \(\delta\), the linear term is zero too, i.e., \(b^{\dagger}a_{1}c_{\perp}=0\). Thus, \(c^{\dagger}\tilde{a}_{1}c=0\), contradicting the assumption. **Lemma 8**.: \(M_{+}(J,\mathbb{C})\) _is the convex hull of its extreme rays._ Here we present two alternative ways to see this. Proof 1.: Any positive semidefinite matrix can be diagonalized. Such diagonalization results [30, Theorem 7.5.2] in a decomposition of the matrix into rank-\(1\) positive semidefinite matrices [as in Eq. (105)], which - according to Lemma 7 - lie on the extreme rays of \(M_{+}(J,\mathbb{C})\). Proof 2.: In the context of finite dimensional vector spaces, Krein-Milman (also known as Minkowski's theorem, see, e.g., [31, Corollary 18.5.1]) states that any compact convex set is the convex hull of its extreme points. As a corollary, any convex cone with a compact base is the convex hull of its extreme rays. As follows from Lemma 6, \(M_{+}(J,\mathbb{C})\) is such a cone. ## Appendix D Representation of a superoperator using a nice operator basis **Proposition 19**.: _Any superoperator \(\mathcal{E}\in\mathcal{B}[\mathcal{B}(\mathcal{H})]\) can always be represented as_ \[\mathcal{E}(A)=\sum_{i,j=0}^{J}c_{ij}F_{i}AF_{j}. \tag{109}\] This is [1, Lemma 2.2]. It can also be seen directly by noting that any linear operator can be represented by a matrix and, thus, \(\mathcal{E}(A)_{kn}=\sum_{l,m=1}^{d}\mathcal{E}_{klmn}A_{lm}\) for some tensor \(\mathcal{E}_{klmn}\). 
The tensor \(\mathcal{E}_{klmn}\) can be seen as a matrix in two ways: first, as a function of indices \(k\) and \(l\), and second, as a function of indices \(m\) and \(n\). Applying "coordinatization" [Eqs. (8) and (9)] twice to \(\mathcal{E}_{klmn}\) we get \(c_{ij}\). Proof.: Let \(\mathcal{E}\) be any superoperator in \(\mathcal{B}[\mathcal{B}(\mathcal{H})]\). Denoting \(\mathcal{E}_{klmn}=\mathcal{E}(\ket{l}\!\bra{m})_{kn}\) and applying Eq. (11) to \(\delta_{kk^{\prime}}\delta_{ll^{\prime}}\) and \(\delta_{mm^{\prime}}\delta_{nn^{\prime}}\) for any \(A\in\mathcal{B}(\mathcal{H})\) we get \[\mathcal{E}(A)_{kn}=\sum_{l,m=1}^{d}\mathcal{E}_{klmn}A_{lm} \tag{110a}\] \[=\sum_{k^{\prime},l^{\prime},m^{\prime},n^{\prime},l,m=1}^{d}\delta_{kk^{\prime}}\delta_{ll^{\prime}}\delta_{mm^{\prime}}\delta_{nn^{\prime}}\mathcal{E}_{k^{\prime}l^{\prime}m^{\prime}n^{\prime}}A_{lm}\] (110b) \[=\sum_{k^{\prime},l^{\prime},m^{\prime},n^{\prime},l,m=1}^{d}\sum_{i,j=0}^{J}F_{i\,kl}F_{i\,l^{\prime}k^{\prime}}F_{j\,mn}F_{j\,n^{\prime}m^{\prime}}\mathcal{E}_{k^{\prime}l^{\prime}m^{\prime}n^{\prime}}A_{lm}\] (110c) \[=\sum_{i,j=0}^{J}c_{ij}(F_{i}AF_{j})_{kn}, \tag{110d}\] where \[c_{ij}=\sum_{k^{\prime},l^{\prime},m^{\prime},n^{\prime}=1}^{d}F_{i\,l^{\prime}k^{\prime}}F_{j\,n^{\prime}m^{\prime}}\mathcal{E}_{k^{\prime}l^{\prime}m^{\prime}n^{\prime}}. \tag{111}\] ## Appendix E Computation of the constant \(\theta^{\text{GinOE}}\) The constant \(\theta^{\text{GinOE}}\) in Eq. (136) can be computed as \[\theta^{\text{GinOE}}=\inf_{\rho\in\mathcal{S}_{+}}E(\rho)-\inf_{\rho\in\mathcal{S}}E(\rho), \tag{147}\] where \[E(\rho) =\frac{1}{2}\int_{\mathbb{C}}\left(\left|z\right|^{2}-\int_{\mathbb{C}}\ln|z-w|\,\rho(w)d^{2}w\right)\rho(z)d^{2}z, \tag{148a}\] \[\mathcal{S} =\left\{\rho\in C_{0}(\mathbb{C}\to\mathbb{R}_{+}):\int_{\mathbb{C}}\rho(z)d^{2}z=1,\right.\] \[\qquad\left.\forall\,z\;\rho(z)=\rho(z^{*})\right\},\] (148b) \[\mathcal{S}_{+} =\left\{\rho\in\mathcal{S}:\forall\,z\;\operatorname{Re}(z)<0\Rightarrow\rho(z)=0\right\},\] (148c) \[d^{2}z =d\operatorname{Re}(z)d\operatorname{Im}(z)=\frac{i}{2}dzdz^{*}, \tag{148d}\] where \(C_{0}(X\to Y)\) is the set of continuous functions from \(X\) to \(Y\) with compact support. One can then compute that \(\inf_{\rho\in\mathcal{S}}E(\rho)=5/8-\ln(2)/4\), achieved near \(\rho(z)=\frac{1}{2\pi}\mathbb{I}\!\left(\left|z\right|^{2}<2\right)\), where \(\mathbb{I}\) is the indicator function. One can then try to argue that \(\inf_{\rho\in\mathcal{S}_{+}}E(\rho)\) is achieved for \(\rho\) of the following form: \[\rho(x+iy)=g(y)\delta(x)+\frac{1}{2\pi}\mathbb{I}(0\leq x\leq f(y)). \tag{149}\] Note that while the function Eq. (149) is not continuous, it can be approximated by a continuous function in \(\mathcal{S}_{+}\) as soon as \(f\) and \(g\) are non-negative continuous functions with compact support and \[\int_{\mathbb{R}}\left(g(y)+\frac{f(y)}{2\pi}\right)dy=1. 
\tag{150}\] For such \(\rho\) we have \[E(\rho) =\int_{\mathbb{R}}dy_{1}\!\left(\frac{f(y_{1})^{3}}{12\pi}+\frac{y_{1}^{2}f(y_{1})}{4\pi}+g(y_{1})\frac{y_{1}^{2}}{2}-\right.\] \[\qquad\left.\int_{\mathbb{R}}dy_{2}\!\left(g(y_{1})g(y_{2})\ln\left|y_{1}-y_{2}\right|/2+\right.\right.\] \[\qquad\left.\left.\frac{1}{2\pi}g(y_{2})I_{1}(f(y_{1}),\left|y_{2}-y_{1}\right|)+\right.\right.\] \[\qquad\left.\left.\frac{1}{8\pi^{2}}I_{2}(f(y_{1}),f(y_{2}),y_{2}-y_{1})\right)\!\right)\!, \tag{151}\] where \[I_{1}(x,y) =\int_{0}^{x}dx_{1}\ln\left|x_{1}+iy\right|, \tag{152a}\] \[I_{2}(x_{1},x_{2},y) =\int_{0}^{x_{1}}dx_{3}\int_{0}^{x_{2}}dx_{4}\ln\left|x_{3}-x_{4}+iy\right|. \tag{152b}\] One can then solve this optimization problem numerically (or attempt to find an analytical solution) to find \(\theta^{\text{GinOE}}\).
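As a quick sanity check of the unconstrained value \(\inf_{\rho\in\mathcal{S}}E(\rho)=5/8-\ln(2)/4\approx 0.4517\), one can estimate \(E(\rho)\) by Monte Carlo for the circular-law density \(\rho(z)=\frac{1}{2\pi}\mathbb{I}(|z|^{2}<2)\), since for a probability density \(E(\rho)=\frac{1}{2}\mathbb{E}|z|^{2}-\frac{1}{2}\mathbb{E}\ln|z-w|\) with \(z,w\) independent draws from \(\rho\). A minimal sketch (ours; it does not attempt the constrained optimization over \(\mathcal{S}_{+}\)):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def disk_samples(n, R=np.sqrt(2.0)):
    """Uniform samples from the disk |z| < R, the support of the circular-law density."""
    r = R * np.sqrt(rng.random(n))
    theta = 2 * np.pi * rng.random(n)
    return r * np.exp(1j * theta)

z, w = disk_samples(N), disk_samples(N)
E_mc = 0.5 * np.mean(np.abs(z) ** 2) - 0.5 * np.mean(np.log(np.abs(z - w)))
print(E_mc, 5 / 8 - np.log(2) / 4)  # both approximately 0.4517
```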
2306.12593
Neighborhood Variants of the KKM Lemma, Lebesgue Covering Theorem, and Sperner's Lemma on the Cube
We establish a "neighborhood" variant of the cubical KKM lemma and the Lebesgue covering theorem and deduce a discretized version which is a "neighborhood" variant of Sperner's lemma on the cube. The main result is the following: for any coloring of the unit $d$-cube $[0,1]^d$ in which points on opposite faces must be given different colors, and for any $\varepsilon>0$, there is an $\ell_\infty$ $\varepsilon$-ball which contains points of at least $(1+\frac{\varepsilon}{1+\varepsilon})^d$ different colors (so in particular, at least $(1+\frac{2}{3}\varepsilon)^d$ different colors for all sensible $\varepsilon\in(0,\frac12]$).
Jason Vander Woude, Peter Dixon, A. Pavan, Jamie Radcliffe, N. V. Vinodchandran
2023-06-21T22:26:55Z
http://arxiv.org/abs/2306.12593v1
# Neighborhood Variants of the KKM Lemma, Lebesgue Covering Theorem, and Sperner's Lemma on the Cube ###### Abstract We establish a "neighborhood" variant of the cubical KKM lemma and the Lebesgue covering theorem and deduce a discretized version which is a "neighborhood" variant of Sperner's lemma on the cube. The main result is the following: for any coloring of the unit \(d\)-cube \([0,1]^{d}\) in which points on opposite faces must be given different colors, and for any \(\varepsilon>0\), there is an \(\ell_{\infty}\)\(\varepsilon\)-ball which contains points of at least \((1+\frac{\varepsilon}{1+\varepsilon})^{d}\) different colors (so in particular, at least \((1+\frac{2}{3}\varepsilon)^{d}\) different colors for all sensible \(\varepsilon\in(0,\frac{1}{2}]\)). ###### Contents * 1 Introduction * 1.1 The Lebesgue Covering Theorem and the KKM Lemma on the Cube * 1.2 Sperner's Lemma on the Cube * 1.3 Outline * 1.4 Notation * 2 Proof of the Neighborhood KKM-Lebesgue Theorem * 3 Proof of the Neighborhood Sperner Lemma * 4 Discussion * A Skipped Proofs * A.1 Equivalence of the KKM Lemma, Lebesgue Covering Theorem, and Color Variant * A.2 Background for the Neighborhood KKM-Lebesgue Theorem * B Proof of the Folklore Continuous Multi-Pigeonhole Principle _Note._ The main result of this paper (Theorem 1.3 (Neighborhood KKM-Lebesgue Theorem)) along with most contents of the appendices originally appeared in a theoretical computer science paper of ours currently in the computer science category of ArXiv [10]. The purpose of the present article is to highlight this result on its own and make it more visible to the mathematical community. It is reproduced here along with a detailed motivation, additional discussion, and a discretized version of the result that does not appear in [10]. We note, however, that [10] contains additional results including an analogous result for \(\mathbb{R}^{d}\) (as compared to \([0,1]^{d}\) here) which will not be discussed in this paper; in that context a slightly better bound of \((1+2\varepsilon)^{d}\) (as compared to \((1+\frac{2}{3}\varepsilon)^{d}\) here) can be achieved. Furthermore, [10] includes results for _every norm_ on \(\mathbb{R}^{d}\) (not just \(\ell_{\infty}\)), but this is less natural on \([0,1]^{d}\), so in this paper we focus only on the \(\ell_{\infty}\) norm. ## 1 Introduction The Lebesgue covering theorem (see [6] or [3, Theorem IV 2]), Sperner's lemma on the cube (see [1]), and the KKM lemma on the cube (see [5, 11, 4, 8]) are all known to be naturally equivalent in that any of the three results can be used to fairly directly prove any of the others. Informally, they guarantee that in any well-behaved coloring/covering of the \(d\)-dimensional cube, there exists a point in the closure of at least \(d+1\) colors/sets. We prove the following "neighborhood" variant(s) of these results by considering an open ball instead of a point: _for any well-behaved coloring/covering of the \(d\)-dimensional cube and any \(\varepsilon>0\), there exists a placement of the \(\ell_{\infty}\)\(\varepsilon\)-ball (i.e. a cube of side length \(2\varepsilon\)) which intersects at least \((1+\frac{\varepsilon}{1+\varepsilon})^{d}\) colors/sets._ Thus, while the traditional results give a linear bound \((d+1)\) on the number of colors/sets, for any fixed \(\varepsilon\), our neighborhood variant gives an exponential (in \(d\)) bound on the number of colors/sets. 
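To give a feel for the gap between the linear and exponential guarantees, here is a two-line numerical comparison (illustrative only; the numbers follow directly from the formulas above):

```python
import math

d, eps = 100, 0.25
print(d + 1)                                  # classical guarantee: 101 colors
print(math.ceil((1 + eps / (1 + eps)) ** d))  # neighborhood guarantee: about 8.3e7 colors
```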
We first state a theorem which is naturally equivalent\({}^{1}\) to both the cubical KKM lemma and the Lebesgue covering theorem but stated in terms of colorings to be more convenient to work with (Theorem 1.2). Then we formally state our main result (Theorem 1.3) which is a neighborhood variant of Theorem 1.2. As direct corollaries of Theorem 1.3, we obtain neighborhood variants of both the cubical KKM lemma and the Lebesgue covering theorem. We also demonstrate that under an appropriate perspective, our main result implies a neighborhood variant of Sperner's lemma on the cube. Footnote 1: The equivalence is discussed in Subsection 1.1 and proved in Subsection A.1. Below, we consider \([0,1]^{d}\) to be the standard unit \(d\)-cube and \(V=\{0,1\}^{d}\) to be its set of vertices. A face \(F\) of the cube \([0,1]^{d}\) is a product set \(F=\prod_{i=1}^{d}F_{i}\) where each \(F_{i}\) is one of three sets: \(\{0\}\), \(\{1\}\), or \([0,1]\). For a set \(X\subseteq\mathbb{R}^{d}\) and coordinate \(i\in[d]\), we will use the projection notation \(\pi_{i}(X)=\{x_{i}:\vec{x}\in X\}\). Two faces \(F,F^{\prime}\) are said to be opposite each other if there is some coordinate \(i_{0}\in[d]\) such that \(F_{i_{0}}=\{0\}\) and \(F^{\prime}_{i_{0}}=\{1\}\) (or vice versa). We will frequently say that a pair of points \(\vec{x},\vec{x}\,^{\prime}\in[0,1]^{d}\) belong to opposite faces if there is an opposite pair of faces \(F,F^{\prime}\) with \(\vec{x}\in F\) and \(\vec{x}\,^{\prime}\in F^{\prime}\). Equivalently, a pair of points \(\vec{x},\vec{x}\,^{\prime}\in[0,1]^{d}\) belong to opposite faces if there is some coordinate \(i_{0}\in[d]\) such that \(x_{i_{0}}=0\) and \(x^{\prime}_{i_{0}}=1\) (or vice versa). A final equivalent characterization is that a pair of points \(\vec{x},\vec{x}\,^{\prime}\in[0,1]^{d}\) belong to opposite faces if \(\|\vec{x}-\vec{x}\,^{\prime}\|_{\infty}=1\). We will say that a set \(S\) does not contain points of opposite faces if for every pair of opposite faces \(F,F^{\prime}\) at least one of \(S\cap F\) and \(S\cap F^{\prime}\) is empty. Equivalently, \(S\) does not contain points of opposite faces if no pair of points from \(S\) are \(\ell_{\infty}\) distance \(1\) apart. **Definition 1.1** (Sperner-Lebesgue-Knaster-Kuratowski-Mazurkiewicz Coloring).: _For a set \(\Lambda\subseteq[0,1]^{d}\) (possibly \(\Lambda=[0,1]^{d}\)), an SLKKM coloring of \(\Lambda\) is a function \(\chi:\Lambda\to C\) for some set \(C\) such that \(\chi\) does not map points on opposite faces to the same color. If \(|C|<\infty\), we call \(\chi\) a finite SLKKM coloring._ **Theorem 1.2** (KKM-Lebesgue Theorem).: _Given a finite SLKKM coloring \(\chi\) of \([0,1]^{d}\), there exists a point \(\vec{p}\in[0,1]^{d}\) belonging to the closure of at least \(d+1\) color sets (i.e. \(\big{|}\{c\in C:\vec{p}\in\overline{\chi^{-1}(c)}\}\big{|}\geq d+1\))._ The main result of this paper is that if we are interested in a small cubical region (an open \(\ell_{\infty}\)\(\varepsilon\)-ball), rather than a single point as in the KKM-Lebesgue Theorem, then for any SLKKM coloring we can find a point \(\vec{p}\) where the open \(\ell_{\infty}\)\(\varepsilon\)-ball at \(\vec{p}\) intersects a significant number of colors. The focus on an open ball instead of a point is why we consider this to be a "neighborhood" variant of the cubical KKM lemma and the Lebesgue covering theorem. 
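For concreteness, the following minimal Python sketch (ours) implements the simplest SLKKM coloring with \(2^{d}\) colors, which assigns each orthant of the cube its own color (this coloring reappears in the discussion after Theorem 1.3), and checks at randomly sampled pairs of points forced onto opposite faces that the SLKKM property of Definition 1.1 holds:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def orthant_color(x):
    """Assign each point of [0,1]^d the color of its orthant: 2^d colors in total."""
    return tuple(int(xi > 0.5) for xi in x)

def on_opposite_faces(x, y):
    """x and y lie on opposite faces iff some coordinate is 0 for one and 1 for the other."""
    return any((xi == 0 and yi == 1) or (xi == 1 and yi == 0) for xi, yi in zip(x, y))

# Sample pairs of points forced onto opposite faces and confirm their colors differ.
for _ in range(10_000):
    i = rng.integers(d)
    x, y = rng.random(d), rng.random(d)
    x[i], y[i] = 0.0, 1.0
    assert on_opposite_faces(x, y) and orthant_color(x) != orthant_color(y)
print("orthant coloring is SLKKM on all sampled pairs")
```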
In the statement below, \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) denotes the open \(\varepsilon\)-ball at \(\vec{p}\) with respect to the \(\ell_{\infty}\) norm. **Theorem 1.3** (Neighborhood KKM-Lebesgue Theorem).: _Given an SLKKM coloring of \([0,1]^{d}\), for any \(\varepsilon\in(0,\infty)\) there exists a point \(\vec{p}\in[0,1]^{d}\) such that \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) contains points of at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) different colors. In particular, if \(\varepsilon\in(0,\frac{1}{2}]\) then \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) contains points of at least \(\left\lceil\left(1+\frac{2}{3}\varepsilon\right)^{d}\right\rceil\) different colors._ The proof will be given in Section 2, but for now we offer three comments about this theorem. First, we no longer need a finiteness assumption (as in the KKM-Lebesgue Theorem) because we are not working with closures\({}^{2}\). Second, restricting to \(\varepsilon\leq\frac{1}{2}\) in the above statement is reasonable because for \(\varepsilon>\frac{1}{2}\) the \(\ell_{\infty}\)\(\varepsilon\)-ball can be placed at the center of the unit cube and is then a superset of the cube and hence it contains all the points in the cube. Note that no two vertices of the cube can have the same color in an SLKKM coloring as they belong to opposite faces. This means that every SLKKM coloring uses at least \(2^{d}\) colors (and there exist SLKKM colorings with only \(2^{d}\) colors--e.g. color each of the \(2^{d}\) orthants a distinct color). Thus, for \(\varepsilon>\frac{1}{2}\), we don't need the bound of \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) in the Neighborhood KKM-Lebesgue Theorem since we just demonstrated a better (tight) bound of \(2^{d}\). For this reason, it is sensible to only consider \(\varepsilon\leq\frac{1}{2}\) in the statement of the Neighborhood KKM-Lebesgue Theorem and consequently obtain the cleaner (and only moderately worse) bound of \(\left\lceil\left(1+\frac{2}{3}\varepsilon\right)^{d}\right\rceil\). Footnote 2: The reason the KKM-Lebesgue Theorem requires finiteness is because for a finite collection of sets, the union of closures equals the closure of unions, but this property does not hold in general for infinite collections. Even without the finiteness condition, it is still the case that there exists a point \(\vec{p}\) such that every open set containing \(\vec{p}\) intersects at least \(d+1\) colors. See for example the proof of Lemma A.4 where this is demonstrated. The KKM-Lebesgue Theorem does not hold in general without the finiteness hypothesis as demonstrated by the non-finite SLKKM coloring which assigns a unique color to each point of \([0,1]^{d}\). Third, while the statement is given for fixed \(\varepsilon\) and \(d\), we have already encountered applications of this theorem where \(\varepsilon\) is a function of the dimension \(d\) (see [9, 10]), so we briefly note the asymptotic behavior of this bound. If \(\varepsilon\in O\left(\frac{1}{d}\right)\), then our lower bound on the number of colors given by the Neighborhood KKM-Lebesgue Theorem is an \(O(1)\) bound\({}^{3}\), which is asymptotically worse than the value \(d+1\) given in the KKM-Lebesgue Theorem. However, if \(\varepsilon\in\omega\left(\frac{\ln(d)}{d}\right)\) then our bound is super-polynomial in the dimension\({}^{4}\). In particular, if \(\varepsilon\in\Theta(1)\), then our bound is asymptotically exponential in \(d\). 
Footnote 3: This is because \(\lim_{d\to\infty}(1+\frac{\varepsilon}{d})^{d}=e^{\varepsilon}\). Footnote 4: Let \(k=(1+\frac{2}{3}\varepsilon)^{d}\). Because \(\varepsilon\in(0,\frac{1}{2}]\) we have \(\frac{2}{3}\varepsilon\in(0,\frac{1}{3}]\). Using the inequality \(\ln(1+x)\geq\frac{x}{2}\) for small enough \(x\) (in particular for \(x\in(0,\frac{1}{3}]\)) we have \(\ln(k)=d\ln(1+\frac{2}{3}\varepsilon)\geq\frac{1}{3}d\varepsilon\), so \(\varepsilon\leq\frac{3\ln(k)}{d}\). Thus, if the bound \(k\) is polynomial in \(d\), then \(\varepsilon\in O\left(\frac{\ln(d)}{d}\right)\), so if \(\varepsilon\in\omega\left(\frac{\ln(d)}{d}\right)\) then \(k\) is not polynomial in \(d\). ### The Lebesgue Covering Theorem and the KKM Lemma on the Cube We will now briefly discuss the standard statements of the Lebesgue covering theorem and the cubical version of the KKM Lemma and how they relate to the KKM-Lebesgue Theorem (Theorem 1.2). We can capture the hypotheses of the Lebesgue covering theorem and the cubical KKM lemma with the following two definitions. A Lebesgue cover is a finite closed cover of \([0,1]^{d}\) with no set containing points on opposite faces. Informally, a KKM cover is a finite closed cover of \([0,1]^{d}\) with the sets of the cover associated to the vertices of the cube; we require that any point in a face \(F\) is covered by one of the sets corresponding to the vertices defining \(F\). **Definition 1.4** (Lebesgue Cover).: _A Lebesgue cover of \([0,1]^{d}\) is an indexed family \(\mathcal{C}=\{C_{n}\}_{n\in[N]}\) of closed subsets of \(\mathbb{R}^{d}\) (for some \(N\in\mathbb{N}\)) which covers \([0,1]^{d}\) such that for each \(n\in[N]\), \(C_{n}\) does not contain points on opposite faces of the cube._ **Definition 1.5** (KKM Cover).: _A KKM cover of \([0,1]^{d}\) is an indexed family \(\mathcal{C}=\{C_{\vec{v}}\}_{\vec{v}\in\{0,1\}^{d}}\) of closed subsets of \(\mathbb{R}^{d}\) such that for each face \(F\) of the cube, \(F\subseteq\bigcup_{\vec{v}\in F\cap\{0,1\}^{d}}C_{\vec{v}}\). (In particular, the cube itself is a face, so the cube is covered.)_ While the two notions of covers are different from each other\({}^{5}\), they are both sufficient to guarantee the same conclusion as in the KKM-Lebesgue Theorem: at least \(d+1\) sets in the cover meet at some point. Footnote 5: For example, consider the indexed family \(\mathcal{C}=\{C_{\vec{v}}\}_{\vec{v}\in\{0,1\}^{d}}\) where \(C_{\vec{v}}=[0,1]^{d}\) for each \(\vec{v}\); this is a KKM cover, but not a Lebesgue cover. Conversely, a finite Lebesgue cover with cardinality exceeding \(|\{0,1\}^{d}|=2^{d}\) is not a KKM cover. **Theorem 1.6** (Lebesgue Covering Theorem).: _Given a Lebesgue cover of \([0,1]^{d}\), there exists a point \(\vec{p}\in[0,1]^{d}\) belonging to at least \(d+1\) sets in the cover (i.e. \(|\{n\in[N]:\vec{p}\in C_{n}\}|\geq d+1\))._ **Theorem 1.7** (Cubical KKM Lemma).: _Given a KKM cover of \([0,1]^{d}\), there exists a point \(\vec{p}\in[0,1]^{d}\) belonging to at least \(d+1\) sets in the cover (i.e. \(\left|\{\vec{v}\in\{0,1\}^{d}:\vec{p}\in C_{\vec{v}}\}\right|\geq d+1\))._ These results are well known and can be found for example in [5, 11, 4, 8, 1] for the Cubical KKM Lemma and in [6] or [3, Theorem IV 2] for the Lebesgue Covering Theorem. 
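A standard example satisfying both definitions is the family of closed orthants \(C_{\vec{v}}=\prod_{i=1}^{d}I_{v_{i}}\) with \(I_{0}=[0,\frac{1}{2}]\) and \(I_{1}=[\frac{1}{2},1]\). The sketch below (our own illustration, with hypothetical helper names) verifies the KKM condition of Definition 1.5 empirically: for randomly sampled faces \(F\) and points \(\vec{x}\in F\), it exhibits a vertex \(\vec{v}\) of \(F\) with \(\vec{x}\in C_{\vec{v}}\):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

def in_orthant(v, x):
    """Membership of x in the closed orthant C_v = prod_i ([0,1/2] if v_i=0 else [1/2,1])."""
    return all((xi <= 0.5) if vi == 0 else (xi >= 0.5) for vi, xi in zip(v, x))

for _ in range(10_000):
    # Sample a random face: each coordinate is {0}, {1}, or free ([0,1]).
    kind = rng.integers(0, 3, size=d)  # 0 -> {0}, 1 -> {1}, 2 -> [0,1]
    x = np.where(kind == 2, rng.random(d), kind.astype(float))
    # KKM condition: x must lie in C_v for some vertex v of this face; we construct one.
    v = tuple(int(k) if k != 2 else int(xi > 0.5) for k, xi in zip(kind, x))
    assert in_orthant(v, x)
print("KKM condition verified on all sampled faces")
```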
Though the Lebesgue Covering Theorem, the Cubical KKM Lemma, and the KKM-Lebesgue Theorem (our coloring combination of the two) are all naturally equivalent to each other (a careful proof of this folklore result is given in Subsection A.1), we chose to use the KKM-Lebesgue Theorem to represent these two more famous results and motivate our main result (the Neighborhood KKM-Lebesgue Theorem) for the following reasons. For our purposes, we don't want to work directly with a KKM cover because we need to deal with extensions along boundaries that are cumbersome to work with for a KKM cover, since sets can intersect opposite faces. We also don't want to work with Lebesgue covers because the definition of a Lebesgue cover actually implies that every set in the cover (when viewed as a subset of \([0,1]^{d}\)) has \(\ell_{\infty}\) diameter strictly less than \(1\), but we are okay with sets that have diameter \(1\) as long as they don't contain points on opposite faces (i.e. no points in the set attain \(\ell_{\infty}\) distance \(1\) from each other). Also, we don't want to require the sets to be closed, which is formally a requirement of both types of covers. Nonetheless, as one may expect, our main result (the Neighborhood KKM-Lebesgue Theorem) can be equivalently restated by replacing the SLKKM coloring hypothesis with either a Lebesgue cover (Corollary 1.8) or a KKM cover (Corollary 1.9). The proofs that these corollaries follow from the Neighborhood KKM-Lebesgue Theorem are given after the statements below. As this direction of the implications is sufficient to prove all three results, we don't provide the proofs of the reverse implications of the equivalence, but the main ideas are identical to those used to prove the equivalence of the Cubical KKM Lemma, the Lebesgue Covering Theorem, and the KKM-Lebesgue Theorem (found in Subsection A.1). **Corollary 1.8** (Neighborhood Lebesgue Theorem).: _Given a Lebesgue cover of \([0,1]^{d}\), for any \(\varepsilon\in(0,\infty)\) there exists a point \(\vec{p}\in[0,1]^{d}\) such that \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) intersects at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) sets in the cover. In particular, if \(\varepsilon\in(0,\frac{1}{2}]\) then \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) intersects at least \(\left\lceil\left(1+\frac{2}{3}\varepsilon\right)^{d}\right\rceil\) sets in the cover._ **Corollary 1.9** (Neighborhood KKM Theorem).: _Given a KKM cover of \([0,1]^{d}\), for any \(\varepsilon\in(0,\infty)\) there exists a point \(\vec{p}\in[0,1]^{d}\) such that \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) intersects at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) sets in the cover. In particular, if \(\varepsilon\in(0,\frac{1}{2}]\) then \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) intersects at least \(\left\lceil\left(1+\frac{2}{3}\varepsilon\right)^{d}\right\rceil\) sets in the cover._ Proof of Corollary 1.8 from the Neighborhood KKM-Lebesgue Theorem (Theorem 1.3).: Let \(N\in\mathbb{N}\) and \(\mathcal{C}=\{C_{n}\}_{n\in[N]}\) be a Lebesgue cover of \([0,1]^{d}\). 
Because this is a cover, every point of \([0,1]^{d}\) belongs to some set, so define \(\chi\) as follows: \[\chi:[0,1]^{d}\rightarrow[N]\] \[\chi(\vec{x})=\min\{n\in[N]:\vec{x}\in C_{n}\}.\] This is trivially a finite SLKKM coloring of \([0,1]^{d}\) because the codomain of \(\chi\) is finite and for \(\vec{x}^{(0)}\) and \(\vec{x}^{(1)}\) on opposite faces, there is no \(n\in[N]\) for which both \(\vec{x}^{(0)}\in C_{n}\) and \(\vec{x}^{(1)}\in C_{n}\) and thus \(\{n\in[N]:\vec{x}^{(0)}\in C_{n}\}\) and \(\{n\in[N]:\vec{x}^{(1)}\in C_{n}\}\) are disjoint, so \(\chi(\vec{x}^{(0)})\neq\chi(\vec{x}^{(1)})\). Note that \(\chi^{-1}(n)\) is the set of points of color \(n\). By the Neighborhood KKM-Lebesgue Theorem, there exists \(\vec{p}\in[0,1]^{d}\) such that \[\big{|}\{n\in[N]:B^{\circ}_{\infty}(\varepsilon,\vec{p})\cap\chi^{-1}(n)\neq\emptyset\}\big{|}\geq\Big{(}1+\tfrac{\varepsilon}{1+\varepsilon}\Big{)}^{d}\,.\] Fix such a \(\vec{p}\) for the remainder of the proof. For each \(n\in[N]\), observe that \(\chi^{-1}(n)\subseteq C_{n}\) because for any \(\vec{x}\in\chi^{-1}(n)\) we have \(\chi(\vec{x})=n\), so by definition of \(\chi\) we have \(\vec{x}\in C_{n}\). The following subset containment then follows immediately: \[\{n\in[N]:B^{\circ}_{\infty}(\varepsilon,\vec{p})\cap\chi^{-1}(n)\neq\emptyset\}\quad\subseteq\quad\{n\in[N]:B^{\circ}_{\infty}(\varepsilon,\vec{p})\cap C_{n}\neq\emptyset\}.\] Since the former has cardinality at least \((1+\tfrac{\varepsilon}{1+\varepsilon})^{d}\), so does the latter, which proves the result. (The "in particular" part of the result is just an arithmetic fact which is demonstrated in the proof of the Neighborhood KKM-Lebesgue Theorem (Theorem 1.3).) Proof of Corollary 1.9 from the Neighborhood KKM-Lebesgue Theorem (Theorem 1.3).: Let \(\mathcal{C}=\{C_{\vec{v}}\}_{\vec{v}\in\{0,1\}^{d}}\) be a KKM cover of \([0,1]^{d}\). For each \(\vec{x}\in[0,1]^{d}\), let \(F_{\vec{x}}\) denote the smallest face of the cube containing \(\vec{x}\) (i.e. \(F_{\vec{x}}\) is the intersection of all faces containing \(\vec{x}\), and it is well-known and easily verified that this intersection is also a face). By the defining property of a KKM cover, we have \(F_{\vec{x}}\subseteq\bigcup_{\vec{v}\in F_{\vec{x}}\cap\{0,1\}^{d}}C_{\vec{v}}\), so in particular there exists some \(\vec{v}\in F_{\vec{x}}\cap\{0,1\}^{d}\) with \(\vec{x}\in C_{\vec{v}}\). Define the function \(\chi\) as follows, where \(\min_{\mathrm{lex}}\) denotes the minimum element in a subset of \(\{0,1\}^{d}\) under the lexicographic ordering: \[\chi:[0,1]^{d}\to\{0,1\}^{d}\] \[\chi(\vec{x})=\min_{\mathrm{lex}}\{\vec{v}\in F_{\vec{x}}\cap\{0,1\}^{d}:\vec{x}\in C_{\vec{v}}\}\] We have already demonstrated that the set in the definition is not empty, so \(\chi\) is well-defined. We claim that \(\chi\) is a finite SLKKM coloring of \([0,1]^{d}\). The finiteness is trivial because the codomain of \(\chi\) is finite, so we need only show it is an SLKKM coloring. Suppose \(F^{(0)}\) and \(F^{(1)}\) are opposite faces of the cube (i.e. there is some coordinate \(j\in[d]\) such that \(\pi_{j}(F^{(0)})=\{0\}\) and \(\pi_{j}(F^{(1)})=\{1\}\)) and let \(\vec{x}^{(0)}\in F^{(0)}\) and \(\vec{x}^{(1)}\in F^{(1)}\). Because \(\pi_{j}(F^{(0)})\cap\pi_{j}(F^{(1)})=\emptyset\), it follows that \(F^{(0)}\cap F^{(1)}=\emptyset\), so \(F^{(0)}\) and \(F^{(1)}\) are disjoint sets. Now consider the faces \(F_{\vec{x}^{(0)}}\) and \(F_{\vec{x}^{(1)}}\). 
Because \(\vec{x}^{(0)}\in F^{(0)}\) and \(F_{\vec{x}^{(0)}}\) is by definition the intersection of all faces containing \(\vec{x}^{(0)}\), we have \(F_{\vec{x}^{(0)}}\subseteq F^{(0)}\) (and similarly replacing "\(0\)" with "\(1\)") so that also \(F_{\vec{x}^{(0)}}\) and \(F_{\vec{x}^{(1)}}\) are disjoint. By definition of \(\chi\) we have \(\chi(\vec{x}^{(0)})\in F_{\vec{x}^{(0)}}\) and \(\chi(\vec{x}^{(1)})\in F_{\vec{x}^{(1)}}\) showing that \(\chi(\vec{x}^{(0)})\neq\chi(\vec{x}^{(1)})\), so \(\chi\) is an SLKKM coloring. By the Neighborhood KKM-Lebesgue Theorem, there exists \(\vec{p}\in[0,1]^{d}\) such that \[\big{|}\{\vec{v}\in\{0,1\}^{d}:B^{\circ}_{\infty}(\varepsilon,\vec{p})\cap\chi^{-1}(\vec{v})\neq\emptyset\}\big{|}\geq\Big{(}1+\tfrac{\varepsilon}{1+\varepsilon}\Big{)}^{d}\,.\] Fix such a \(\vec{p}\) for the remainder of the proof. For each \(\vec{v}\in\{0,1\}^{d}\), observe that \(\chi^{-1}(\vec{v})\subseteq C_{\vec{v}}\) because for any \(\vec{x}\in\chi^{-1}(\vec{v})\) we have \(\chi(\vec{x})=\vec{v}\), so by definition of \(\chi\) we have \(\vec{x}\in C_{\vec{v}}\). The following subset containment then follows immediately: \[\{\vec{v}\in\{0,1\}^{d}:B^{\circ}_{\infty}(\varepsilon,\vec{p})\cap\chi^{-1}(\vec{v})\neq\emptyset\}\quad\subseteq\quad\{\vec{v}\in\{0,1\}^{d}:B^{\circ}_{\infty}(\varepsilon,\vec{p})\cap C_{\vec{v}}\neq\emptyset\}.\] Since the former has cardinality at least \((1+\tfrac{\varepsilon}{1+\varepsilon})^{d}\), so does the latter, which proves the result. (The "in particular" part of the result is just an arithmetic fact which is demonstrated in the proof of the Neighborhood KKM-Lebesgue Theorem (Theorem 1.3).) ### Sperner's Lemma on the Cube In light of the fact that our main result can (and should) be viewed as a neighborhood variant of the Lebesgue Covering Theorem and the Cubical KKM Lemma, it is natural to ask whether we can obtain a discretized version of our result to serve as a neighborhood version of Sperner's lemma on the cube, and the answer is yes. It is important to note that there are multiple natural ways to adapt Sperner's lemma from the simplex to the cube. For example, [1] considers subdividing the cube into simplices and using \(2^{d}\) colors (because this is natural in the broader polytope setting in which they work) while [5] considers subdividing the cube into smaller cubes but still using \(2^{d}\) colors, and [11] considers a subdivision into smaller cubes with either \(2^{d}\) colors or with \(d+1\) colors. Clearly, it would not make sense to try to consider our style of result for the cubical adaptations of Sperner's lemma which use \(d+1\) colors because we cannot possibly hope in general for some \(\varepsilon\)-ball to intersect \(\big{\lceil}\big{(}1+\tfrac{2}{3}\varepsilon\big{)}^{d}\big{\rceil}\)-many colors if only \(d+1\) colors are used. Thus, the natural context for us is the cubical adaptations of Sperner's lemma in which \(2^{d}\) colors are used. Another observation is that arbitrary triangulations or cube decompositions of \([0,1]^{d}\) defined by some set of points \(\Lambda\) are not strong enough for us. We are working with an \(\varepsilon\)-ball, so we need to have some information about how close together points in \(\Lambda\) are from each other. We define below the notion we will use to make our main result discrete; informally, a set \(\Lambda\) is called \(\rho\)-proximate if every point within a face \(F\) is distance at most \(\rho\) from a point in \(\Lambda\) which belongs to the same face \(F\). 
In the formal definition below, while \(\Lambda\) can be thought of as a discrete set, this is not necessary. **Definition 1.10** (\(\rho\)-Proximate Set).: _Let \(\Lambda\subseteq[0,1]^{d}\). For \(\rho\in[0,\infty)\), \(\Lambda\) is called \(\rho\)-proximate if for every face \(F\) of \([0,1]^{d}\) and for every \(\vec{x}\in F\), there exists \(\vec{y}\in F\cap\Lambda\) such that \(\left\|\vec{x}-\vec{y}\right\|_{\infty}\leq\rho\)._ _Remark 1.11_.: We could have equivalently rephrased the last part of the definition as "\(\ldots\) for every face \(F\) of \([0,1]^{d}\) and for every \(\vec{x}\in F\), we have \(F\cap\Lambda\cap\overline{B}_{\infty}(\rho,\vec{x})\neq\emptyset\)". Or we could have equivalently rephrased it as "\(\ldots\) for every face \(F\) of \([0,1]^{d}\), we have \(F\subseteq\bigcup_{\vec{y}\in F\cap\Lambda}\overline{B}_{\infty}(\rho,\vec{y})\)". Or we could have equivalently rephrased it as "\(\ldots\) for every face \(F\) of \([0,1]^{d}\), we have \(F\subseteq(F\cap\Lambda)+\overline{B}_{\infty}(\rho,\vec{0})\)" where "\(+\)" denotes the Minkowski sum of the two sets. \(\triangle\) Of particular note is that for any \(\rho\in[0,1]\) (taking \(n=\lfloor 1/\rho\rfloor\)) the grid \(\{0,\rho,2\rho,\ldots,(n-1)\rho,n\rho,1\}^{d}\) of \((n+2)^{d}\) points is \(\rho\)-proximate. This grid occurs naturally in many contexts and is one of the simplest settings in which we can consider discretizing the main result. Now, if we discretize the coloring hypothesis of the Neighborhood KKM-Lebesgue Theorem to a set of points that are \(\rho\)-proximate, then we get the same conclusion as the Neighborhood KKM-Lebesgue Theorem but have to account for this spacing using the triangle inequality. Therefore, an "effective \(\varepsilon\)" of "\(\varepsilon-\rho\)" appears in the result. **Theorem 1.12** (Neighborhood Sperner Lemma).: _Let \(\rho\in[0,\infty)\) and \(\Lambda\subseteq[0,1]^{d}\) be \(\rho\)-proximate. Let \(\rho^{\prime}=\min(\rho,\frac{1}{2})\). Given an SLKKM coloring of \(\Lambda\), for any \(\varepsilon\in(0,\infty)\) there exists a point \(\vec{p}\in[0,1]^{d}\) such that \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) contains points of at least \(\left\lceil\left(1+\frac{(\varepsilon-\rho^{\prime})}{1+(\varepsilon-\rho^{\prime})}\right)^{d}\right\rceil\) different colors. In particular, if \(\varepsilon\in(0,\frac{1}{2}]\) then \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) contains points of at least \(\left\lceil\left(1+\frac{2}{3}(\varepsilon-\rho^{\prime})\right)^{d}\right\rceil\) different colors._ The proof is given in Section 3. _Remark 1.13_.: The Neighborhood Sperner Lemma and the Neighborhood KKM-Lebesgue Theorem are in fact naturally equivalent. We will prove the Neighborhood Sperner Lemma as a corollary of the Neighborhood KKM-Lebesgue Theorem showing one direction of the equivalence. For the other direction, we recover the statement of the Neighborhood KKM-Lebesgue Theorem from the Neighborhood Sperner Lemma in the special case that \(\Lambda=[0,1]^{d}\) and \(\rho=0\) (because \([0,1]^{d}\) is \(0\)-proximate). \(\triangle\) It should be unsurprising that the definition of \(\rho\)-proximate includes the property that every point of the cube is within distance \(\rho\) of a point in \(\Lambda\) because we want to show that some point \(\vec{p}\) is \(\varepsilon\)-close to many colors, so we need to know that the color classes (and thus the points in \(\Lambda\)) are not all mutually far apart\({}^{6}\). 
The condition that such points must belong to the same face may be less obvious but probably not surprising considering the nature of Sperner's lemma and the KKM lemma; the reason we need such a property is demonstrated by the set \(\Lambda=(0,1)^{d}\). If we didn't require that the point \(\vec{y}\) in Definition 1.10 is in both \(F\) and \(\Lambda\), then the set \(\Lambda=(0,1)^{d}\) would be \(\rho\)-proximate for each \(\rho\in(0,\infty)\), and yet it would be a valid SLKKM coloring to assign every point of \(\Lambda\) the same color, and thus we could not guarantee the existence of an \(\varepsilon\)-ball intersecting more than \(1\) color. As a consequence of this requirement in Definition 1.10, we obtain the following simple property. Footnote 6: A set with this requirement is called a \(\rho\)-net in some contexts. _Remark 1.14_.: If \(\Lambda\) is \(\rho\)-proximate for some \(\rho\in[0,\infty)\), then \(\Lambda\) contains all vertices of the cube (i.e. \(\Lambda\supseteq\{0,1\}^{d}\)). This is because for any vertex \(\vec{v}\in\{0,1\}^{d}\), the singleton set \(F=\prod_{i=1}^{d}\{v_{i}\}=\{\vec{v}\}\) is a face of the cube, so by definition of \(\rho\)-proximate, for each \(\vec{x}\in F\), there is some associated \(\vec{y}\in F\cap\Lambda\), so in particular \(F\cap\Lambda\neq\emptyset\). Since \(F\) contains only the point \(\vec{v}\), we must have \(\vec{v}\in\Lambda\). \(\triangle\) ### Outline The remainder of the paper is laid out as follows. In Subsection 1.4, we briefly state additional notation we use. In Section 2, we prove the Neighborhood KKM-Lebesgue Theorem. In Section 3, we prove the Neighborhood Sperner Lemma as a corollary. We finish in Section 4 with a short discussion of how we hope the bounds might be improved, and we mention some limitations on what is possible. Proofs of lemmas omitted from the main text are included in Appendix A. For completeness, we include a proof of the folklore Continuous Multi-Pigeonhole Principle (Proposition B.7) in Appendix B which is stated later and is central to the proof of the Neighborhood KKM-Lebesgue Theorem. ### Notation We write \(B^{\circ}_{\mathbb{I}\cdot\mathbb{I}}(\varepsilon,\vec{p})\) (resp. \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\)) to indicate the open ball of radius \(\varepsilon\) at \(\vec{p}\) with respect to a generic norm \(\left\|\cdot\right\|\) (resp. the \(\ell_{\infty}\) norm). Similarly, we use \(\overline{B}_{\mathbb{I}\cdot\mathbb{I}}(\varepsilon,\vec{p})\) and \(\overline{B}_{\infty}(\varepsilon,\vec{p})\) for closed balls. We use \(v_{\mathbb{I}\cdot\mathbb{I},d}\) to denote the volume of the unit radius ball in dimension \(d\) with respect to a generic norm \(\left\|\cdot\right\|\). We write \(m\) for the Lebesgue measure and \(m_{out}\) for the induced outer measure. For two sets \(A,B\subseteq\mathbb{R}^{d}\), we write \(A+B\) to indicate the Minkowski sum \(A+B=\left\{\vec{a}+\vec{b}:\vec{a}\in A\text{ and }\vec{b}\in B\right\}\). ## 2 Proof of the Neighborhood KKM-Lebesgue Theorem We will require a few lemmas for the proof which we state here. Proofs are provided in Subsection A.2. The first result will later allow us to pass a result through a limit because the answer will be an integer. **Fact A.5**.: _For any \(\alpha\in\mathbb{R}\), there exists \(\gamma\in\mathbb{R}\) such that \(\gamma<\alpha\) and \(\left\lceil\gamma\right\rceil=\left\lceil\alpha\right\rceil\)._ The second result is an inequality that can be proved using basic calculus. 
The interpretation of the result is that (for appropriate parameters) in the expression \((x^{1/d}+\alpha)^{d}\) we can "approximately factor out" the \(x\) term because there is both a \(d\)th root and \(d\)th power involved, obtaining the no-larger expression \(x(1+\alpha)^{d}\). **Lemma A.6**.: _For \(d\in[1,\infty)\) and \(x\in[0,1]\) and \(\alpha\in[0,\infty)\), it holds that \((x^{1/d}+\alpha)^{d}\geq x(1+\alpha)^{d}\)._ The third result is a consequence of the Generalized Brunn-Minkowski Inequality (discussed in Subsection A.2) which says that for non-empty measurable sets \(A,B\subseteq\mathbb{R}^{d}\) for which \(A+B\) is also measurable, it holds that \(m(A+B)\geq\left(m(A)^{\frac{1}{d}}+m(B)^{\frac{1}{d}}\right)^{d}\). The main content of our corollary is dealing with non-measurable sets in such a way that we can apply the Generalized Brunn-Minkowski Inequality. **Corollary A.12**.: _Let \(d\in\mathbb{N}\) and \(Y\subseteq\mathbb{R}^{d}\) and \(\varepsilon\in(0,\infty)\). Then \(Y+B^{\circ}_{\infty}(\varepsilon,\vec{0})\) is open (and thus Borel measurable), and \(m(Y+B^{\circ}_{\infty}(\varepsilon,\vec{0}))\geq\left(m_{out}(Y)^{\frac{1}{d}}+2\varepsilon\right)^{d}\)._ The final result we need is heavily based on the ideas behind the common proof of Blichfeldt's theorem and can be viewed as a continuous version of the pigeonhole principle with multiplicity. It says that if we have a family of subsets of \(S\) and in total their measure is at least \(k\cdot m(S)\), then there is a point belonging to at least \(\left\lceil k\right\rceil\) sets in the family. While this is a known result, it is frequently enough stated informally or only in the (very simple) special case \(\left\lceil k\right\rceil=2\) that for completeness and convenience we include a proof in Appendix B. **Proposition B.7** (Continuous Multi-Pigeonhole Principle).: _Let \(d\in\mathbb{N}\) and \(S\subset\mathbb{R}^{d}\) be measurable with finite measure. Let \(\mathcal{A}\) be a family of measurable subsets of \(S\) and let \(k=\left\lceil\frac{\sum_{A\in\mathcal{A}}m(A)}{m(S)}\right\rceil\). If \(k<\infty\), then there exists \(\vec{p}\in S\) such that \(\vec{p}\) belongs to at least \(k\) members of \(\mathcal{A}\). If \(k=\infty\), then for any integer \(n\) there exists \(\vec{p}\in S\) such that \(\vec{p}\) belongs to at least \(n\) members of \(\mathcal{A}\)._ Figure 1: (a) shows an SLKKM coloring \(\chi\) of the unit cube \([-\frac{1}{2},\frac{1}{2}]^{d}\) for \(d=2\) (i.e. no color includes points on opposite edges). (b) shows the natural extension \(\gamma\) of that coloring to \([-\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon]^{d}\). The extension is obtained by mapping each point \(\vec{y}\in[-\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon]^{d}\) to the point \(\vec{x}\in[-\frac{1}{2},\frac{1}{2}]^{d}\) for which each coordinate value is restricted to be within \([-\frac{1}{2},\frac{1}{2}]\), and then \(\vec{y}\) is given whatever color \(\vec{x}\) had. (c), (e), and (g) show three of the five colors and demonstrate that there is at least one quadrant of the \(\varepsilon\)-ball that can be Minkowski summed with the color so that the sum remains a subset of the extended cube. For red it is the lower right quadrant, for purple it is the upper right, and for gray it could be the upper left (shown) or the upper right. (d), (f), and (h) show the resulting Minkowski sum for each color. 
Utilizing the Brunn-Minkowski inequality, this set will have substantially greater area, by a factor of at least \((1+\frac{\varepsilon}{1+\varepsilon})^{d}\).

Now we are ready to prove the main result of this work, restated below. The proof is illustrated in Figure 1.

**Theorem 1.3** (Neighborhood KKM-Lebesgue Theorem).: _Given an SLKKM coloring of \([0,1]^{d}\), for any \(\varepsilon\in(0,\infty)\) there exists a point \(\vec{p}\in[0,1]^{d}\) such that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) contains points of at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) different colors. In particular, if \(\varepsilon\in(0,\frac{1}{2}]\) then \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) contains points of at least \(\left\lceil\left(1+\frac{2}{3}\varepsilon\right)^{d}\right\rceil\) different colors._

Proof.: For convenience, we will assume that the cube is \([-\frac{1}{2},\frac{1}{2}]^{d}\) rather than \([0,1]^{d}\). Let \(C\) be a set (of colors) and \(\chi\colon[-\frac{1}{2},\frac{1}{2}]^{d}\to C\) be an SLKKM coloring of the unit cube \([-\frac{1}{2},\frac{1}{2}]^{d}\). Let \(C^{\prime}=\operatorname{range}(\chi)\) so that we know every color in \(C^{\prime}\) appears for some point in the cube. We first deal with the case where \(C^{\prime}\) has infinite cardinality7. If \(C^{\prime}\) has infinite cardinality, then because we can cover the cube with finitely many \(\varepsilon\)-balls, one of these balls must contain points of infinitely many colors, so the result is true. Thus, we assume from now on that \(C^{\prime}\) has finite cardinality.

Footnote 7: If one accepts the axiom of choice, then we don’t need to deal with this as a special case, but by doing so, we can avoid requiring the axiom of choice in the proof.

For each color \(c\in C^{\prime}\) we will let \(X_{c}\) denote the set of points assigned color \(c\) by \(\chi\)--that is, \(X_{c}=\chi^{-1}(c)\). Note that the hypothesis that no color includes points of opposite faces formally means that for every color \(c\in C^{\prime}\), the set \(X_{c}\) has the property that for each coordinate \(i\in[d]\), the projection \(\pi_{i}(X_{c})=\{x_{i}:\vec{x}\in X_{c}\}\) does not contain both \(-\frac{1}{2}\) and \(\frac{1}{2}\).

The first step in the proof is to extend the coloring \(\chi\) to the larger cube \([-\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon]^{d}\) in a natural way. Consider the following function \(f\) which truncates points in the larger interval to be in the smaller interval:

\[f\colon[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]\to[-\tfrac{1}{2},\tfrac{1}{2}]\]
\[f(y)\stackrel{{\text{def}}}{{=}}\begin{cases}-\tfrac{1}{2}&y\leq-\tfrac{1}{2}\\ y&y\in(-\tfrac{1}{2},\tfrac{1}{2})\\ \tfrac{1}{2}&y\geq\tfrac{1}{2}\end{cases}\]

Let \(\vec{f}\colon[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\to[-\tfrac{1}{2},\tfrac{1}{2}]^{d}\) be the function which is \(f\) in each coordinate: \(\vec{f}(\vec{y})\stackrel{{\text{def}}}{{=}}\langle f(y_{i})\rangle_{i=1}^{d}\). Now extend the coloring \(\chi\) to the coloring \(\gamma\colon[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\to C^{\prime}\) defined by

\[\gamma(\vec{x})\stackrel{{\text{def}}}{{=}}\chi\big{(}\vec{f}\,(\vec{x})\,\big{)}.\]

For each color \(c\in C^{\prime}\), let \(Y_{c}=\gamma^{-1}(c)\) denote the points assigned color \(c\) by \(\gamma\) and note that \(X_{c}\subseteq Y_{c}\).
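As an aside, this truncate-and-extend step is simple enough to phrase in code. The following minimal Python sketch (our illustration, not part of the paper) implements \(\vec{f}\) and \(\gamma\) for the cube \([-\frac{1}{2},\frac{1}{2}]^{d}\); the placeholder coloring `chi` colors each point by its orthant, which is one valid SLKKM coloring:

```python
import numpy as np

def truncate(y, half=0.5):
    """The map f applied coordinate-wise: clamp each coordinate into [-1/2, 1/2]."""
    return np.clip(np.asarray(y, dtype=float), -half, half)

def extend_coloring(chi):
    """Return the extended coloring gamma(y) = chi(f(y)) on the larger cube."""
    return lambda y: chi(truncate(y))

# Placeholder SLKKM coloring for d = 2: color by orthant (strictly positive or not),
# so no single color touches two opposite faces of [-1/2, 1/2]^2.
chi = lambda x: tuple(bool(t) for t in (np.asarray(x) > 0))
gamma = extend_coloring(chi)

print(gamma([0.7, -0.6]))  # colored like the truncated point (0.5, -0.5)
```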
Consistent with this notation, we will typically refer to a point in the unit cube as \(\vec{x}\) and a point in the extended cube as \(\vec{y}\). We make the following claim, which implies that for each color \(c\in C^{\prime}\), the set \(Y_{c}\) of points of that color in the extended coloring is contained in a set bounded away from one side of the extended cube \([-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\) in each coordinate.

**Claim A**.: _For each color \(c\in C^{\prime}\) there exists an orientation \(\vec{v}\in\{-1,1\}^{d}\) such that \(Y_{c}\subseteq\prod_{i=1}^{d}v_{i}\cdot(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\)._

Proof of Claim.: Fix an arbitrary coordinate \(i\in[d]\). Note that for every \(\vec{y}\in Y_{c}\) we have \(\vec{f}(\vec{y})\in X_{c}\), which is to say that \(\vec{y}\) has the same color in the extended coloring as \(\vec{f}(\vec{y})\) does in the original coloring (see justification8).

Footnote 8: For every \(\vec{y}\in Y_{c}\) we have (by definition of \(Y_{c}\)) that \(\gamma(\vec{y})=c\) and (by definition of \(\gamma\)) that \(\gamma(\vec{y})=\chi(\vec{f}(\vec{y}))\), showing that \(\chi(\vec{f}(\vec{y}))=c\) and thus (by definition of \(X_{c}\)) that \(\vec{f}(\vec{y})\in X_{c}\).

Note that if there is some \(\vec{y}\in Y_{c}\) with \(y_{i}\leq-\tfrac{1}{2}\), then \(f(y_{i})=-\tfrac{1}{2}\), so the fact that \(X_{c}\ni\vec{f}(\vec{y})\) implies that \(\pi_{i}(X_{c})\ni f(y_{i})=-\tfrac{1}{2}\). Similarly, if there is some \(\vec{y}\in Y_{c}\) with \(y_{i}\geq\tfrac{1}{2}\), then \(\pi_{i}(X_{c})\ni\tfrac{1}{2}\). Recall that by hypothesis, \(\pi_{i}(X_{c})\) does not contain both \(-\tfrac{1}{2}\) and \(\tfrac{1}{2}\), which means it is either the case that for all \(\vec{y}\in Y_{c}\) we have \(y_{i}>-\tfrac{1}{2}\) (so \(\pi_{i}(Y_{c})\subseteq(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\)) or it is the case that for all \(\vec{y}\in Y_{c}\) we have \(y_{i}<\tfrac{1}{2}\) (so \(\pi_{i}(Y_{c})\subseteq[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2})\)). Thus we can choose \(v_{i}\in\{-1,1\}\) such that \(\pi_{i}(Y_{c})\subseteq v_{i}\cdot(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\). Since this is true for each coordinate \(i\in[d]\) we can select \(\vec{v}\in\{-1,1\}^{d}\) such that

\[Y_{c}\subseteq\prod_{i=1}^{d}\pi_{i}(Y_{c})\subseteq\prod_{i=1}^{d}v_{i}\cdot(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\]

as claimed.

For an orientation \(\vec{v}\in\{-1,1\}^{d}\), let \(B_{\vec{v}}\) denote the set \(B_{\vec{v}}\stackrel{{\text{def}}}{{=}}\prod_{i=1}^{d}-v_{i}\cdot(0,\varepsilon)\), which should be interpreted as an open orthant of the \(\ell_{\infty}\) \(\varepsilon\)-ball centered at the origin--specifically the orthant opposite the orientation \(\vec{v}\). Building on Claim A, we get the following:

**Claim B**.: _For each color \(c\in C^{\prime}\), there exists an orientation \(\vec{v}\in\{-1,1\}^{d}\) such that \(Y_{c}+B_{\vec{v}}\subseteq[-\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon]^{d}\)._

Proof of Claim.: Let \(\vec{v}\) be an orientation given in Claim A for color \(c\).
We get the following chain of containments:

\[Y_{c}+B_{\vec{v}}=Y_{c}+\left(\prod_{i=1}^{d}-v_{i}\cdot(0,\varepsilon)\right)\] (def'n of \(B_{\vec{v}}\))
\[\subseteq\left(\prod_{i=1}^{d}v_{i}\cdot(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\right)+\left(\prod_{i=1}^{d}-v_{i}\cdot(0,\varepsilon)\right)\] (Claim A)
\[=\left(\prod_{i=1}^{d}v_{i}\cdot(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\right)+\left(\prod_{i=1}^{d}v_{i}\cdot(-\varepsilon,0)\right)\] (factor a negative)
\[=\prod_{i=1}^{d}v_{i}\cdot(-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon)\] (Minkowski sum of rectangles)
\[\subseteq[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}.\] (\(v_{i}\in\{-1,1\}\))

This proves the claim.

We also claim that \(Y_{c}+B_{\vec{v}}\) has substantial measure.

**Claim C**.: _For each color \(c\in C^{\prime}\) and any orientation \(\vec{v}\in\{-1,1\}^{d}\), the set \(Y_{c}+B_{\vec{v}}\) is Borel measurable and \(m(Y_{c}+B_{\vec{v}})\geq m_{out}(Y_{c})\cdot\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\)._

Proof of Claim.: Let \(M=(1+\varepsilon)^{d}\), which is the measure of \(\prod_{i=1}^{d}v_{i}\cdot(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\), and because by Claim A, \(Y_{c}\) is a subset of some such set, we have \(m_{out}(Y_{c})\leq M\). We have that \(Y_{c}+B_{\vec{v}}\) is Borel measurable and that \(m\left(Y_{c}+B_{\vec{v}}\right)\geq\left(m_{out}(Y_{c})^{\frac{1}{d}}+\varepsilon\right)^{d}\) by Corollary A.12 (because \(B_{\vec{v}}\) is some translation of \(B^{\circ}_{\infty}(\tfrac{\varepsilon}{2},\vec{0})\) and translations are irrelevant to the measure concerns of Corollary A.12). Thus, we have the following chain of inequalities:

\[m(Y_{c}+B_{\vec{v}})\geq\left(m_{out}(Y_{c})^{1/d}+\varepsilon\right)^{d}\] (above)
\[=M\cdot\left(\frac{m_{out}(Y_{c})^{1/d}}{M^{1/d}}+\frac{\varepsilon}{M^{1/d}}\right)^{d}\] (factor out \(M\))
\[\geq M\cdot\left(\frac{m_{out}(Y_{c})}{M}\right)\cdot\left(1+\frac{\varepsilon}{M^{1/d}}\right)^{d}\] (Lemma A.6)
\[=m_{out}(Y_{c})\cdot\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\] (simplify and use \(M=(1+\varepsilon)^{d}\))

Now, consider the indexed family \(\mathcal{A}=\{Y_{c}+B_{\vec{v}(c)}\}_{c\in C^{\prime}}\) (where \(\vec{v}(c)\) is an orientation for \(c\) as in Claim A and Claim B), noting that this family has finite cardinality because \(C^{\prime}\) has finite cardinality.
Considering the sum of measures of sets in \(\mathcal{A}\), we have the following:

\[\sum_{A\in\mathcal{A}}m(A)=\sum_{c\in C^{\prime}}m\left(Y_{c}+B_{\vec{v}(c)}\right)\] (def'n of \(\mathcal{A}\); measurability was shown above)
\[\geq\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\cdot\sum_{c\in C^{\prime}}m_{out}(Y_{c})\] (Claim C and linearity of summation)
\[\geq\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\cdot m_{out}\left(\bigcup_{c\in C^{\prime}}Y_{c}\right)\] (countable/finite subadditivity of outer measures)
\[=\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\cdot m_{out}\left([-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\right)\] (the \(Y_{c}\)'s partition \([-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\))
\[=\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\cdot(1+2\varepsilon)^{d}\] (evaluate outer measure)

By Claim B, each member of \(\mathcal{A}\) is a subset of \([-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\), so by Proposition B.7, there exists a point \(\vec{p}\in[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\) that belongs to at least

\[\left\lceil\frac{\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\cdot(1+2\varepsilon)^{d}}{(1+2\varepsilon)^{d}}\right\rceil=\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\]

sets in \(\mathcal{A}\). That is, \(\vec{p}\) belongs to \(Y_{c}+B_{\vec{v}(c)}\) for at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) colors \(c\in C^{\prime}\). For each such color \(c\), it follows that \(\vec{p}+(-\varepsilon,\varepsilon)^{d}\) intersects \(Y_{c}\) (see justification9). Note that \(\vec{p}+(-\varepsilon,\varepsilon)^{d}=B^{\circ}_{\infty}(\varepsilon,\vec{p})\), showing that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) contains points of at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) colors (according to the coloring \(\gamma\), since we are discussing the sets \(Y_{c}\)).

Footnote 9: If \(\vec{p}\in Y_{c}+B_{\vec{v}(c)}\subseteq Y_{c}+(-\varepsilon,\varepsilon)^{d}\), then by definition of Minkowski sum there exists \(\vec{y}\in Y_{c}\) and \(\vec{b}\in(-\varepsilon,\varepsilon)^{d}\) such that \(\vec{p}=\vec{y}+\vec{b}\), so \(Y_{c}\ni\vec{y}=\vec{p}-\vec{b}\) and also \(\vec{p}-\vec{b}\in\vec{p}+(-\varepsilon,\varepsilon)^{d}\), demonstrating that these two sets contain a common point.

What we really want, though, is a point in the unit cube that has this property rather than a point in the extended cube, and we want it with respect to the original coloring \(\chi\) rather than the extended coloring \(\gamma\). We will show that the point \(\vec{f}\,(\vec{p})\) suffices.

**Claim D**.: _If \(c\in C^{\prime}\) is a color for which \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\cap Y_{c}\neq\emptyset\), then also \(B^{\circ}_{\infty}(\varepsilon,\vec{f}\,(\vec{p}))\cap X_{c}\neq\emptyset\)._

Proof of Claim.: Let \(\vec{y}\in B^{\circ}_{\infty}(\varepsilon,\vec{p})\cap Y_{c}\). Then because \(\vec{y}\in B^{\circ}_{\infty}(\varepsilon,\vec{p})\), we have \(\left\lVert\vec{y}-\vec{p}\right\rVert_{\infty}<\varepsilon\), so for each coordinate \(i\in[d]\) we have \(\left\lvert y_{i}-p_{i}\right\rvert<\varepsilon\). It is easy to analyze the 9 cases (or 3 by symmetries) arising in the definition of \(f\) to see that this implies \(\left\lvert f(y_{i})-f(p_{i})\right\rvert<\varepsilon\) as well (i.e.
\(f\) maps pairs of values in its domain so that they are no farther apart), thus \(\left\lVert\vec{f}\,(\vec{y})-\vec{f}\,(\vec{p})\right\rVert_{\infty}<\varepsilon\) and thus \(\vec{f}\,(\vec{y})\in B^{\circ}_{\infty}(\varepsilon,\vec{f}\,(\vec{p}))\). Also, as justified in the prior footnote 8, for any \(\vec{y}\in Y_{c}\) we have \(\vec{f}(\vec{y})\in X_{c}\), so that \(\vec{f}\,(\vec{y})\in B^{\circ}_{\infty}(\varepsilon,\vec{f}\,(\vec{p}))\cap X_{c}\), which shows that the intersection is non-empty.

Thus, because \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) intersects \(Y_{c}\) for at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) choices of color \(c\in C^{\prime}\), by Claim D, \(\vec{f}(\vec{p})\) is a point in the unit cube for which \(B^{\circ}_{\infty}(\varepsilon,\vec{f}\,(\vec{p}))\) intersects \(X_{c}\) for at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) different colors \(c\in C^{\prime}\). That is, this ball contains points from at least this many of the original color sets.

The final step in the proof of the theorem is to clean up the expression with an inequality. Note that \(C^{\prime}\) must contain at least \(2^{d}\) colors because each of the \(2^{d}\) corners of the unit cube must be assigned a distinct color, since any pair of corners belongs to an opposite pair of faces of the cube. For this reason it is trivial that for \(\varepsilon>\tfrac{1}{2}\) there is a point \(\vec{p}\) such that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) intersects at least \(2^{d}\) colors: just let \(\vec{p}\) be the midpoint of the unit cube. Thus, the only interesting case is \(\varepsilon\in(0,\tfrac{1}{2}]\), and for such \(\varepsilon\) we have \(1+\varepsilon\leq\tfrac{3}{2}\) and thus \(\tfrac{\varepsilon}{1+\varepsilon}\geq\tfrac{2}{3}\varepsilon\), showing that \(\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\geq(1+\tfrac{2}{3}\varepsilon)^{d}\), which completes the proof.

## 3 Proof of the Neighborhood Sperner Lemma

Now we restate and prove the Neighborhood Sperner Lemma. Basically, we assign each point of \([0,1]^{d}\) to a color of a point in \(\Lambda\) nearby in a careful way and show that the resulting coloring still doesn't use the same color on opposite faces. We then use the Neighborhood KKM-Lebesgue Theorem to find the point \(\vec{p}\) and pass back the result because \(\Lambda\) approximates \([0,1]^{d}\). Part of the proof is illustrated in Figure 2.

**Theorem** (Neighborhood Sperner Lemma).: _Let \(\rho\in[0,\infty)\) and \(\Lambda\subseteq[0,1]^{d}\) be \(\rho\)-proximate. Let \(\rho^{\prime}=\min(\rho,\frac{1}{2})\). Given an SLKKM coloring of \(\Lambda\), for any \(\varepsilon\in(0,\infty)\) there exists a point \(\vec{p}\in[0,1]^{d}\) such that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) contains points of at least \(\left\lceil\left(1+\frac{(\varepsilon-\rho^{\prime})}{1+(\varepsilon-\rho^{\prime})}\right)^{d}\right\rceil\) different colors. In particular, if \(\varepsilon\in(0,\frac{1}{2}]\) then \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) contains points of at least \(\left\lceil\left(1+\frac{2}{3}(\varepsilon-\rho^{\prime})\right)^{d}\right\rceil\) different colors._

Proof.: Let \(C\) be a set and \(\chi:\Lambda\to C\) denote the coloring of \(\Lambda\), and let \(C^{\prime}=\operatorname{range}(\chi)\). We first deal with the case where \(C^{\prime}\) has infinite cardinality10.
If \(C^{\prime}\) has infinite cardinality, then because we can cover the cube (and thus \(\Lambda\)) with finitely many \(\varepsilon\)-balls, one of these balls must contain points of infinitely many colors, so the result is true. Thus, we assume from now on that \(C^{\prime}\) has finite cardinality.

Footnote 10: If one accepts the axiom of choice, then we don’t need to deal with this as a special case, but by doing so, we can avoid requiring the axiom of choice in the proof.

The next step in the proof is the following observation.

**Claim A**.: _If \(\rho\geq\frac{1}{2}\) then the fact that \(\Lambda\) is \(\rho\)-proximate implies that \(\Lambda\) is \(\frac{1}{2}\)-proximate._

Proof of Claim.: For any face \(F\) of the cube, we have \(F=\prod_{i=1}^{d}F_{i}\) where each \(F_{i}\) is \(\{0\}\), \(\{1\}\), or \([0,1]\). This means that for any \(\vec{x}\in F\), we can find a vertex \(\vec{v}\) of the cube (a point where each coordinate \(v_{i}\) is \(0\) or \(1\)) which also belongs to \(F\) such that in each coordinate we have \(\left|x_{i}-v_{i}\right|\leq\frac{1}{2}\). Since \(\vec{v}\) is a vertex, by Remark 1.14, we have \(\vec{v}\in\Lambda\). Thus, \(\vec{v}\in F\cap\Lambda\) and \(\left\|\vec{x}-\vec{v}\right\|_{\infty}\leq\frac{1}{2}\).

Thus, we may continue knowing that \(\Lambda\) is not only \(\rho\)-proximate but in fact \(\rho^{\prime}\)-proximate where \(\rho^{\prime}=\min(\rho,\frac{1}{2})\).

Next we comment on what happens if the term \((\varepsilon-\rho^{\prime})\) is non-positive. Essentially, we have to show that the expressions \((1+\frac{(\varepsilon-\rho^{\prime})}{1+(\varepsilon-\rho^{\prime})})\) and \((1+\frac{2}{3}(\varepsilon-\rho^{\prime}))\) are never large negative values, to make sure the bound we are giving does not take on large positive values when \(d\) is even.

**Claim B**.: _The stated result holds when \(\varepsilon\leq\rho^{\prime}\)._

Proof of Claim.: Note that when \(\varepsilon\leq\rho^{\prime}\), then because \(\rho^{\prime}\in[0,\frac{1}{2}]\) this implies \(\varepsilon\in(0,\frac{1}{2}]\), so \((\varepsilon-\rho^{\prime})\in(-\frac{1}{2},0]\). On the interval \(x\in(-\frac{1}{2},0]\), the expression \(1+\frac{x}{1+x}\) is in \((0,1]\), so we have \(1+\frac{(\varepsilon-\rho^{\prime})}{1+(\varepsilon-\rho^{\prime})}\in(0,1]\) and thus \((1+\frac{(\varepsilon-\rho^{\prime})}{1+(\varepsilon-\rho^{\prime})})^{d}\in(0,1]\), so \(\left\lceil(1+\frac{(\varepsilon-\rho^{\prime})}{1+(\varepsilon-\rho^{\prime})})^{d}\right\rceil=1\). Similarly, because \((\varepsilon-\rho^{\prime})\in(-\frac{1}{2},0]\), we have \(\frac{2}{3}(\varepsilon-\rho^{\prime})\in(-\frac{1}{3},0]\), so \(1+\frac{2}{3}(\varepsilon-\rho^{\prime})\in(\frac{2}{3},1]\), so \((1+\frac{2}{3}(\varepsilon-\rho^{\prime}))^{d}\in(0,1]\), and again \(\left\lceil(1+\frac{2}{3}(\varepsilon-\rho^{\prime}))^{d}\right\rceil=1\). Because \(\Lambda\) is non-empty (by Remark 1.14), it is trivial to find a point where the \(\varepsilon\)-ball contains points of at least \(1\) color, showing that the result is true when \(\varepsilon\leq\rho^{\prime}\).

Thus, we assume from now on that \(\varepsilon>\rho^{\prime}\), which implies \((\varepsilon-\rho^{\prime})\in(0,\infty)\) (the hypothesis we need on the ball radius to apply the Neighborhood KKM-Lebesgue Theorem). Next, for each \(\vec{x}\in[0,1]^{d}\), let \(F^{(\vec{x})}\) denote the smallest face containing \(\vec{x}\) (i.e.
the intersection of all faces containing \(\vec{x}\)) and let \(C^{(\vec{x})}\) denote the set of colors present in the face \(F^{(\vec{x})}\) and within \(\rho^{\prime}\) of \(\vec{x}\) (formally, \(C^{(\vec{x})}=\{\chi(\vec{y}):\vec{y}\in F^{(\vec{x})}\cap\Lambda\cap\overline{B}_{\infty}(\rho^{\prime},\vec{x})\}\)), noting that \(C^{(\vec{x})}\) is non-empty by Remark 1.11. Let \(\gamma:[0,1]^{d}\to C^{\prime}\subseteq C\) map each \(\vec{x}\) to some11 color in \(C^{(\vec{x})}\). We claim that \(\gamma\) is an SLKKM coloring so that we will be able to apply the Neighborhood KKM-Lebesgue Theorem to \(\gamma\).

Footnote 11: Because \(C^{\prime}\) has finite cardinality, this does not require the axiom of choice.

**Claim C**.: _The coloring \(\gamma\) does not assign the same color to points on opposite faces._

Figure 2: (a) shows a set \(\Lambda\subseteq[0,1]^{2}\) which is \(\rho\)-proximate for \(\rho=\frac{1}{6}\) (i.e. every vertex is displayed, every point of each edge is within distance \(\frac{1}{6}\) of some displayed point on the same edge, and every point in the interior is within distance \(\frac{1}{6}\) of some displayed point). The distance \(\rho\) is shown visually in the upper left. (b) shows an SLKKM coloring of \(\Lambda\) (i.e. a function \(\chi:\Lambda\to C=\{\text{Red},\text{Blue},\text{Purple},\text{Yellow}\}\) in which no color is used on opposite faces). (c) shows how this coloring is used to produce a coloring of \([0,1]^{2}\) (i.e. a function \(\gamma:[0,1]^{2}\to C\)): an order is put on the set of colors \(C\) (in this case Red, Blue, Purple, Yellow as shown at the top of (c)) and each point \(\vec{x}\) of the cube is mapped to the first color in the ordering for which there is a point \(\vec{y}\in\Lambda\) of this color that is (1) within distance \(\rho\) of \(\vec{x}\), and (2) is in the smallest face containing \(\vec{x}\). For example, if \(\vec{x}\) is on an edge, then the only points \(\vec{y}\) considered are points in \(\Lambda\) on the same edge which are distance at most \(\rho\) away. (d) clarifies the coloring \(\gamma\) by emphasizing the colors on the vertices, the edges, and the boundaries between colors.

Proof of Claim.: We show that points on opposite faces are assigned different colors by \(\gamma\). Let \(F^{(0)}\) and \(F^{(1)}\) be opposite faces of the cube--that is, there is some coordinate \(j\) such that \(\pi_{j}(F^{(0)})=\{0\}\) and \(\pi_{j}(F^{(1)})=\{1\}\). Let \(\vec{x}^{(0)}\in F^{(0)}\) and \(\vec{x}^{(1)}\in F^{(1)}\) be arbitrary. Because \(F^{(\vec{x}^{(0)})}\) is the intersection of all faces containing \(\vec{x}^{(0)}\), we have \(\vec{x}^{(0)}\in F^{(\vec{x}^{(0)})}\subseteq F^{(0)}\). By definition of \(\gamma\), there is some \(\vec{y}^{(0)}\in F^{(\vec{x}^{(0)})}\cap\Lambda\subseteq F^{(0)}\cap\Lambda\) such that \(\gamma(\vec{x}^{(0)})=\chi(\vec{y}^{(0)})\). Similarly, there is some \(\vec{y}^{(1)}\in F^{(1)}\cap\Lambda\) such that \(\gamma(\vec{x}^{(1)})=\chi(\vec{y}^{(1)})\). By hypothesis on the coloring \(\chi\), because \(\vec{y}^{(0)}\) and \(\vec{y}^{(1)}\) belong to opposite faces of the cube (i.e. \(F^{(0)}\) and \(F^{(1)}\)), we have \(\chi(\vec{y}^{(0)})\neq\chi(\vec{y}^{(1)})\), showing that \(\gamma(\vec{x}^{(0)})\neq\gamma(\vec{x}^{(1)})\).

The following claim says that for any point \(\vec{p}\), all of the colors (of \(\gamma\)) appearing in the \((\varepsilon-\rho^{\prime})\) ball at \(\vec{p}\) also appear (as colors of \(\chi\)) in \(\Lambda\) within the \(\varepsilon\) ball at \(\vec{p}\).
The connection back to \(\Lambda\) below is because for any \(c\in C^{\prime}\), we have \(\chi^{-1}(c)\subseteq\Lambda\).

**Claim D**.: _The following subset containment holds for each point \(\vec{p}\in[0,1]^{d}\) (see comment12):_

Footnote 12: In the former set we express \(c\in\operatorname{range}(\gamma)\) rather than \(c\in C^{\prime}\), because it is possible that \(\operatorname{range}(\gamma)\subsetneq C^{\prime}\). The coloring \(\gamma\) was defined as one of many choices, and in the natural choices we would have \(\gamma(\vec{y})=\chi(\vec{y})\) for each \(\vec{y}\in\Lambda\), but we did not require this, so \(\gamma\) is not necessarily an extension of \(\chi\), and it is possible that it does not surject onto \(C^{\prime}\).

\[\{c\in\operatorname{range}(\gamma):\gamma^{-1}(c)\cap B^{\circ}_{\infty}(\varepsilon-\rho^{\prime},\vec{p})\neq\emptyset\}\quad\subseteq\quad\{c\in\operatorname{range}(\chi)=C^{\prime}:\chi^{-1}(c)\cap B^{\circ}_{\infty}(\varepsilon,\vec{p})\neq\emptyset\}\]

Proof of Claim.: If \(c\) belongs to the left set, then \(\gamma^{-1}(c)\cap B^{\circ}_{\infty}(\varepsilon-\rho^{\prime},\vec{p})\neq\emptyset\), so let \(\vec{x}\in\gamma^{-1}(c)\cap B^{\circ}_{\infty}(\varepsilon-\rho^{\prime},\vec{p})\). Then \(\gamma(\vec{x})=c\) and using the defining property of \(\gamma\) that \(\gamma(\vec{x})\in C^{(\vec{x})}\), we have the following:

\[c=\gamma(\vec{x})\in C^{(\vec{x})}=\{\chi(\vec{y}):\vec{y}\in F^{(\vec{x})}\cap\Lambda\cap\overline{B}_{\infty}(\rho^{\prime},\vec{x})\}.\]

This means that there is some \(\vec{y}\in\Lambda\cap\overline{B}_{\infty}(\rho^{\prime},\vec{x})\) such that \(\chi(\vec{y})=c\) (i.e. \(\vec{y}\in\chi^{-1}(c)\)). Also, for this \(\vec{y}\), because \(\left\lVert\vec{y}-\vec{x}\right\rVert_{\infty}\leq\rho^{\prime}\) and \(\left\lVert\vec{x}-\vec{p}\right\rVert_{\infty}<\varepsilon-\rho^{\prime}\) we have by the triangle inequality that \(\vec{y}\in B^{\circ}_{\infty}(\varepsilon,\vec{p})\). Thus, \(\vec{y}\) demonstrates that \(\chi^{-1}(c)\cap B^{\circ}_{\infty}(\varepsilon,\vec{p})\) is non-empty which shows that \(c\) belongs to the right set.

Now we can finish off the proof. By the Neighborhood KKM-Lebesgue Theorem (using the fact that \((\varepsilon-\rho^{\prime})\in(0,\infty)\) along with Claim C), there exists \(\vec{p}\in[0,1]^{d}\) such that \(B^{\circ}_{\infty}(\varepsilon-\rho^{\prime},\vec{p})\) intersects at least \(\left\lceil(1+\frac{(\varepsilon-\rho^{\prime})}{1+(\varepsilon-\rho^{\prime})})^{d}\right\rceil\) colors (formally, \(\{c\in\operatorname{range}(\gamma):\gamma^{-1}(c)\cap B^{\circ}_{\infty}(\varepsilon-\rho^{\prime},\vec{p})\neq\emptyset\}\) has cardinality at least \(\left\lceil(1+\frac{(\varepsilon-\rho^{\prime})}{1+(\varepsilon-\rho^{\prime})})^{d}\right\rceil\)). Thus, by Claim D the set \(\{c\in\operatorname{range}(\chi)=C^{\prime}:\chi^{-1}(c)\cap B^{\circ}_{\infty}(\varepsilon,\vec{p})\neq\emptyset\}\) also has cardinality at least \(\left\lceil(1+\frac{(\varepsilon-\rho^{\prime})}{1+(\varepsilon-\rho^{\prime})})^{d}\right\rceil\), which is what we set out to prove. (Informally, this latter set is the colors \(c\) for which there is a point in \(\Lambda\cap B^{\circ}_{\infty}(\varepsilon,\vec{p})\) that is mapped to \(c\) by the original coloring \(\chi\).)

Finally, if \(\varepsilon\in(0,\frac{1}{2}]\), then because we have at this point that \(\varepsilon>\rho^{\prime}\), we in fact have \(\varepsilon-\rho^{\prime}\in(0,\frac{1}{2}]\).
Thus, by the same inequalities used in the proof of the Neighborhood KKM-Lebesgue Theorem, we have \(\left\lceil\left(1+\frac{(\varepsilon-\rho^{\prime})}{1+(\varepsilon-\rho^{\prime})}\right)^{d}\right\rceil\geq\left\lceil(1+\frac{2}{3}(\varepsilon-\rho^{\prime}))^{d}\right\rceil\), which completes the proof.

## 4 Discussion

In this section, we will give some discussion of the bound that we achieved in the Neighborhood KKM-Lebesgue Theorem (and equivalently in the Neighborhood Sperner Lemma), including some limitations on improving that bound and some desires for future improvements of our result. We begin by defining the best possible function we could use to replace the expression "\(\left\lceil(1+\frac{\varepsilon}{1+\varepsilon})^{d}\right\rceil\)" in the statement of the Neighborhood KKM-Lebesgue Theorem. Let \(K^{\circ}:\mathbb{N}\times(0,\infty)\to\mathbb{N}\) be defined by

\[K^{\circ}(d,\varepsilon)\stackrel{{\text{def}}}{{=}}\max\left\{\kappa\in\mathbb{N}:\begin{array}{l}\text{for every SLKKM coloring of }[0,1]^{d},\text{ there}\\ \text{exists a point }\vec{p}\in[0,1]^{d}\text{ such that}\\ B^{\circ}_{\infty}(\varepsilon,\vec{p})\text{ intersects at least }\kappa\text{ colors}\end{array}\right\}\]

and define the function \(\overline{K}\) similarly but using closed balls instead of open balls. A few comments are in order. First, this maximum is well defined because there exist SLKKM colorings of \([0,1]^{d}\) using \(2^{d}\) colors (e.g. color each of the \(2^{d}\) orthants of the cube a unique color, dealing with boundaries between colors arbitrarily), and so the set that the maximum is being taken over is necessarily a subset of \([2^{d}]\) and thus has a maximum. Second, it is trivially the case for all \(d\in\mathbb{N}\) and \(\varepsilon\in(0,\infty)\) that \(\overline{K}(d,\varepsilon)\geq K^{\circ}(d,\varepsilon)\) because the closed \(\varepsilon\)-ball is a superset of the open \(\varepsilon\)-ball. Third, by the same superset reasoning we also know that for each fixed \(d\in\mathbb{N}\), both functions \(K^{\circ}\) and \(\overline{K}\) are non-decreasing in \(\varepsilon\) (we will refer to both of these properties as monotonicity throughout the discussion). We can say much more about \(K^{\circ}\) and \(\overline{K}\), and the information from the discussion that follows is summarized in Table 1.

Immediate lower bound: The first obvious thing we can say is that for all \(d\in\mathbb{N}\) and \(\varepsilon\in(0,\infty)\), we know that \(K^{\circ}(d,\varepsilon)\geq d+1\) (ditto for \(\overline{K}\)), which follows straightforwardly from the KKM-Lebesgue Theorem taking care with the infinite cardinality13. This is a better bound for small \(\varepsilon\) (relative to \(d\)) than our bound in the Neighborhood KKM-Lebesgue Theorem because \(\lim_{\varepsilon\to 0}\bigl[(1+\frac{\varepsilon}{1+\varepsilon})^{d}\bigr]=1<d+1\), so for sufficiently small \(\varepsilon\) it holds that \((1+\frac{\varepsilon}{1+\varepsilon})^{d}\leq d+1\).

Footnote 13: One can “collapse” the possibly infinitely many colors into just \(2^{d}\) colors to obtain a new finite SLKKM coloring, and by the Neighborhood KKM-Lebesgue Theorem, there is a point at the closure of at least \(d+1\) of the collapsed colors, and any open set around this point intersects at least \(d+1\) of the original color sets. See Lemma A.4 for example, where this result shows up as part of the larger proof.
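As a quick numerical aside (our sketch, not part of the paper), one can tabulate where this crossover occurs, i.e., where the ceiling bound of the Neighborhood KKM-Lebesgue Theorem overtakes the \(d+1\) bound:

```python
import math

def nbhd_bound(d, eps):
    # Lower bound from the Neighborhood KKM-Lebesgue Theorem.
    return math.ceil((1 + eps / (1 + eps)) ** d)

for d in (2, 5, 10):
    for eps in (0.01, 0.1, 0.5):
        nb = nbhd_bound(d, eps)
        print(f"d={d:2d} eps={eps:4.2f}  nbhd={nb:3d}  d+1={d+1:3d}  best={max(nb, d + 1)}")
```

For instance, at \(d=10\) the \(d+1=11\) bound wins for \(\varepsilon=0.1\), while the neighborhood bound (\(\lceil(4/3)^{10}\rceil=18\)) wins at \(\varepsilon=0.5\).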
Trivial tight bound for large \(\varepsilon\): The second obvious thing we can say is that for any \(d\in\mathbb{N}\) and \(\varepsilon>\frac{1}{2}\), we have \(K^{\circ}(d,\varepsilon)=2^{d}\). This is because the open \(\ell_{\infty}\) \(\varepsilon\)-ball placed at the center of the unit \(d\)-cube is a superset of the cube itself, so it intersects all \(2^{d}\) corners--no two of which have the same color in an SLKKM coloring--so \(K^{\circ}(d,\varepsilon)\geq 2^{d}\). And (by definition) \(K^{\circ}(d,\varepsilon)\leq 2^{d}\) because there are SLKKM colorings with only \(2^{d}\) colors. Similarly, for any \(d\in\mathbb{N}\) and \(\varepsilon\geq\frac{1}{2}\) (note the non-strict inequality this time), we have \(\overline{K}(d,\varepsilon)=2^{d}\).

Perspective of this paper: The third thing that we can say is that for all \(d\in\mathbb{N}\) and \(\varepsilon\in(0,\infty)\) we have \(\overline{K}(d,\varepsilon)\geq K^{\circ}(d,\varepsilon)\geq\left\lceil(1+\frac{\varepsilon}{1+\varepsilon})^{d}\right\rceil\) by the Neighborhood KKM-Lebesgue Theorem. In other words, the purpose of this paper is to put some lower bound on \(K^{\circ}\) (and consequently on \(\overline{K}\)).

Non-trivial tight bound for small \(\varepsilon\): The fourth thing we can say is something that we don't believe is at all obvious; it turns out that we know the value of \(K^{\circ}\) (and \(\overline{K}\)) exactly for all dimensions for a specific small regime of \(\varepsilon\) near \(0\). Specifically, for any \(d\in\mathbb{N}\) and \(\varepsilon\in(0,\frac{1}{2d}]\), we know that \(d+1\geq\overline{K}(d,\varepsilon)\geq K^{\circ}(d,\varepsilon)\); along with the "first obvious thing" we said, this gives equality with \(d+1\). The reason for this is the following. We demonstrated in [9, Theorem 7.20] that in each dimension \(d\), there is a partition \(\mathcal{P}_{d}\) of \(\mathbb{R}^{d}\) consisting of translates of the half-open cube \([0,1)^{d}\) with the property that for every point \(\vec{p}\in\mathbb{R}^{d}\), the closed ball \(\overline{B}_{\infty}(\frac{1}{2d},\vec{p})\) intersects at most \(d+1\) cubes in the partition (we called such partitions \((d+1,\frac{1}{2d})\)-secluded partitions). This immediately gives an SLKKM coloring of \([0,1]^{d}\) with the same property: define the coloring \(\chi:[0,1]^{d}\to\mathcal{P}_{d}\) by mapping each point in \([0,1]^{d}\) to the unique member of \(\mathcal{P}_{d}\) which it belongs to. This is an SLKKM coloring because no member of \(\mathcal{P}_{d}\) contains points \(\ell_{\infty}\) distance \(1\) apart, and so no points distance \(1\) apart are given the same color (i.e. no points on opposite faces are given the same color). In fact, this coloring uses exactly \(2^{d}\) colors14 and consists of color sets which are rectangles15. This demonstrates the existence of (finite) SLKKM colorings for which no closed \(\ell_{\infty}\) \(\varepsilon\)-ball intersects more than \(d+1\) colors even when taking \(\varepsilon\) as large as \(\varepsilon=\frac{1}{2d}\). Once more for clarity: for any \(d\in\mathbb{N}\) and \(\varepsilon\in(0,\frac{1}{2d}]\) we have that \(\overline{K}(d,\varepsilon)=K^{\circ}(d,\varepsilon)=d+1\).

Footnote 14: The range of \(\chi\) can be shown to have cardinality exactly \(2^{d}\). This is because for any \(X\in\mathcal{P}_{d}\) which intersects \([0,1]^{d}\), it follows by simple analysis that because \(X\) is a translate of \([0,1)^{d}\), it must be that \(X\) contains one of the corners of \([0,1]^{d}\).
And since no member of \(\mathcal{P}_{d}\) contains two corners of \([0,1]^{d}\) (because any two corners of \([0,1]^{d}\) are \(\ell_{\infty}\) distance \(1\) apart, and no two points in a translate of \([0,1)^{d}\) are distance \(1\) apart), the subset of members of \(\mathcal{P}_{d}\) which intersect \([0,1]^{d}\) (i.e. the range of \(\chi\)) are in bijection with the \(2^{d}\) corners of \([0,1]^{d}\).

Non-trivial upper bound: We can generalize the result above by utilizing our newer unit cube partitions [10, Theorem 1.9] rather than our initial constructions in [9]. Specifically, for each \(d,n\in\mathbb{N}\) there is a partition \(\mathcal{P}_{d,n}\) of \(\mathbb{R}^{d}\) by translates of \([0,1)^{d}\) which is \(\left((n+1)^{\left\lceil\frac{d}{n}\right\rceil},\frac{1}{2n}\right)\)-secluded (i.e. for every \(\vec{p}\in\mathbb{R}^{d}\), the closed ball \(\overline{B}_{\infty}(\frac{1}{2n},\vec{p})\) intersects at most \((n+1)^{\left\lceil\frac{d}{n}\right\rceil}\)-many members of the partition)16. As in the prior paragraph, this immediately gives rise to an SLKKM coloring of \([0,1]^{d}\) such that for every \(\vec{p}\in[0,1]^{d}\), we have that \(\overline{B}_{\infty}(\frac{1}{2n},\vec{p})\) contains points of at most \((n+1)^{\left\lceil\frac{d}{n}\right\rceil}\) colors. Thus, (along with monotonicity) we have for all \(d,n\in\mathbb{N}\) and \(\varepsilon\in(0,\frac{1}{2n}]\) that \(K^{\circ}(d,\varepsilon)\leq\overline{K}(d,\varepsilon)\leq(n+1)^{\left\lceil\frac{d}{n}\right\rceil}\). Taking \(n=1\), we recover the trivial upper bound of \(2^{d}\) when \(\varepsilon\in(0,\frac{1}{2}]\) (which is tight for \(\overline{K}\) at \(\frac{1}{2}\)), and taking \(n=d\) we recover the upper bound of \(d+1\) when \(\varepsilon\in(0,\frac{1}{2d}]\) (which we have already said is tight on this whole interval). Since we can freely choose \(n\), this gives some non-trivial upper bound on \(\overline{K}\) and \(K^{\circ}\) for every choice of \(d\) and \(\varepsilon\). That is to say that for given \(\varepsilon\in(0,\frac{1}{2}]\) and \(d\in\mathbb{N}\), we take any \(n\in\mathbb{N}\cap[1,\frac{1}{2\varepsilon}]\) (in particular we can take the \(n\) in this range which minimizes \((n+1)^{\left\lceil\frac{d}{n}\right\rceil}\)) so that \(\varepsilon\in(0,\frac{1}{2n}]\) and thus by the above \(K^{\circ}(d,\varepsilon)\leq\overline{K}(d,\varepsilon)\leq(n+1)^{\left\lceil\frac{d}{n}\right\rceil}\). Based on the fact that for any \(d\in\mathbb{N}\) this bound is tight at the extremes \(\varepsilon\in(0,\frac{1}{2d}]\) and \(\varepsilon=\frac{1}{2}\), we wonder if this bound is nearly tight everywhere else. We quickly note that we really only need to consider \(n\in[d]\) and not \(n\in\mathbb{N}\), because for \(n>d\) we have \((n+1)^{\left\lceil\frac{d}{n}\right\rceil}>d+1\), so this is a worse upper bound than \(d+1\); moreover, for \(\varepsilon\in(0,\frac{1}{2n}]\) (the domain of \(\varepsilon\) for which this bound applies) we already know by the "Non-trivial tight bound for small \(\varepsilon\)" paragraph that \(\varepsilon\in(0,\frac{1}{2n}]\subseteq(0,\frac{1}{2d}]\), so \(K^{\circ}(d,\varepsilon)\leq\overline{K}(d,\varepsilon)\leq d+1\). Thus, we get no new information from this bound when \(n>d\). We also emphasize that it is important for \(n\) to be an integer in the above bounds, as is necessary in proving [10, Theorem 1.9].
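To make the choice of \(n\) concrete, the following sketch (our illustration; `secluded_upper_bound` is a hypothetical helper name, not from the papers) minimizes \((n+1)^{\lceil d/n\rceil}\) over the admissible integers:

```python
import math

def secluded_upper_bound(d, eps):
    """Upper bound on the number of colors hit, from the secluded-partition
    colorings: minimize (n+1)^ceil(d/n) over integers 1 <= n <= min(d, floor(1/(2*eps)))."""
    n_max = min(d, math.floor(1 / (2 * eps)))
    if n_max < 1:
        return 2 ** d  # for eps > 1/2 only the trivial bound applies
    return min((n + 1) ** math.ceil(d / n) for n in range(1, n_max + 1))

print(secluded_upper_bound(10, 1 / 20))  # n = 10 is admissible, recovering d + 1 = 11
print(secluded_upper_bound(10, 1 / 4))   # only n in {1, 2}: min(2**10, 3**5) = 243
```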
Difference between \(K^{\circ}\) and \(\overline{K}\): The final thing we will say about these functions is that \(\overline{K}\) and \(K^{\circ}\) are not in general equal, and in fact they differ noticeably when \(\varepsilon=\frac{1}{2}\). We have already stated that \(\overline{K}(d,\frac{1}{2})=2^{d}\) (in the "Trivial tight bound for large \(\varepsilon\)" paragraph), and we will now argue that \(K^{\circ}(d,\frac{1}{2})\leq 2^{d-1}+1\). Consider first \(d=2\) along with Figure 3; it is obviously impossible for any positioning of the _open_ \(\ell_{\infty}\) \(\frac{1}{2}\)-ball to intersect all \(2^{d}=4\) colors because it can't include both of the colors which are single points, as those points are \(\ell_{\infty}\) distance \(1\) apart. This idea extends into higher dimensions by using an SLKKM coloring with \(2^{d}\) colors which are identified with the vertices of the cube. Each of the \(2^{d-1}\) colors in \(\{0,1\}^{d}\) with even hamming weight is used only on the corresponding vertex (e.g. in Figure 3 these are \(\langle 0,0\rangle\) (purple) and \(\langle 1,1\rangle\) (blue)). Every other point on the cube is given a color with odd hamming weight; this is possible because aside from vertices (which are given their own color), every other point of the cube belongs to a face which is a superset of an edge of the cube, and every edge contains both an even and an odd hamming weight vertex (by the standard definition of the \(d\)-dimensional hypercube graph), so there is at least \(1\) odd hamming weight color which can be used. This results in a KKM-style cover (but not with closed sets) which is in fact an SLKKM coloring because each point can be given a color of one of the vertices on the smallest face to which it belongs, and thus points on opposite faces are not given the same color because opposite faces are necessarily disjoint. Such colorings have \(2^{d-1}\) colors which are single points, so even the open \(\ell_{\infty}\) \(\frac{1}{2}\)-ball cannot hit more than one of them because all of the vertices are distance \(1\) apart. Thus, the \(\varepsilon=\frac{1}{2}\)-ball can hit at most \(2^{d-1}+1\) colors (possibly all of the odd hamming weight colors and one even hamming weight color), so \(K^{\circ}(d,\frac{1}{2})\leq 2^{d-1}+1\). And similarly, for \(\varepsilon<\frac{1}{2}\) we have \(\overline{K}(d,\varepsilon)\leq K^{\circ}(d,\frac{1}{2})\leq 2^{d-1}+1\).

In light of this, we believe it is interesting to ask for each dimension \(d\) what the ranges of the functions \(\overline{K}\) and \(K^{\circ}\) are (as functions of \(\varepsilon\)).

**Question 4.1**.: _For each \(d\in\mathbb{N}\), what is \(\operatorname{range}(\overline{K}(d,\cdot))\) and \(\operatorname{range}(K^{\circ}(d,\cdot))\)? In particular, what are the cardinalities of these ranges?_

We know that the range is trivially a subset of the integers between \(d+1\) and \(2^{d}\) as already discussed, but now we see that it is in fact a proper subset because \(K^{\circ}(d,\cdot)\) and \(\overline{K}(d,\cdot)\) are monotonic, so the hamming coloring discussion above shows that neither range includes any values strictly between \(2^{d-1}+1\) and \(2^{d}\).
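The key counting fact behind the hamming coloring, that the \(2^{d-1}\) even-weight vertices are pairwise \(\ell_{\infty}\) distance \(1\) apart, is easy to sanity-check (our sketch, not from the paper):

```python
import itertools

d = 3
vertices = itertools.product((0, 1), repeat=d)
even = [v for v in vertices if sum(v) % 2 == 0]  # the 2^(d-1) singleton-color vertices

# Any two distinct even-weight vertices differ in some coordinate, and coordinates
# are in {0,1}, so their l_inf distance is exactly 1; an open l_inf 1/2-ball can
# therefore contain at most one of them.
dists = {max(abs(a - b) for a, b in zip(u, v))
         for u, v in itertools.combinations(even, 2)}
print(len(even), dists)  # prints: 4 {1}
```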
We wonder if the functions \(K^{\circ}(d,\cdot)\) and \(\overline{K}(d,\cdot)\) are constant along these \(d\)-many open intervals:

\[\left(0,\frac{1}{2d}\right),\quad\left(\frac{1}{2d},\frac{1}{2(d-1)}\right),\quad\left(\frac{1}{2(d-1)},\frac{1}{2(d-2)}\right),\quad\ldots,\quad\left(\frac{1}{8},\frac{1}{6}\right),\quad\left(\frac{1}{6},\frac{1}{4}\right),\quad\left(\frac{1}{4},\frac{1}{2}\right).\]

If so, this might align nicely with the upper bounds that we obtained in the "Non-trivial upper bound" paragraph using [10, Theorem 1.9], which gave a separate upper bound on each such interval. We do at least know that \(K^{\circ}(d,\cdot)\) is broken into non-trivial intervals (i.e. intervals which are not singleton sets) because it is left continuous (justified as follows). For any SLKKM coloring of \([0,1]^{d}\) and \(\varepsilon\in(0,\infty)\), there is a point \(\vec{p}\in[0,1]^{d}\) such that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) contains points of at least \(K^{\circ}(d,\varepsilon)\) different colors by definition. Pick \(K^{\circ}(d,\varepsilon)\)-many points--each of a distinct color that is included (which is a finite number of points because \(K^{\circ}(d,\varepsilon)\leq 2^{d}\)). Because each of these finitely many points is contained in the _open_ ball \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\), for each \(\varepsilon^{\prime}<\varepsilon\) sufficiently close to \(\varepsilon\), we have that \(B^{\circ}_{\infty}(\varepsilon^{\prime},\vec{p})\) contains all of these points, and so includes at least \(K^{\circ}(d,\varepsilon)\) colors. Thus, \(\lim_{\varepsilon^{\prime}\uparrow\varepsilon}K^{\circ}(d,\varepsilon^{\prime})\geq K^{\circ}(d,\varepsilon)\), and we get the other inequality by the monotonicity of \(K^{\circ}\), showing that \(K^{\circ}\) is left continuous:

\[\lim_{\varepsilon^{\prime}\uparrow\varepsilon}K^{\circ}(d,\varepsilon^{\prime})=K^{\circ}(d,\varepsilon).\]

Either by using a similar argument, or by noting that for all \(\varepsilon^{\prime}<\varepsilon\) we have \(K^{\circ}(d,\varepsilon^{\prime})\leq\overline{K}(d,\varepsilon^{\prime})\leq K^{\circ}(d,\varepsilon)\), we have by a squeeze theorem that also

\[\lim_{\varepsilon^{\prime}\uparrow\varepsilon}\overline{K}(d,\varepsilon^{\prime})=K^{\circ}(d,\varepsilon).\]

Furthermore, on the interior of any interval \((\varepsilon_{a},\varepsilon_{b}]\) for which \(K^{\circ}(d,\cdot)\) is constant, we also know that \(\overline{K}(d,\cdot)\) is constant because for \(\varepsilon\in(\varepsilon_{a},\varepsilon_{b})\) we have \(B^{\circ}_{\infty}(\varepsilon_{a},\vec{0})\subseteq\overline{B}_{\infty}(\varepsilon,\vec{0})\subseteq B^{\circ}_{\infty}(\varepsilon_{b},\vec{0})\), so \(K^{\circ}(d,\varepsilon_{a})\leq\overline{K}(d,\varepsilon)\leq K^{\circ}(d,\varepsilon_{b})\).

Bounding the constant in the Neighborhood KKM-Lebesgue Theorem: A consequence of the "Difference between \(K^{\circ}\) and \(\overline{K}\)" paragraph is that any improvement to the Neighborhood KKM-Lebesgue Theorem using a bound of the form \((1+c\varepsilon)^{d}\) for some constant \(c\) must have the property that \((1+c\frac{1}{2})^{d}\leq 2^{d-1}+1\) for all \(d\). Solving for \(c\) gives \(c\leq 2\left(\left(2^{d-1}+1\right)^{1/d}-1\right)\). Graphing shows that \(d=3\) is the integer where this takes the smallest value and shows that \(c\leq 1.42\).
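This graphing step is easy to reproduce numerically; a one-off check (ours, not part of the paper):

```python
# Verify that c(d) = 2 * ((2^(d-1) + 1)^(1/d) - 1) is minimized at d = 3.
cs = {d: 2 * ((2 ** (d - 1) + 1) ** (1 / d) - 1) for d in range(1, 100)}
d_min = min(cs, key=cs.get)
print(d_min, round(cs[d_min], 4))  # prints: 3 1.42
```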
This is not an asymptotic claim; we are only saying that if we want to replace the constant \(\frac{2}{3}\) in the Neighborhood KKM-Lebesgue Theorem with some other constant \(c\), it must be that \(c\leq 1.42\). In particular, we cannot hope to obtain the bound of \((1+2\varepsilon)^{d}\) (a bound which we were able to achieve in [10] for colorings (i.e. partitions) of \(\mathbb{R}^{d}\) in which all color sets (i.e. partition members) had at most unit outer measure or at most unit diameter).

\begin{table}
\begin{tabular}{|l|l|l|l|}
\hline
\(\varepsilon\) & \(K^{\circ}(d,\varepsilon)\) & \(\overline{K}(d,\varepsilon)\) & Reason \\
\hline
\(\in(0,\infty)\) & \(\leq 2^{d}\) & \(\leq 2^{d}\) & Trivial \\
\(\in(\frac{1}{2},\infty)\) & \(=2^{d}\) & \(=2^{d}\) & Trivial \\
\hline
\(=\frac{1}{2}\) & \(\leq 2^{d-1}+1\) & \(=2^{d}\) & Hamming coloring discussion (cf. Figure 3) \\
\hline
\(\in(0,\infty)\) & \(\geq d+1\) & \(\geq d+1\) & KKM-Lebesgue Theorem \\
\(\in(0,\infty)\) & \(\geq(1+\frac{\varepsilon}{1+\varepsilon})^{d}\) & \(\geq(1+\frac{\varepsilon}{1+\varepsilon})^{d}\) & Neighborhood KKM-Lebesgue Theorem \\
\(\in(0,\frac{1}{2}]\) & \(\geq(1+\frac{2}{3}\varepsilon)^{d}\) & \(\geq(1+\frac{2}{3}\varepsilon)^{d}\) & Neighborhood KKM-Lebesgue Theorem \\
\hline
\(\in(0,\frac{1}{2d}]\) & \(=d+1\) & \(=d+1\) & [9, Theorem 7.20] \& KKM-Lebesgue Theorem \\
\(\in(0,\frac{1}{2n}],n\in\mathbb{N}\) & \(\leq(n+1)^{\left\lceil\frac{d}{n}\right\rceil}\) & \(\leq(n+1)^{\left\lceil\frac{d}{n}\right\rceil}\) & [10, Theorem 1.9] \\
\hline
\end{tabular}
\end{table}
Table 1: Known information about the ideal functions \(K^{\circ}\) and \(\overline{K}\).

Figure 3: An example of a coloring in which each even hamming weight vertex has a color only used at that vertex. It is impossible for an \(\varepsilon=\frac{1}{2}\) open \(\ell_{\infty}\) ball to contain more than one of the even hamming weight colors because all vertices are distance \(1\) apart.

Conclusion: All of this demonstrates that the lower bound of \((1+\frac{\varepsilon}{1+\varepsilon})^{d}\) or \((1+\frac{2}{3}\varepsilon)^{d}\) in our Neighborhood KKM-Lebesgue Theorem is not in general tight. For \(\varepsilon\) tending to \(0\), this bound predicts \(1\) even though the exact bound is \(d+1\) for all \(\varepsilon\in\left(0,\frac{1}{2d}\right]\). Furthermore, our bound in the Neighborhood KKM-Lebesgue Theorem predicts only slightly more than \((1+\frac{1}{3})^{d}\) for \(\varepsilon\) slightly bigger than \(\frac{1}{2}\) when the tight bound is \(2^{d}\). For these reasons, we suspect that the lower bound of the Neighborhood KKM-Lebesgue Theorem is not tight for any value of \(\varepsilon\), and we hope that future work is able to establish better lower bounds on \(K^{\circ}(d,\varepsilon)\) and \(\overline{K}(d,\varepsilon)\). In addition, we hope that future work can either improve our upper bounds on these functions or prove that they are nearly tight. We believe it would be nice if all of this could be done with a single technique. For example, as summarized in Table 1, the current lower bounds on \(K^{\circ}\) and \(\overline{K}\) for small \(\varepsilon\) follow from the Cubical KKM Lemma or the Lebesgue Covering Theorem (summarized in the KKM-Lebesgue Theorem) while the current lower bounds on \(K^{\circ}\) and \(\overline{K}\) for larger \(\varepsilon\) given in this paper use very different techniques than those traditionally used to prove the Cubical KKM Lemma and the Lebesgue Covering Theorem.
So there are two very different proof techniques to prove two regimes of the current bounds. It would be particularly satisfying if there was a single technique giving some "nice" lower bound expression \(k^{\circ}(d,\varepsilon)\) for \(K^{\circ}(d,\varepsilon)\) such that

\[k^{\circ}(d,\varepsilon)\geq\max\left\{\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil,\ d+1\right\}\]

so that it supersedes both of the two bounds we have used. This would be nice because the KKM-Lebesgue Theorem would then follow from this result17; equivalently (as shown in Subsection A.1) the Lebesgue Covering Theorem and the Cubical KKM Lemma would follow from this result, and so both the Lebesgue Covering Theorem and the Cubical KKM Lemma could be viewed as special cases of a more general result.

Footnote 17: This would mean that for every \(\varepsilon\), there is a point \(\vec{p}\) where \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) intersects at least \(k^{\circ}(d,\varepsilon)\geq d+1\) colors. Consider a sequence \(\langle\varepsilon^{(n)}\rangle_{n=1}^{\infty}\) such that \(\lim_{n\to\infty}\varepsilon^{(n)}=0\), and for each \(n\), let \(\vec{p}^{(n)}\in[0,1]^{d}\) be a point such that \(B^{\circ}_{\infty}(\varepsilon^{(n)},\vec{p}^{(n)})\) intersects points of at least \(d+1\) colors. By compactness, we may assume that \(\vec{p}^{(n)}\) converges (otherwise we just pick a convergent subsequence). Let \(\vec{p}\) be the point of convergence. Then for any \(\delta\in(0,\infty)\), it holds that \(B^{\circ}_{\infty}(\delta,\vec{p})\) intersects at least \(d+1\) colors because there is some \(N\in\mathbb{N}\) such that for all \(n\geq N\) we have \(\left\|\vec{p}-\vec{p}^{(n)}\right\|_{\infty}<\frac{\delta}{2}\) and \(\varepsilon^{(n)}<\frac{\delta}{2}\), so \(B^{\circ}_{\infty}(\delta,\vec{p})\supseteq B^{\circ}_{\infty}(\varepsilon^{(n)},\vec{p}^{(n)})\), which intersects at least \(d+1\) colors. It is left to show that \(\vec{p}\) is in fact at the closure of some set of at least \(d+1\) colors, and finiteness is critical for this. Consider a sequence \(\langle\delta^{(n)}\rangle_{n=1}^{\infty}\) converging to \(0\) and the sequence \(\langle C^{(n)}\rangle_{n=1}^{\infty}\) where \(C^{(n)}\) is the set of colors intersected by \(B^{\circ}_{\infty}(\delta^{(n)},\vec{p})\). Because only finitely many colors are used by hypothesis of the KKM-Lebesgue Theorem, there are at most a finite number of distinct terms in \(\langle C^{(n)}\rangle_{n=1}^{\infty}\) as each term is a subset of the colors. Thus, there is some set of colors \(C\) which appears infinitely many times in the sequence, and since each \(C^{(n)}\) has cardinality at least \(d+1\), so does \(C\). That is, we have found a sequence of arbitrarily small open balls at \(\vec{p}\) which each intersect all of the (at least \(d+1\)) colors in \(C\), and so \(\vec{p}\) is in the closure of all colors in \(C\).

## 5 Acknowledgements

Pavan's work is partially supported by NSF award 2130536. Vander Woude and Vinodchandran's work is partially supported by NSF award 2130608. Radcliffe's work is partially supported by Simons grant 429383. Part of the work was done when Pavan and Vinodchandran were visiting the Simons Institute for the Theory of Computing. Part of the work was done while Dixon was a postdoc at Ben-Gurion University of the Negev.
2304.04588
Composite Quantum Phases in Non-Hermitian Systems
Non-Hermitian systems have attracted considerable interest in recent years owing to their unique topological properties that are absent in Hermitian systems. While such properties have been thoroughly characterized in free fermion models, they remain an open question for interacting bosonic systems. In this work, we present a precise definition of quantum phases for non-Hermitian systems and propose a new family of phases referred to as composite quantum phases. We demonstrate the existence of these phases in a one-dimensional spin-$1$ system and show their robustness against perturbations through numerical simulations. Furthermore, we investigate the phase diagram of our model, indicating the extensive presence of these new phases in non-Hermitian systems. Our work establishes a new framework for studying and constructing quantum phases in non-Hermitian interacting systems, revealing exciting possibilities beyond the single-particle picture.
Yuchen Guo, Ruohan Shen, Shuo Yang
2023-04-10T13:47:05Z
http://arxiv.org/abs/2304.04588v2
# Composite Quantum Phases in Non-Hermitian Systems

###### Abstract

Non-Hermitian systems have attracted considerable interest in recent years owing to their unique topological properties that are absent in Hermitian systems. While such properties have been thoroughly characterized in free fermion models, they remain an open question for interacting bosonic systems. In this Letter, we present a precise definition of quantum phases for non-Hermitian systems and propose a new family of phases referred to as composite quantum phases. We demonstrate the existence of these phases in a one-dimensional spin-1 system and show their robustness against perturbations through numerical simulations. Furthermore, we investigate the phase diagram of our model, indicating the extensive presence of these new phases in non-Hermitian systems. Our work establishes a new framework for studying and constructing quantum phases in non-Hermitian interacting systems, revealing exciting possibilities beyond the single-particle picture.

_Introduction.--_ Non-Hermitian systems [1; 2], originally proposed as effective theories to describe open systems [3; 4; 5; 6], have received significant attention recently due to their unique properties and phenomena beyond the standard Hermitian formalism [7; 8; 9; 10; 11; 12]. Various studies have focused on the topological properties of free fermion models [13; 14], including the breakdown of the well-known bulk-edge correspondence [15], inspiring a revisit of the relationship between bulk topological invariants and edge states in non-Hermitian systems [16; 17; 18]. The celebrated Altland-Zirnbauer symmetry classification [19] has also been extended from 10 to 38 classes by considering additional sublattice symmetries and pseudo-Hermiticity, revealing a much richer phase diagram in non-Hermitian fermionic systems [20; 21].

Topological quantum phases have been extensively investigated in Hermitian interacting bosonic systems besides free fermion models, with a focus on their well-organized entanglement structure. Such a long-range entanglement pattern is commonly referred to as topological order [22; 23; 24]. In addition, the manifold of symmetric Hamiltonians gives rise to more non-trivial quantum phases, including symmetry-breaking phases [25] and symmetry-protected topological (SPT) phases [26; 27; 28]. However, the classification of such phases in non-Hermitian interacting systems has not been completely understood. It is worth noting that in Hermitian systems, the equivalence between the classification of Hamiltonians and that of states [24] has greatly facilitated the exploration and construction of novel quantum phases. In contrast, in the non-Hermitian regime, such a duality between quantum states and Hamiltonians no longer holds. Therefore, recent studies showing that no new topological ground state can be realized in one-dimensional (1D) interacting non-Hermitian systems [29] do not preclude the possibility of investigating new phases of non-Hermitian Hamiltonians, which remains an open problem.

This Letter proposes a new definition of quantum phases in non-Hermitian systems by starting from the equivalence classes of Hamiltonians. With this definition, we demonstrate a broad range of novel non-Hermitian quantum phases without Hermitian counterparts, which we denote as composite quantum phases.
As an illustration, we employ the non-Hermitian parent Hamiltonian method [30] to construct a system belonging to this type of new phase and numerically confirm its robustness against perturbations using the multi-site infinite time-evolving block decimation (iTEBD) method [30; 31]. Our results suggest that non-Hermitian composite quantum phases are prevalent, as evidenced by the phase diagram of our model, indicating the existence of a vast and unexplored landscape of non-Hermitian topological phases beyond existing free fermion models.

_Quantum phases in Hermitian systems.--_ Quantum phases are defined as the equivalence classes of Hamiltonians. For example, two local and gapped Hamiltonians in Hermitian systems \(H_{0}\) and \(H_{1}\) belong to the same phase iff there exists a set of \(H(g)\) connecting them, i.e., \(H(0)=H_{0}\) and \(H(1)=H_{1}\), such that the expectation value of any local observable for the ground state \(\left\langle O\right\rangle(g)\) is smooth along the path \(g\in[0,1]\). Thanks to perturbation theory in Hermitian systems, the existence of an adiabatic path such that \(H(g)\) is gapped automatically ensures that \(\left\langle O\right\rangle(g)\) is smooth [24]. Nevertheless, whether such an adiabatic path exists is generally hard to determine, inspiring people to study the equivalence classes of ground states rather than Hamiltonians. It is proved by explicit constructions that the above condition is equivalent to the existence of a local unitary (LU) evolution connecting the respective ground states. This equivalent definition enables the classification of quantum phases by analyzing ground state properties, such as the entanglement spectrum (ES) [32; 33] or topological entanglement entropy [22; 23].

_Quantum phases in non-Hermitian systems.--_ We begin by generalizing the definition of quantum phases as equivalence classes of Hamiltonians to non-Hermitian systems.

_Definition 1_.: Two local, line gapped [34], non-Hermitian Hamiltonians \(H_{0}\) and \(H_{1}\) belong to the same quantum phase iff there exists a set of \(H(g)\) connecting them, i.e., \(H(0)=H_{0}\) and \(H(1)=H_{1}\), such that all local observables for the ground states \(\mathrm{Tr}[\rho(g)O]\), where \(\rho(g)=|R(g)\rangle\!\langle L(g)|\) is the density matrix for ground states after normalization \(\mathrm{Tr}[\rho(g)]=1\), are smooth along the path \(g\in[0,1]\).

The first issue we encounter is whether our generalized definition is consistent with the conventional definition of quantum phases for Hermitian Hamiltonians. Specifically, we need to answer the following question. For two Hermitian Hamiltonians \(H_{0}\) and \(H_{1}\) belonging to different quantum phases defined conventionally, can we connect them without phase transitions in the extended manifold of non-Hermitian Hamiltonians? We will show that, if restricted to a special class of systems whose ground states are guaranteed to be short-range correlated and satisfy the entanglement area law, we would not encounter conflicts. It is noteworthy that this is a condition automatically satisfied in Hermitian systems, but not necessarily in a general non-Hermitian system even with a line gap. Besides, due to the breakdown of the Lieb-Robinson bound, quantum phase transitions can occur without gap closing in non-Hermitian systems [11].
In other words, a finite gap along the path can no longer guarantee that two Hamiltonians belong to the same phase, hindering us from constructing an LU evolution on the corresponding ground states and deriving an equivalent definition as in Hermitian systems. Therefore, we need to consider how to provide a classification of non-Hermitian quantum phases via another easily implemented definition. The following theorem answers the above two questions. _Theorem 1_.: For two local, line gapped, non-Hermitian Hamiltonians \(H_{0}\) and \(H_{1}\) whose ground states are short-range correlated and satisfy the entanglement area law, if they belong to the same quantum phase, their left and right ground states can be connected with LU evolutions respectively, i.e., \(|L_{0}\rangle\stackrel{{\mathrm{LU}_{l}}}{{\sim}}|L_{1}\rangle\) and \(|R_{0}\rangle\stackrel{{\mathrm{LU}_{r}}}{{\sim}}|R_{1}\rangle\). Proof.: Consider the adiabatic path \(H(g)\) connecting these two Hamiltonians, i.e., \(H(0)=H_{0}\) and \(H(1)=H_{1}\). The smoothness of \(\mathrm{Tr}[|R(g)\rangle\!\langle L(g)|\,O]\) for any local operator requires all the local reduced density matrices of \(|R(g)\rangle\!\langle L(g)|\) to be smooth, which further means that those of \(|L(g)\rangle\) and \(|R(g)\rangle\) are also smooth. For each \(g\), we can construct Hermitian parent Hamiltonians \(H_{L}(g)\) and \(H_{R}(g)\) for \(|L(g)\rangle\) and \(|R(g)\rangle\) respectively, which are local and gapped since both \(|L(g)\rangle\) and \(|R(g)\rangle\) are short-range correlated and satisfy the entanglement area law [35; 36]. In addition, each term of \(H_{L}(g)\) (\(H_{R}(g)\)), constructed from the local reduced density matrix of \(|L(g)\rangle\) (\(|R(g)\rangle\)), has a smooth dependence on \(g\) since the dimension of the local support space does not change. Therefore, we obtain two adiabatic paths in the manifold of Hermitian Hamiltonians connecting \(|L(0)\rangle\) to \(|L(1)\rangle\) and \(|R(0)\rangle\) to \(|R(1)\rangle\) respectively. Consequently, \(|L(0)\rangle\) and \(|L(1)\rangle\) (\(|R(0)\rangle\) and \(|R(1)\rangle\)) can be connected with LU evolutions following the construction in [24]. We note that the key point in this proof is to derive an LU evolution for each side, which is accomplished by constructing accompanying Hermitian parent Hamiltonians. Therefore, the classification of non-Hermitian quantum phases is given by the direct product of the equivalence classes of the left and right ground states. _New phases in non-Hermitian systems.--_ Following the definition in the previous section, we can propose new quantum phases arising in non-Hermitian systems where the left and right ground states \(|L\rangle\) and \(|R\rangle\) belong to different phases, i.e., they cannot be connected by LU evolution. Hamiltonians in these new phases, which we denote as composite quantum phases, cannot be adiabatically connected to the conventional Hermitian quantum phases. A schematic diagram of composite quantum phases is shown in Fig. 1. In addition, when considering symmetric LU evolution, quantum states with non-trivial symmetry-protected topological (SPT) order also cannot be transformed into product states [26; 27]. We can, therefore, extend the above definition to define and study composite symmetry-protected topological (CSPT) orders in non-Hermitian systems. 
For instance, in the presence of on-site unitary symmetry defined by a finite group \(G\), the classification of SPT phases in \(d\)-dimensional Hermitian systems can be represented as \(\omega\in H^{d+1}(G,\mathbb{C})\) [27; 28]. Consequently, we can use \[\omega_{L}\times\omega_{R}\in H^{d+1}(G,\mathbb{C})\times H^{d+1}(G,\mathbb{C}) \tag{1}\] to label possible CSPT phases in non-Hermitian systems where on-site unitary symmetry \(G\) is imposed. In the presence of symmetries besides on-site unitary ones, such as time reversal (TR) or translational invariance (TI), we can further construct additional CSPT phases from states with SPT order that are protected by these joint symmetries [37; 38].

Figure 1: Schematic diagram of composite quantum phases in non-Hermitian systems. In this case, left and right ground states can belong to different phases (e.g., \(|L\rangle\in\) phase \(A\) while \(|R\rangle\in\) phase \(B\)), resulting in composite quantum phases (e.g., the cloud labeled as \((A,B)\)).

A significant question lies in the existence of such composite phases in the real world, i.e., whether we can construct a non-Hermitian Hamiltonian from given left and right ground states with different orders such that it remains gapped in the thermodynamic limit. In Hermitian systems, the existence is guaranteed by the parent Hamiltonian method [35; 36]. In the following, we adopt the recently proposed non-Hermitian parent Hamiltonian (nH-PH) approach [30] to construct and study composite phases in 1D. In this method, one starts from two different matrix product states (MPS) [35] and constructs a local Hamiltonian such that they serve as the zero-energy modes on each side [39]. Since there is no intrinsic topological order for any injective MPS [24], we focus on two MPS with different SPT orders. _CSPT with \(D_{2h}\) symmetry.--_ We start from 1D quantum states with different SPT orders protected by the \(D_{2h}\) symmetry group, which is a joint symmetry group composed of the dihedral group \(D_{2}\) and the TR transformation \(\mathcal{T}\). Specifically, spin-1 Hermitian systems support four distinct SPT phases protected by \(D_{2h}\) [37], which can be realized analytically by MPS with bond dimension \(D=2\) [39], denoted as \(|\psi_{0}\rangle\), \(|\psi_{x}\rangle\), \(|\psi_{y}\rangle\), and \(|\psi_{z}\rangle\), respectively. Here \(|\psi_{0}\rangle\) corresponds to the conventional Affleck-Kennedy-Lieb-Tasaki (AKLT) state [40]. It is noteworthy that these four states all belong to the Haldane phase if \(D_{2}\) symmetry is the only constraint, while they can be further distinguished by the different commutation relations between \(g\in D_{2}\) and \(\mathcal{T}\) [39]. We proceed to establish CSPT phases in non-Hermitian systems based on these states in the following. We construct a Hermitian parent Hamiltonian \(H_{00}\) to describe the ground state \(|\psi_{0}\rangle\) and a non-Hermitian parent Hamiltonian \(H_{x0}\) from \(|\psi_{0}\rangle\) and \(\langle\psi_{x}|\). We choose \(k=4\) in each construction, i.e., both \(H_{00}\) and \(H_{x0}\) involve 4-site interactions. In addition, both Hamiltonians preserve the \(D_{2h}\) symmetry inherited from \(|\psi_{0}\rangle\) and \(\langle\psi_{x}|\). Thus, \(H_{00}\) and \(H_{x0}\) are expected to belong to different non-Hermitian phases. To investigate the quantum phase transition between \(H_{00}\) and \(H_{x0}\), we consider the path of Hamiltonians given by \(H_{0}(\lambda)=(1-\lambda)H_{00}+\lambda H_{x0}\) for \(\lambda\in[0,1]\). 
We note that \(|\psi_{0}\rangle\) is always a zero-energy eigenstate of \(H_{0}(\lambda)\) for all \(\lambda\) since it is a co-eigenstate of both \(H_{00}\) and \(H_{x0}\) with energy \(E=0\). To compute the left and right ground states \(\langle L|\) and \(|R\rangle\) of \(H_{0}(\lambda)\), we employ the multi-site infinite time-evolving block decimation (iTEBD) method [30] with bond dimension \(D=32\) and unit cell length \(k=4\). We set the time step as \(\Delta\tau=1\times 10^{-2}\), and adopt the convergence criterion \(e=1\times 10^{-12}\), defined as \(e=\sum_{i=1}^{k}\sum_{j=1}^{D}[s_{ij}(\tau+\Delta\tau)-s_{ij}(\tau)]^{2}\), where \(s_{ij}\) denotes the \(j\)-th Schmidt weight for site \(i\) in the unit cell. The resulting entanglement spectra (ES) of \(|L\rangle\) and \(|R\rangle\) are shown in Fig. 2(a)(b). Firstly, it is noted that the ES of \(|R\rangle\) undergoes an abrupt change when the parameter \(\lambda\) approaches the critical value \(\sim 0.6\). For \(\lambda\leq 0.6\), the ES of \(|R\rangle\) coincides with that of \(|\psi_{0}\rangle\). Consequently, \(|\psi_{0}\rangle\) is deemed to be a zero mode as well as the true ground state of the Hamiltonian \(H_{0}(\lambda)\) with associated energy \(E_{0}=0\). However, when \(\lambda\) surpasses 0.6, the ES of \(|R\rangle\) retains its two-fold degeneracy while exhibiting higher-order Schmidt weights. This result indicates that \(|R\rangle\) also possesses non-trivial SPT order [33] despite the fact that \(|\psi_{0}\rangle\) is no longer the ground state. Combined with the ground state energy per-site \(E_{0}/N\) calculated by exact diagonalization (ED) for finite systems with \(N=10\) under periodic boundary conditions (PBC) and by iTEBD for infinite systems, as illustrated in Fig. 2(c), we observe a first-order phase transition occurring at \(\lambda_{c1}\approx 0.608\), where the ground state and first excited state undergo a level crossing. The origin of this phase transition can be attributed to the breakdown of the variational principle in non-Hermitian systems, whereby the sum of the ground state energies of the individual terms in the Hamiltonian (which equals zero in this scenario) cannot be used to provide a lower bound for the spectrum of the entire Hamiltonian. Fig. 2(a)(b) demonstrates that both \(|L\rangle\) and \(|R\rangle\) exhibit non-trivial SPT order throughout the path \(\lambda\in[0,1]\). In order to identify the specific SPT phases to which they belong, and to further classify the non-Hermitian phase diagram of \(H_{0}(\lambda)\), we calculate the index \(\gamma(g_{z})\) for \(|L\rangle\) and \(|R\rangle\) along the path. This index describes the 'projective representation' between the element \(g_{z}\) in the group \(D_{2}\) and the TR operator \(\mathcal{T}\) [41; 38], defined as \[M^{-1}R(g)M=\gamma(g)\overline{R(g)},\quad g\in D_{2} \tag{2}\] where \(R(g)\) and \(M\) are the similarity transformations appearing at the virtual bond after the implementation of \(g\) and \(\mathcal{T}\), respectively.

Figure 2: iTEBD calculation for \(|L\rangle\) and \(|R\rangle\) of \(H_{0}(\lambda)\) with \(D=32\). (a)(b) Entanglement spectrum of \(|L\rangle\) and \(|R\rangle\) respectively. (c) The ground state energy per-site \(E_{0}/N\) of \(H_{0}(\lambda)\), compared with the results calculated from ED with \(N=10\) and PBC. (d) Index \(\gamma(g_{z})\) to describe SPT order protected by \(D_{2h}\) of \(|L\rangle\) and \(|R\rangle\). 
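As a concrete illustration of Eq. (2), the following minimal Python sketch reads off \(\gamma(g)\) from a pair of \(2\times 2\) virtual-bond matrices. The choice \(R(g_{z})=\sigma_{z}\) with \(M=\sigma_{y}\) is an illustrative representation consistent with the quoted AKLT value \(\gamma(g_{z})=-1\); it is our own example, not the matrices extracted from \(|L\rangle\) or \(|R\rangle\) in the simulations.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def gamma_index(R_g, M):
    """Read off gamma(g) from Eq. (2): M^{-1} R(g) M = gamma(g) * conj(R(g)).

    Whenever Eq. (2) holds, the product below is proportional to the
    identity, so the scalar can be taken from the (0, 0) entry."""
    ratio = np.linalg.inv(M) @ R_g @ M @ np.linalg.inv(R_g.conj())
    return ratio[0, 0].real

# sigma_y sigma_z sigma_y = -sigma_z and conj(sigma_z) = sigma_z,
# hence gamma(g_z) = -1, the value quoted below for |psi_0>:
print(gamma_index(sz, sy))
```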
It should be noted that the index differs for \(\left|\psi_{0}\right\rangle\) (\(\gamma(g_{z})=-1\)) and \(\left|\psi_{x}\right\rangle\) (\(\gamma(g_{z})=1\)). Details on the definition of \(M\) and \(R(g)\), as well as the calculation method, can be found in the Supplemental Material [39], while the results are presented in Fig. 2(d). Our analysis shows that \(\gamma(g_{z})=-1\) for \(\left|R\right\rangle\) throughout the path \(\lambda\in[0,1]\), which is the same as that of \(\left|\psi_{0}\right\rangle\). On the other hand, \(\gamma(g_{z})\) of \(\left|L\right\rangle\) transitions from \(-1\) to \(+1\) (the same as that of \(\left|\psi_{x}\right\rangle\)) at \(\lambda_{c2}\approx 0.973\), signaling a phase transition of \(\left|L\right\rangle\) between different SPT phases. As a result, \(H_{0}(\lambda)\) for \(\lambda>\lambda_{c2}\) exhibits a non-trivial CSPT order, where \(\left|L\right\rangle\) and \(\left|R\right\rangle\) belong to different SPT phases protected by \(D_{2h}\). In other words, the topological property of the ground state undergoes a qualitative change at \(\lambda_{c2}\), suggesting a phase transition between the conventional Haldane phase and the newly-discovered CSPT phase without gap closing. This phase transition originates from the breakdown of the well-known Lieb-Robinson bound in non-Hermitian systems [11], where Hermitian perturbation theory is not directly applicable. _CSPT under perturbation and the phase diagram.--_ The robustness of the CSPT phase constructed in the previous section against perturbations that preserve \(D_{2h}\) symmetry is demonstrated in this section. Specifically, we introduce an on-site potential energy term \(U(S^{z})^{2}\) to the Hamiltonian considered previously, i.e., \(H(\lambda,U)=H_{0}(\lambda)+\sum_{i}U(S_{i}^{z})^{2}\). We begin with the case where \(U\ll 1\), and we fix \(\lambda=1\) while gradually increasing \(U\). Our numerical simulations indicate that the constructed CSPT phase, identified by different \(\gamma(g_{z})\) values for \(\left|L\right\rangle\) and \(\left|R\right\rangle\), remains robust for \(U\) up to \(0.1\). This is shown in the rightmost column of the phase diagram depicted in Fig. 3(a), which will be discussed later. Another extreme limit is the on-site potential term dominating the Hamiltonian, i.e., \(U\gg 1\), where both \(\left|L\right\rangle\) and \(\left|R\right\rangle\) can be adiabatically connected to product states. This indicates that the Hamiltonian \(H(\lambda,U)\) belongs to a trivial symmetric phase for large values of \(U\). To visually represent these three quantum phases in a phase diagram, we introduce the conventional index that distinguishes the Haldane phase from the trivial symmetric phase [27], which is defined as follows \[R(g_{z})^{-1}R(g_{x})R(g_{z})=\omega R(g_{x}), \tag{3}\] where \(R(g)\) forms a projective representation of the \(D_{2}\) group. The trivial symmetric state corresponds to \(\omega=1\), while all four SPT phases protected by \(D_{2h}\) have \(\omega=-1\). Therefore, we can use \(\omega\), \(\gamma(g_{z})_{\left|L\right\rangle}\), \(\gamma(g_{z})_{\left|R\right\rangle}\) as a joint indicator to identify the three phases. In Fig. 3(a), we use the three RGB channels to display these three indices, with the index values rescaled from \([-1,+1]\) to \([0.4,0.8]\), i.e., \(\left[R,G,B\right]=\left[\gamma(g_{z})_{\left|L\right\rangle},\gamma(g_{z})_{ \left|R\right\rangle},\omega\right]/5+0.6\). 
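The second index, \(\omega\) of Eq. (3), and the RGB rescaling can be sketched in the same spirit; the Pauli matrices below form a projective representation of \(D_{2}\) used purely for illustration, and the phase assignments in the comments follow the joint-indicator convention above (the trivial-phase index values are our assumption).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def omega_index(R_gx, R_gz):
    """Read off omega from Eq. (3): R(g_z)^{-1} R(g_x) R(g_z) = omega * R(g_x)."""
    ratio = np.linalg.inv(R_gz) @ R_gx @ R_gz @ np.linalg.inv(R_gx)
    return ratio[0, 0].real  # proportional to the identity when Eq. (3) holds

def phase_rgb(gamma_L, gamma_R, omega):
    """Rescale the three indices from [-1, +1] to RGB values in [0.4, 0.8]."""
    return tuple(v / 5 + 0.6 for v in (gamma_L, gamma_R, omega))

print(omega_index(sx, sz))                # -1: projective (SPT-like) rep
print(omega_index(np.eye(2), np.eye(2)))  # +1: trivial (linear) rep
print(phase_rgb(+1, -1, -1))              # colour of the CSPT region
print(phase_rgb(+1, +1, +1))              # trivial phase, assuming all +1
```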
Meanwhile, the residual error \(e\) of iTEBD after 50000 steps of iteration is shown in Fig. 3(b). Only gapped quantum phases with convergent ES were considered, while the non-convergent region is attributed to the gapless nature of the systems. The phase diagram clearly demonstrates the existence of the three gapped quantum phases of \(H(\lambda,U)\) discussed above. Notably, the range of \(\lambda\) for the existence of CSPT is observed to increase with increasing \(U\), indicating that the newly-established composite quantum phases can exist extensively in non-Hermitian systems. Another noteworthy observation in the phase diagram is the absence of a direct phase transition between the trivial symmetric phase and the CSPT phase. Further investigation is required to determine whether such a phase transition can exist and the underlying reasons.

Figure 3: Phase diagram of \(H(\lambda,U)\). (a) RGB values assigned as \(\left[\gamma(g_{z})_{\left|L\right\rangle},\gamma(g_{z})_{\left|R\right\rangle}, \omega\right]/5+0.6\) respectively. (b) Residual error after 50000 steps of iteration in the iTEBD method.

_Conclusions.--_ In this Letter, we clarify the definition of quantum phases and quantum phase transitions in non-Hermitian systems. Specifically, we prove that if two local, line gapped, non-Hermitian Hamiltonians belong to the same quantum phase, their left and right ground states can be adiabatically connected respectively. This holds true provided that the ground state manifold is short-range correlated and satisfies the entanglement area law. Based on this definition, we propose a novel class of quantum phases in non-Hermitian systems, denoted as composite quantum phases, whose left and right ground states belong to different phases. Furthermore, this definition can be extended to define composite symmetry-protected topological (CSPT) order subject to an additional symmetry restriction. The recently proposed parent Hamiltonian method for non-Hermitian systems has enabled us to construct a system that realizes CSPT phases protected by the \(D_{2h}\) symmetry group. Through numerical verification using the iTEBD algorithm, we demonstrate the existence of this type of new phase and investigate the phase diagram after introducing an on-site potential term that preserves the symmetry. Our results show that the CSPT phase is not only robust against symmetric perturbations but also occupies a substantial region of our phase diagram. This study provides a new perspective for the systematic understanding, classification, and construction of novel quantum phases in non-Hermitian systems. Moreover, these composite quantum phases lack Hermitian counterparts, suggesting a vast field for exploration in non-Hermitian many-body physics. _Acknowledgments.--_ This work is supported by the National Natural Science Foundation of China (NSFC) (Grant No. 12174214 and No. 92065205), the National Key R&D Program of China (Grant No. 2018YFA0306504), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302100), and the Tsinghua University Initiative Scientific Research Program.
2309.02105
Improving Query-Focused Meeting Summarization with Query-Relevant Knowledge
Query-Focused Meeting Summarization (QFMS) aims to generate a summary of a given meeting transcript conditioned upon a query. The main challenges for QFMS are the long input text length and sparse query-relevant information in the meeting transcript. In this paper, we propose a knowledge-enhanced two-stage framework called Knowledge-Aware Summarizer (KAS) to tackle the challenges. In the first stage, we introduce knowledge-aware scores to improve the query-relevant segment extraction. In the second stage, we incorporate query-relevant knowledge in the summary generation. Experimental results on the QMSum dataset show that our approach achieves state-of-the-art performance. Further analysis proves the competency of our methods in generating relevant and faithful summaries.
Tiezheng Yu, Ziwei Ji, Pascale Fung
2023-09-05T10:26:02Z
http://arxiv.org/abs/2309.02105v1
# Improving Query-Focused Meeting Summarization with Query-Relevant Knowledge ###### Abstract Query-Focused Meeting Summarization (QFMS) aims to generate a summary of a given meeting transcript conditioned upon a query. The main challenges for QFMS are the long input text length and sparse query-relevant information in the meeting transcript. In this paper, we propose a knowledge-enhanced two-stage framework called Knowledge-Aware Summarizer (KAS) to tackle the challenges. In the first stage, we introduce knowledge-aware scores to improve the query-relevant segment extraction. In the second stage, we incorporate query-relevant knowledge in the summary generation. Experimental results on the QMSum dataset show that our approach achieves state-of-the-art performance. Further analysis proves the competency of our methods in generating relevant and faithful summaries. ## 1 Introduction Meetings are an essential part of human collaboration and communication. Especially in recent years, the outbreak of Covid-19 has led people to meet online, where most meetings are automatically recorded and transcribed. Query-Focused Meeting Summarization (QFMS) (Zhong et al., 2021) aims to summarize a given meeting transcript conditioned upon a query, which helps people efficiently catch up on the specific part of the meeting they want to know about. There are two main challenges for QFMS: Firstly, meeting transcripts can be so long that current deep learning models cannot encode them at once. Even for models (Beltagy et al., 2020; Xiong et al., 2022) that accept long text input, the computational cost is enormous. Secondly, query-relevant content is sparsely scattered in the meeting transcripts, meaning a significant part of the transcripts is noisy information when given a particular query. Therefore, the models need to effectively reduce the impact of this noisy information. In this paper, we focus on the two-stage framework, extracting query-relevant segments from the meeting transcripts and then generating the summary based on the selected content. Compared to the end-to-end approaches (Zhu et al., 2020; Pagnoni et al., 2022) that directly encode the entire meeting transcripts, the two-stage framework is better at maintaining computational efficiency and easier to scale up to longer inputs. Specifically, we propose the Knowledge-Aware Summarizer (KAS), which incorporates query-relevant knowledge in both stages. In the first stage, we extract knowledge triples from the text segments and introduce knowledge-aware scores to improve segment ranking. In the second stage, the extracted query-relevant knowledge triples are utilized as extra input for the summary generation. We conduct experiments on a QFMS dataset named QMSum (Zhong et al., 2021) and achieve state-of-the-art performance. We further investigate how the number of extracted segments affects the final performance. In addition, we manually evaluate the generation quality regarding fluency, relevance and factual correctness.

Figure 1: An example of a query-relevant knowledge triple extracted from meeting transcripts. The knowledge triple can be used in query-relevant segment extraction as well as summary generation.

Our contributions in this work are threefold: (1) Our work demonstrates the effectiveness of leveraging query-relevant knowledge in QFMS. (2) We propose KAS, a two-stage framework incorporating query-relevant knowledge for QFMS. (3) Experimental results show that our approach achieves state-of-the-art performance on a QFMS dataset (QMSum). 
Further analysis and human evaluation indicate the advantage of our method. ## 2 Related Works Existing QFMS methods can be divided into two categories (Vig et al., 2022): two-stage approaches and end-to-end approaches. Two-stage approaches (Zhong et al., 2021; Vig et al., 2022; Zhang et al., 2022) first extract query-relevant snippets and then generate the summary upon the extracted snippets, while end-to-end approaches (Zhu et al., 2020; Zhong et al., 2022; Pagnoni et al., 2022) directly generate summaries from the whole meeting transcripts. However, both types of methods have their disadvantages. For example, most of the two-stage approaches select query-relevant content utterance by utterance, which ignores the contextual information between multiple utterances. As for the end-to-end approaches, the computational and memory requirements increase rapidly when the input text becomes longer, making it challenging for models to adapt to long meeting transcripts. To our knowledge, we are the first to incorporate query-relevant knowledge in QFMS and demonstrate its effectiveness. Besides, we include multiple utterances in each segment to preserve the contextual information, and our approach can easily extend to processing long input text. ## 3 Methodology This section presents our two-stage framework. Firstly, we introduce the extractor, which extracts query-relevant segments and knowledge from the source document. Then, a generative model synthesizes the query, extracted segments, and knowledge into the final summary.

Figure 2: An overview of our proposed framework. We first extract query-relevant knowledge triples based on the query and meeting transcripts and select top-\(k\) segments through knowledge-aware ranking. Then we generate the query-focused meeting summary by FiD-BART from the query, knowledge and selected meeting transcripts.

### Knowledge-aware Extractor Meeting Transcripts Segmentation. To preserve the contextual information between utterances, we split the meeting transcripts into segments, and each segment can contain multiple utterances. To do so, given an input meeting transcript \(T\), we separate it into \(n\) segments \(S=\{S_{1},S_{2},...,S_{n}\}\) where each \(S_{i}\) contains fewer than \(l\) tokens. Specifically, we feed the meeting transcript into the current segment utterance by utterance until it reaches \(l\) tokens. Knowledge-aware Ranking. The knowledge-aware ranking approach selects the top-\(k\) segments according to a combination of semantic search scores and knowledge-aware scores. We apply Multi-QA MPNet (Song et al., 2020) to calculate semantic search scores. Multi-QA MPNet is trained on 215M question-answer pairs from various sources and domains, including Stack Exchange, MS MARCO (Nguyen et al., 2016), WikiAnswers (Fader et al., 2014) and many more. Given the query and the segments, the model outputs 768-dimensional vectors to represent them. The semantic search score for each segment is computed according to the cosine similarity: \[\small Score_{se}=\frac{MPNet(Q)\cdot MPNet(S_{i})}{\|MPNet(Q)\|\,\|MPNet(S_{i}) \|} \tag{1}\] To compute the knowledge-aware score for each segment, we first use OpenIE (Angeli et al., 2015) to extract knowledge triples. Then, to filter out the triples irrelevant to the query, we only keep the triples that contain overlapping words with the query. 
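A minimal sketch of the two preprocessing steps just described (greedy utterance packing and overlap-based triple filtering) is given below; the whitespace tokenizer is a stand-in of ours for the model's actual subword tokenizer.

```python
def segment_transcript(utterances, max_len=512):
    """Greedily pack utterances into segments of at most max_len tokens."""
    segments, current, current_len = [], [], 0
    for utt in utterances:
        n_tokens = len(utt.split())  # placeholder tokenizer
        if current and current_len + n_tokens > max_len:
            segments.append(" ".join(current))
            current, current_len = [], 0
        current.append(utt)
        current_len += n_tokens
    if current:
        segments.append(" ".join(current))
    return segments

def filter_triples(triples, query):
    """Keep (subject, relation, object) triples sharing a word with the query."""
    query_words = set(query.lower().split())
    return [t for t in triples
            if set(" ".join(t).lower().split()) & query_words]
```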
The knowledge-aware score is obtained by L2 normalizing the number of the remaining triples \(m_{i}\) in each segment: \[Score_{ka}=\frac{m_{i}}{\sqrt{\sum_{i=1}^{n}|m_{i}|^{2}}} \tag{2}\] Finally, we calculate the ranking score by summing the semantic search score and the knowledge-aware score: \[Score_{rank}=Score_{se}+Score_{ka} \tag{3}\] The segments with the top-\(k\) ranking scores will be selected for the next stage of summary generation. We denote the remaining segments as \(S=\{S_{1},S_{2},...,S_{k}\}\). ### Generator We choose BART-large (Lewis et al., 2020), a transformer-based (Vaswani et al., 2017) generative pre-trained language model, as our backbone model for the generator because of its remarkable performance on text summarization benchmarks. Following the idea of Fusion-in-Decoder (FiD) and its applications in generation tasks (Izacard and Grave, 2021; Su et al., 2022; Vig et al., 2022), we employ FiD-BART to encode multiple segments independently in the encoder and fuse information from all segments jointly in the decoder through the encoder-decoder attention. To incorporate the extracted knowledge in the summary generation, we use the knowledge as extra input alongside the query and segments. In detail, we remove the stop words in the knowledge triples and then merge all the remaining words of the knowledge triples in each segment into a set of knowledge phrases. The concatenation of each segment \(S_{i}\), knowledge phrases \(K_{i}\) and the query \(Q\) will be processed by the FiD-BART encoder: \[h^{i}_{enc}=Encoder(Q\oplus K_{i}\oplus S_{i}) \tag{4}\] Finally, the decoder performs encoder-decoder attention over the concatenation of all segments' encoder outputs. In this way, the computational complexity grows linearly with the number of segments rather than quadratically, while jointly processing all segments in the decoder enables the model to aggregate information from multiple segments. ## 4 Experiments ### Experimental Setup We choose the top 12 text segments in the segment selection, each containing fewer than 512 tokens. For the summary generator, inspired by the effectiveness of pre-finetuning models on relevant datasets to transfer task-related knowledge in abstractive summarization (Yu et al., 2021; Vig et al., 2022), we initialize our BART-large model using the checkpoint 1 pre-finetuned on WikiSum (Liu et al., 2018). See Appendices A and B for more details of the experimental setup and baselines. 
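Before turning to the results, the first-stage scoring of Eqs. (1)-(3) can be summarised in a few lines of numpy; the embeddings and triple counts are assumed to be precomputed (e.g. with Multi-QA MPNet and OpenIE, respectively).

```python
import numpy as np

def rank_segments(query_vec, seg_vecs, triple_counts, k=12):
    """Knowledge-aware ranking, Eqs. (1)-(3): indices of the top-k segments.

    query_vec: (d,) embedding of the query
    seg_vecs:  (n, d) embeddings of the n segments
    triple_counts: (n,) number m_i of query-relevant triples per segment
    """
    seg_vecs = np.asarray(seg_vecs, dtype=float)
    m = np.asarray(triple_counts, dtype=float)
    # Eq. (1): cosine similarity between the query and each segment
    score_se = seg_vecs @ query_vec / (
        np.linalg.norm(seg_vecs, axis=1) * np.linalg.norm(query_vec))
    # Eq. (2): L2-normalised counts of query-relevant triples
    norm = np.linalg.norm(m)
    score_ka = m / norm if norm > 0 else m
    # Eq. (3): final ranking score
    score = score_se + score_ka
    return np.argsort(score)[::-1][:k]
```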
Footnote 1: [https://github.com/salesforce/query-focused-sum](https://github.com/salesforce/query-focused-sum) We evaluate the models on the QMSum (Zhong et al., 2021) dataset, which consists of 1,808 query-summary pairs over 232 meetings from product design, academic, and political committee domains. We report ROUGE (Lin, 2004) scores as the automatic evaluation results. Footnote 2: [https://github.com/pltrdy/files2rouge](https://github.com/pltrdy/files2rouge)

\begin{table} \begin{tabular}{l c c c c} \hline \hline Model & ROUGE-1 & ROUGE-2 & ROUGE-L & Average \\ \hline \multicolumn{5}{c}{_Two-stage_} \\ MaRGE (Xu and Lapata, 2021) & 31.99 & 8.97 & 27.93 & 22.96 \\ DYLE (Mao et al., 2022) & 34.42 & 9.71 & 30.10 & 24.74 \\ SUMM\({}^{N}\) (Zhang et al., 2022) & 34.03 & 9.28 & 29.48 & 24.26 \\ RelReg (Vig et al., 2022) & 34.91 & 11.91 & 30.73 & 25.85 \\ RelReg-W (Vig et al., 2022) & 36.45 & 12.81 & 32.28 & 27.18 \\ \hline \multicolumn{5}{c}{_End-to-end_} \\ LED (Beltagy et al., 2020) & 31.60 & 7.80 & 20.50 & 19.97 \\ DialogLM (Zhong et al., 2022) & 34.50 & 9.92 & 30.27 & 24.90 \\ SegEnc-W (Vig et al., 2022) & 37.80 & 13.43 & 33.38 & 28.20 \\ BART-LS (Xiong et al., 2022) & 37.90 & 12.10 & 33.10 & 27.70 \\ SOCRATIC (Pagnoni et al., 2022) & 38.06 & 13.74 & 33.51 & 28.44 \\ \hline KAS (Ours) & **38.80** & **14.01** & **34.26** & **29.02** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on the QMSum test set. The previous works can be divided into two-stage and end-to-end approaches.

### Main Results We compare our proposed model with strong baselines and previous state-of-the-art models. As shown in Table 1, our approach outperforms both the two-stage and end-to-end methods by a large margin on all evaluation metrics. We also investigate how the number of selected segments in the first stage affects the final performance. As shown in Table 2, when the number of selected segments grows, the quality of the final summary also improves. Previous end-to-end approaches usually outperform two-stage approaches because they encode the entire meeting transcripts, which consumes substantial computing resources. Our approach, which encodes 12 segments of input (6,144 tokens) in the second-stage generation, performs better than the state-of-the-art end-to-end approach that directly encodes 16,384 tokens. Therefore, we strike a balance between performance and efficiency, which is essential in real-world applications. Besides, our approach can easily extend to processing long input text since the ranking method is unsupervised. ### Ablation Study We conduct an ablation study to investigate the contribution of the knowledge-aware modules by removing the knowledge-based scoring in the ranking (w/o KA in ranking) and the knowledge input in the summary generation (w/o KA in generator), respectively. Besides, we evaluate the entity-level factual consistency (Nan et al., 2021) of the summary to test the effectiveness of our knowledge-aware modules in keeping the knowledge entities in generated summaries. Specifically, we report the F-1 scores of the entity overlap (Entity F-1) between the source and the generated summary. Table 3 shows that the model's performance decreases on both metrics, especially Entity F-1, without the knowledge-aware modules, which indicates the effectiveness of our method. ### Human Evaluation We further conduct a human evaluation to assess the models on fluency, relevance, and factual correctness. 
We randomly select 50 samples from the QMSum test set and ask three annotators to score the summary from one to five, with higher scores being better. We compare against SegEnc-W because it is the best publicly available model. Table 4 shows that our approach slightly outperforms SegEnc-W on fluency and relevance and achieves a significantly better score than SegEnc-W on factual correctness with p-value\(<\)0.05. Since SegEnc-W and our method both use BART-based generators, it is in line with our expectations that they reach similar fluency and relevance scores. Meanwhile, our proposed query-relevant knowledge can help reduce hallucination and generate more factually correct summaries. More details of the human evaluation setup are in Appendix C. ## 5 Conclusion In this paper, we propose a two-stage framework named KAS that incorporates query-relevant knowledge in both stages. Extensive experimental results on the QMSum dataset show the effectiveness of our method. We further conduct detailed analysis and human evaluation to prove our method's capacity to generate fluent, relevant and faithful summaries.

\begin{table} \begin{tabular}{l c c c} \hline \hline model & Fluency & Relevance & Factual Correctness \\ \hline SegEnc-W & 4.68 & 4.60 & 3.76 \\ KAS & 4.79 & 4.68 & 4.47 \\ Ground truth & **4.92** & **4.82** & **4.60** \\ \hline \hline \end{tabular} \end{table} Table 4: Human evaluation results on the QMSum test set.

\begin{table} \begin{tabular}{l c c c} \hline \hline model & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \hline KAS (\(k\)=4) & 36.46 & 12.06 & 31.75 \\ KAS (\(k\)=8) & 37.86 & 13.75 & 33.30 \\ KAS (\(k\)=12) & **38.80** & **14.01** & **34.26** \\ \hline \hline \end{tabular} \end{table} Table 2: Results of our KAS on the QMSum test set with different values of \(k\). \(k\) denotes the number of segments that are selected in the first stage.

\begin{table} \begin{tabular}{l c c c c} \hline \hline model & R-1 & R-2 & R-L & Entity F-1 \\ \hline KAS & **38.80** & **14.01** & **34.26** & **35.59** \\ w/o KA in ranking & 37.71 & 12.96 & 32.94 & 34.03 \\ w/o KA in generator & 37.52 & 13.57 & 33.29 & 35.06 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation results on the QMSum test set without the Knowledge-Aware (KA) modules. Our KA modules improve the performance of KAS in both stages.

## 6 Limitations Our method is trained and tested on the only publicly available QFMS dataset, named QMSum. QMSum consists of three different domains (Academic, Committee and Product), which makes the evaluation robust. However, QMSum only contains 1,808 samples, which is relatively small. We hope larger QFMS datasets will be proposed to accelerate the development of this field. In the first stage of our approach, we extract a fixed length (6,144 tokens) of the meeting transcripts as the second-stage input text. Therefore, the model's performance could be affected since some query-relevant information could be cut off in the first stage. ## 7 Ethics Statement Although our approach can generate query-focused meeting summaries and achieve a much better factual correctness score than other models, we cannot entirely avoid generating hallucinated content from the generative models. We recommend that when our approach is deployed in real-world applications, additional post-processing should be carried out to remove unreliable summaries. In addition, we will indicate that the summary we provide is for reference only, and users need to check the original meeting transcripts to get accurate information.
2303.03268
Single Qubit Error Mitigation by Simulating Non-Markovian Dynamics
Quantum simulation is a powerful tool to study the properties of quantum systems. The dynamics of open quantum systems are often described by Completely Positive (CP) maps, for which several quantum simulation schemes exist. We present a simulation scheme for open qubit dynamics described by a larger class of maps: the general dynamical maps which are linear, hermitian preserving and trace preserving but not necessarily positivity preserving. The latter suggests an underlying system-reservoir model where both are entangled and thus non-Markovian qubit dynamics. Such maps also come about as the inverse of CP maps. We illustrate our simulation scheme on an IBM quantum processor by showing that we can recover the initial state of a Lindblad evolution. This paves the way for a novel form of quantum error mitigation. Our scheme only requires one ancilla qubit as an overhead and a small number of one and two qubit gates.
Mirko Rossini, Dominik Maile, Joachim Ankerhold, Brecht I. C Donvil
2023-03-06T16:35:44Z
http://arxiv.org/abs/2303.03268v1
# Single Qubit Error Mitigation by Simulating Non-Markovian Dynamics ###### Abstract Quantum simulation is a powerful tool to study the properties of quantum systems. The dynamics of open quantum systems are often described by Completely Positive (CP) maps, for which several quantum simulation schemes exist. We present a simulation scheme for open qubit dynamics described by a larger class of maps: the general dynamical maps which are linear, Hermiticity preserving and trace preserving but not necessarily positivity preserving. The latter suggests an underlying system-reservoir model in which the two are entangled, and thus non-Markovian qubit dynamics. Such maps also come about as the inverse of CP maps. We illustrate our simulation scheme on an IBM quantum processor by showing that we can recover the initial state of a Lindblad evolution. This paves the way for a novel form of quantum error mitigation. Our scheme only requires one ancilla qubit as an overhead and a small number of one and two qubit gates. pacs: 03.65.Yz, 42.50.Lc _Introduction.-_ Quantum computing has created a computational paradigm that may lead to the development of new and powerful solutions to computational tasks. A prominent application of digital quantum computers is their ability to simulate other quantum systems, already on the level of noisy intermediate scale (NISQ) quantum platforms [1]. Although there already exists a wide range of quantum simulation methods for closed quantum systems, see e.g. [2; 3; 4; 5], the simulation of an open quantum system is a more arduous task. Since it is often not possible to simulate the complete system plus environment dynamics, simulation methods mainly focus on realizing the reduced effective dynamics of the open quantum system. To find reduced descriptions for the evolution of open quantum systems, one typically assumes that system and environment are initially in a product state. In this case, the evolution is guaranteed to be described by a Completely Positive (CP) map [6] acting on the initial system state, which can be obtained by resorting to numerical methods (such as [7; 8; 9; 10; 11]) or perturbative schemes [12]. A CP map is said to be CP-divisible if it can be divided into arbitrarily small parts that are themselves CP. In this case, the evolution of the system can be described by the Lindblad-Gorini-Kossakowski-Sudarshan equation [13; 14]. When a significant amount of entanglement accumulates between the system and the environment, the use of CP maps no longer makes sense [15]. Quantum simulation methods for Lindblad-like dynamics have been extensively studied [16; 17; 18; 19; 20; 21] and experimentally implemented [22]. Recently the authors of [23] simulated Lindbladian dynamics on an IBM quantum processor by exploiting error mitigation. An efficient simulation scheme for any CP qubit map was developed by [24], based on the results of [25; 26]. The scheme by [24] uses a single ancillary qubit, single qubit rotations and CNOT gates and was experimentally realised on a superconducting circuit [27]. In this Letter we propose a novel simulation scheme for general qubit dynamical maps going beyond CP divisible maps [24; 25; 26]. General dynamical maps are trace-preserving and Hermiticity-preserving, but not necessarily positivity-preserving. They naturally arise as the inverse of CP maps, and thus the ability to simulate them allows one to revert noise effects on the system. 
In Fig. 1 we recover several initial states of a qubit disturbed by a thermal Lindblad equation on an IBM quantum processor. General dynamical maps also arise in the context of general time-local master equations in which the initial state is _not_ assumed to be in a product state, but has already created some system-bath entanglement.

Figure 1: (Top) Reversing the direction of time of a dissipative system brings it back to its initial state. (Bottom) We recover the initial state of a Lindblad evolution (6) by simulating its time-reversed evolution \(\frac{d}{dt}\rho_{t}=-\mathcal{L}_{t}(\rho_{t})\), for different initial states. For \(t\leq t^{*}\) the points show the forward evolution simulated on an IBM quantum computer, for \(t>t^{*}\) the backward evolution. The lines are the numerical integration of the master equations (6) and their time-reversed versions. \(\mathcal{F}\) gives the fidelity between the final recovered state and the initial state. The parameters of the master equations are \(\beta=\omega=\gamma=1\).

Their derivation is based on tracing out environmental degrees of freedom in Gaussian boson [28; 29], fermion [30; 31] or other exactly solvable models [32], or via time convolutionless perturbation theory methods [12; 33]. For our proposed scheme, we exploit the fact that general dynamical maps acting on finite dimensional systems can always be decomposed as the difference of two CP maps [34; 35]. We demonstrate that the decomposition can be brought into a suitable form for quantum simulations, namely, as a weighted difference of two completely positive trace-preserving (CPTP) maps. Another, more general avenue for the simulation of open quantum systems on a quantum computer is given by collision methods [36; 37]. Here the open quantum system repeatedly interacts or "collides" with ancillary systems. Non-Markovian dynamics can be implemented by allowing the ancillas to interact amongst themselves in between system-ancilla collisions [38; 39; 40]. Such models were successfully implemented on IBM quantum processors [41]. Other methods for quantum simulation of open quantum systems have been proposed in the context of non-equilibrium systems [42] and quantum thermodynamics [43]. In contrast to collision-based methods such as [38; 39; 40], the main advantage of the algorithm we present here is its problem-agnostic applicability. No specific design is required for each dynamics one aims to simulate. In fact, we are able to simulate the dynamics of a given system from any point in time to any later time, whereas collision-based methods require the protocol to always start from the initial time of the evolution and rely on the Trotterization of the dynamics. As an additional benefit, our proposed scheme is resource efficient as the computational overhead does not grow with the simulated time. We illustrate these features by implementing two paradigmatic examples on IBM quantum processors which demonstrate the ability to simulate the time evolution from any intermediate point in time, even when the evolution map is not CP, and the ability to recover the initial state of a Lindbladian evolution [44], see Fig. 1. This last example shows that one of the promising applications of the new scheme is to perform error mitigation on a NISQ platform by recovering the typically unknown undisturbed initial state. 
_Theoretical framework.-_ A finite dimensional linear map \(\Lambda\) is CP if and only if its action on a state \(\rho\) can be written in terms of a set of matrices \(\{K_{j}\}_{j}\), often referred to as Kraus operators: \(\Lambda(\rho)=\sum_{j}K_{j}\rho K_{j}^{\dagger}\). The map is trace preserving iff \(\sum_{j}K_{j}^{\dagger}K_{j}=\mathbb{I}\), where \(\mathbb{I}\) is the identity on the appropriate Hilbert space, see e.g. [6; 12]. The results of [25; 26] prove that any CPTP qubit map \(\Lambda\) is equal to the convex sum of two extremal CP maps \(\Lambda_{1}\) and \(\Lambda_{2}\) that are both realised by a pair of Kraus operators. Concretely, they showed that for every CPTP qubit map \(\Lambda\) there exist two pairs of unitaries \(U_{j}\), \(V_{j}\) and two pairs of Kraus operators \(F_{i,j}\) (with \(i,j\in\{1,2\}\)) defining the extremal maps \[\Lambda_{j}(\rho)=U_{j}\left(\sum_{i=1}^{2}F_{i,j}(V_{j}\rho V_{j}^{\dagger}) F_{i,j}^{\dagger}\right)U_{j}^{\dagger} \tag{1}\] such that \[\Lambda(\rho)=\frac{1}{2}\Lambda_{1}(\rho)+\frac{1}{2}\Lambda_{2}(\rho). \tag{2}\] The authors of [24] devised a simple circuit shown in Fig. 2(b) to realise the action of the \(\Lambda_{j}\) using just one ancillary qubit and CNOT gates. General dynamical maps are linear, trace-preserving and Hermiticity-preserving but not necessarily positivity preserving. For finite dimensional systems such maps can always be written as the difference of two CP maps [35] \[\Sigma(\rho)=\Lambda_{+}(\rho)-\Lambda_{-}(\rho)=\sum_{j}K_{j} \rho K_{j}^{\dagger}-\sum_{j}M_{j}\rho M_{j}^{\dagger}. \tag{3}\] The map \(\Sigma\) is trace preserving under the condition \(\sum_{j}K_{j}^{\dagger}K_{j}-\sum_{j}M_{j}^{\dagger}M_{j}=\mathbb{I}\). Since \(\Lambda_{\pm}\) are bounded, there exists a positive number \(p\) such that \(\sum_{j}M_{j}^{\dagger}M_{j}\leq p\,\mathbb{I}\). We define the positive semi-definite operator \(D=\sqrt{p\,\mathbb{I}-\sum_{j}M_{j}^{\dagger}M_{j}}\) and write \[\Sigma(\rho)=(1+p)\Lambda_{+}^{*}(\rho)-p\Lambda_{-}^{*}(\rho), \tag{4}\] where \(\Lambda_{+}^{*}(\rho)=\frac{\sum_{j}K_{j}\rho K_{j}^{\dagger}+D\rho D^{\dagger}}{1+p}\) and \(\Lambda_{-}^{*}(\rho)=\frac{\sum_{j}M_{j}\rho M_{j}^{\dagger}+D\rho D^{\dagger}}{p}\). It is straightforward to check that both maps \(\Lambda_{\pm}^{*}\) are trace preserving and CP. The above equation is our first main result. It shows that any general dynamical map \(\Sigma\) can be decomposed as the weighted difference of two CPTP maps. The above decomposition is an alternative to the decomposition in terms of CPTP maps by [45], which involves applying positive, non-unitary transformations to the quantum state. This makes our decomposition better suited to simulating the action of \(\Sigma\) on an experimental platform. _Algorithm and circuit schemes.-_ The simulation scheme we propose for general dynamical maps is illustrated in Fig. 2(a). First, a classical random number generator is used to choose the branch \(\Lambda_{+}^{*}\) or \(\Lambda_{-}^{*}\) corresponding to equation (4), with probabilities \(\frac{1+p}{1+2p}\) and \(\frac{p}{1+2p}\), respectively. Then one of the two extremal maps in (2) is selected with probability \(1/2\) and realized by the circuit representation for the extremal maps of [24] shown in Fig. 2(b). Next, a measurement of an observable is performed and the outcomes within the plus and minus branch are summed. Finally, the measurement result is rescaled by \(1+2p\) to restore normalization, and the results of both branches are subtracted. 
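The decomposition (4) is easy to verify numerically. The sketch below builds \(\Lambda_{\pm}^{*}\) from given Kraus lists and uses them to undo a single-qubit depolarizing channel; the depolarizing example and its parametrisation are our own worked illustration, not one taken from the Letter.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def apply_kraus(kraus, rho):
    return sum(A @ rho @ A.conj().T for A in kraus)

def decompose(K_list, M_list):
    """Split Sigma = sum_j K.K^+ - sum_j M.M^+ as (1+p) L+* - p L-*, Eq. (4)."""
    S = sum(M.conj().T @ M for M in M_list)
    p = max(np.linalg.eigvalsh(S))            # smallest p with S <= p * I
    w, V = np.linalg.eigh(p * I2 - S)         # D = sqrt(p*I - S) >= 0
    D = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T
    Lp = lambda r: (apply_kraus(K_list, r) + D @ r @ D.conj().T) / (1 + p)
    Lm = lambda r: (apply_kraus(M_list, r) + D @ r @ D.conj().T) / p
    return p, Lp, Lm

# Inverse of the depolarizing channel Phi(rho) = (1-q) rho + (q/3) sum s rho s,
# written as Sigma(rho) = a rho - b sum_i s_i rho s_i with a - 3b = 1 (TP):
q = 0.2
b = q / (3 - 4 * q)
a = 1 + 3 * b
p, Lp, Lm = decompose([np.sqrt(a) * I2], [np.sqrt(b) * s for s in PAULIS])

rho0 = np.array([[0.7, 0.3j], [-0.3j, 0.3]])   # some qubit state
noisy = (1 - q) * rho0 + (q / 3) * apply_kraus(PAULIS, rho0)
recovered = (1 + p) * Lp(noisy) - p * Lm(noisy)
print(np.allclose(recovered, rho0))            # True: noise reverted
```

Sampling the two branches with probabilities \(\frac{1+p}{1+2p}\) and \(\frac{p}{1+2p}\) and subtracting the rescaled outcomes, as in Fig. 2(a), estimates the same quantity as the deterministic difference computed here.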
The scheme depicted in Fig. 2 can straightforwardly be implemented on a quantum computational platform. Particularly, we use the Ehningen IBM quantum device. Generating the dynamics of a system using the algorithm described above requires eight different quantum circuits as shown in Fig. 2(b). Every circuit requires single-qubit unitary gates \(U_{j}\), \(V_{j}\), \(R_{1,j}\) and \(R_{2,j}\), which we construct explicitly in [35] and realize via a universal set of single-qubit gates. Beyond single-qubit unitary gates, CNOT gates and a measurement operation on the ancilla are performed. With this circuit representation any single qubit map \(\Lambda\) can be simulated with an error \(\leq\varepsilon\) using a computer time of \(O(\text{polylog}(1/\varepsilon))\) [24]. In order to minimize the noise effects of the quantum device, we implemented standard methods of quantum error mitigation [35]. We select the qubits and connections on the platform showing the least error rate for each circuit implementation and optimise the specific gate protocol to minimise the number of CNOT (CX) gates, which are the most prone to generating errors. We make use of readout error mitigation with an exploratory run on the device to uncover its systematic readout error and apply this to correct the measurement results. The data points in Figs. 1 and 4 are each averaged over ten runs of 10000 shots each, i.e. 10000 circuits are implemented according to the probability distribution in Fig. 2(a). Error bars are within the size of the data points. Therefore, the final infidelity is mostly due to systematic errors in the quantum gates and the measurement scheme within a specific circuit calibration [46].

Figure 2: (a) Representation of the algorithm simulating general dynamical maps decomposed as the weighted difference of two CPTP maps, Eq. (4). At each branching point a choice is made with a classical random number generator with the indicated probability. At the dashed line, the measurement of some observable is performed. Finally, the outcomes are rescaled and subtracted from one another. (b) The circuit by [24] to realise the extremal maps (2).

_Simulating General Time Local Master Equations.-_ General trace-preserving time-local master equations are of the form \[\frac{d}{dt}\rho_{t}=\mathcal{L}_{t}(\rho_{t})\\ =-i[H_{t},\rho_{t}]+\sum_{k}\Gamma_{k,t}(L_{k}\rho_{t}L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\rho_{t}\}), \tag{5}\] where the \(L_{k}\) are operators and the \(\Gamma_{k,t}\) are scalar weight functions. The above equation has the appearance of a Lindblad equation except for the fact that the weight functions \(\Gamma_{k,t}\) are not assumed to be positive definite. General time-local master equations describe the evolution of a wide class of open quantum systems, as they can be derived from the Nakajima-Zwanzig equation [47] when its solution has an inverse that exists during a finite time interval [48; 49; 50; 51; 52]. For an initial condition \(\rho_{0}\) the formal solution of (5) is \(\rho_{t}=\Lambda_{t,0}(\rho_{0})=T\exp\left(\int_{0}^{t}ds\,\mathcal{L}_{s} \right)\rho_{0}\), where the map \(\Lambda_{t,0}\) is guaranteed to be CP if the underlying system-environment model is initially in a product state. The master equation (5) generates maps that satisfy the semi-group property: for \(\Lambda_{t,s}\) being the map that evolves a state from time \(s\) to time \(t\), we have \(\Lambda_{t,s}=\Lambda_{t,u}\Lambda_{u,s}\) for \(t\geq u\geq s\). This property is very convenient since we can split up the evolution into smaller segments that evolve the density matrix from one time to the next. However, complete positivity of \(\Lambda_{t,0}\) does _not_ necessarily guarantee that all intermediate maps \(\Lambda_{s,u}\) are CP. In fact, if this is the case, all weights \(\Gamma_{k,t}\) are positive definite and (5) reduces to the conventional Lindblad form. 
If intermediate \(\Lambda_{u,s}\) are not positivity preserving, this implies that not all quantum states are mapped into quantum states, i.e. some quantum states are mapped into operators with negative eigenvalues. Indeed, weight factors \(\Gamma_{k,t}\) taking negative values capture an underlying system-environment model with meaningful entanglement built up between them. In this case, the reduced system state operator at an instant of time is no longer sufficient to describe the subsequent time evolution, as one requires knowledge of the history of the system-environment interaction. To illustrate this, we consider a qubit master equation with four operators and their respective weight functions, i.e. \(L_{1}=\sigma_{-}\), \(L_{2}=\sigma_{+}\), \(L_{3}=\tau_{-}\) and \(L_{4}=\tau_{+}\) with \(\sigma_{\pm}\) being the raising and lowering operators of \(\sigma_{z}\) and \(\tau_{\pm}\) of \(\sigma_{x}\), respectively. As weight factors we choose a typical non-Markovian model with oscillations to negative values that exponentially decay, which mimics resonance with an environmental mode [53]. Figure 3 displays these weight functions, where the grey zone indicates the times at which they are all negative. If the evolution of the density according to (5) starts in this time interval, for short times it will _not_ be positivity preserving. Therefore, the solution of the master equation from these times has to be described within the above framework of general dynamical maps. The excited state population according to (5) is shown in Fig. 4. Results obtained with the CP map \(\Lambda_{t,0}[\rho_{0}]\) with \(\rho_{0}=(1+\sigma_{z})/2\) are shown as a (green) full line for a direct integration of (5) together with those from simulations on the IBM device (squares) following the recipe outlined above. Starting at the intermediate time \(t^{\prime}=0.2\) in the grey area of Fig. 3, diamonds display IBM simulations with the general dynamical map \(\Sigma\equiv\Lambda_{t,t^{\prime}}[\rho_{t^{\prime}}]\). Since at \(t^{\prime}\) all weight functions are negative definite, the solution (for short times, at least) is not CP. In contrast, when forgetting about the past interaction with the environment according to an evolution with \(\Lambda_{t-t^{\prime},0}[\rho_{t^{\prime}}]\) (dashed yellow), the correct dynamics of (5) is not recovered. Having access to the intermediate evolution maps starting from \(t^{\prime}>0\) has great advantages. For example, we are able to evolve a state from \(0\) to \(t^{\prime}\), perform a quantum operation on it and then evolve it further with a completely bounded evolution. The (pink) full line in Fig. 4 displays this situation after applying the unitary transformation \(\sigma_{x}\) to the state at \(t^{\prime}=0.2\), while the (green) triangles show the corresponding IBM simulations of the general dynamical map according to the new scheme.

Figure 4: Excited state population according to solutions of (5) (lines) and IBM simulations (symbols), for an initial state \(\rho_{0}=(1+\sigma_{z})/2\). In the shaded region all weights are negative, so the evolution starting from there is guaranteed not to be positivity preserving. The dynamical map \(\Lambda_{t,0}\) is CP, while starting at a later time \(t^{\prime}=0.2\), the map \(\Sigma\equiv\Lambda_{t,t^{\prime}}\) is only ensured to be a general dynamical map. The (yellow) line shows the evolution of the state from time \(t^{\prime}=0.2\) when forgetting about the past interaction with the environment. The purple line (green triangles) shows the evolution from \(t^{\prime}=0.20\) after the unitary transformation \(\sigma_{x}\) was applied to the state. 
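This behaviour can be reproduced qualitatively by direct integration of (5); in the sketch below the damped-oscillating weights and the omission of the Hamiltonian term are our own illustrative assumptions, since the exact weight functions of [53] are not spelled out here. The smallest eigenvalue of \(\rho_{t}\) is tracked and can dip below zero when the propagation starts inside the all-negative window.

```python
import numpy as np

sm = np.array([[0, 0], [1, 0]], dtype=complex)          # sigma_-
tm = 0.5 * np.array([[1, 1], [-1, -1]], dtype=complex)  # tau_- = |-><+|
OPS = [sm, sm.conj().T, tm, tm.conj().T]                # the four L_k

def dissipator(L, rho):
    Ld = L.conj().T
    return L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)

def gamma(k, t):
    # Damped cosines dipping below zero (illustrative stand-in for [53]):
    return np.cos(3.0 * t + 0.3 * k) * np.exp(-t)

def rhs(t, rho):  # Eq. (5) with H_t = 0 for simplicity
    return sum(gamma(k, t) * dissipator(L, rho) for k, L in enumerate(OPS))

def rk4(rho, t0, t1, steps=2000):
    """RK4 integration of (5); returns the final state and the smallest
    eigenvalue of rho encountered along the trajectory."""
    dt = (t1 - t0) / steps
    min_eig = np.inf
    for n in range(steps):
        t = t0 + n * dt
        k1 = rhs(t, rho)
        k2 = rhs(t + dt / 2, rho + dt / 2 * k1)
        k3 = rhs(t + dt / 2, rho + dt / 2 * k2)
        k4 = rhs(t + dt, rho + dt * k3)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        min_eig = min(min_eig, np.linalg.eigvalsh(rho).min())
    return rho, min_eig

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # excited state
for t_start in (0.0, 1.2):  # at t = 1.2 all four toy weights are negative
    out, min_eig = rk4(rho0, t_start, t_start + 0.5)
    print(f"start {t_start}: trace {np.trace(out).real:.6f}, "
          f"smallest eigenvalue seen {min_eig:.4f}")
```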
_Quantum State Recovery.-_ Another intriguing consequence of the new simulation method is that one can recover the initial state of a Lindblad evolution obtained on a quantum device by implementing its time-reversed master equation [44]. Concretely, we consider a master equation for a qubit weakly coupled to a thermal reservoir, \(\frac{d}{dt}\rho_{t}=\mathcal{L}_{t}(\rho_{t})\) with \[\mathcal{L}_{t}(\rho_{t})=-i\omega[\sigma_{z}/2,\rho_{t}]+\gamma e^{\beta \omega}\left(\sigma_{-}\rho_{t}\sigma_{+}-\frac{1}{2}\{\sigma_{+}\sigma_{-}, \rho_{t}\}\right)+\gamma\left(\sigma_{+}\rho_{t}\sigma_{-}-\frac{1}{2}\{ \sigma_{-}\sigma_{+},\rho_{t}\}\right) \tag{6}\] and its time-reversed evolution \(\frac{d}{dt}\rho_{t}=-\mathcal{L}_{t}(\rho_{t})\). Thus, evolving a state for a time \(t\) with (6) and then for a time \(s\leq t\) with its time-reversed evolution results in the state obtained by just evolving with (6) for a time \(t-s\). We implement both the forward and backward evolutions on the IBM device with our simulation scheme (4). In Fig. 1, data points for \(t\leq t^{*}\) and various initial states reflect the thermalization dynamics of (6). At \(t=t^{*}\) the recovery sets in to approach earlier states in the dissipative time evolution. Note that the recovery is performed by mapping the state \(\rho(t=t^{*})\) directly to each recovered state with only one algorithm run per state. _Outlook.-_ We have shown that the class of general qubit dynamical maps can be straightforwardly simulated using just four extremal CP maps, each consisting of two pairs of Kraus operators. Environmental noise on a quantum system is generally described by a CP map. As the inverse of a CP map is a general dynamical map, one of the promising applications of our simulation scheme is to perform error mitigation by recovering the typically unknown bare qubit state in the absence of any environmental coupling. We prove the viability of this in Fig. 1 on an IBM quantum processor. A next step is to determine the noise CP map of a single qubit of a quantum processor and to revert it using our simulation scheme, thus performing genuine quantum error mitigation. _Acknowledgements_ We thank P. Muratore-Ginanneschi, M. Donvil and J. Stockburger for valuable discussions. Financial support through the WM-BW within the Quantum Computing Competence Network BW (SiQuRe), the BMBF within QSens (QComp), and QSolid (BMBF) is gratefully acknowledged.
2303.14714
On lower bounds of the order of $k$-chromatic unit distance graphs
Here we give refined numerical values for the minimum number of vertices of $k$-chromatic unit distance graphs in the Euclidean plane.
Aubrey D. N. J. de Grey, Jaan Parts
2023-03-26T13:02:20Z
http://arxiv.org/abs/2303.14714v1
# On lower bounds of the order of \(k\)-chromatic unit distance graphs ###### Abstract Here we give refined numerical values for the minimum number of vertices of \(k\)-chromatic unit distance graphs in the Euclidean plane. ## Background In the previous issue of _Geombinatorics_, an article [4] was published, in which Haydn Gwyn and Jacob Stavrianos obtained new estimates for the _minimum_ number of vertices \(v_{k}\) and edges \(e_{k}\) for arbitrary \(k\)-chromatic\({}^{1}\) _unit distance graphs_ in the plane for \(k=5\) and \(k=6\). Footnote 1: For clarity, we emphasize that here \(k\) means the number of colors in the proper vertex coloring of the _graph_ and differs by one from similar designations in [4] and other publications mentioned below, where it is associated with the _tiling_ of the plane. It is known that \(v_{3}=3\) and \(v_{4}=7\) (provided by the unit triangle and the Moser graph). For \(k\geq 5\), exact values of \(v_{k}\) are not known. Initial _lower bounds_ \(v_{5}>12\), \(v_{7}>6197\) were obtained by Dan Pritikin [7]. For \(k=5\), the finite _upper bounds_ are\({}^{2}\) \(v_{5}\leq 509\), \(e_{5}\leq 2406\), see [5]. Footnote 2: In [5] the corresponding 509-vertex graph has 2442 edges, but is not edge-critical, which allows us to reduce \(e_{5}\). We were able to discard 36 edges, but we didn’t perform an exhaustive search, so further improvements are possible. Pritikin’s approach is based on finding a proper tiling of the plane using \(k-1\) colors, which covers most of the plane, so that the average _density_ \(\delta\) of the _uncolored_ area is minimal. Thus, \(k-1\) colors are enough for any unit distance graph with at most \(\lfloor 1/\delta\rfloor\) vertices, and a \(k\)-chromatic graph must have at least one more vertex. In [6] we slightly improved the lower bounds to \(v_{6}>24\) and \(v_{7}>6992\) by modifying the tiling construction. The main idea of Gwyn-Stavrianos’ approach is to first estimate the minimum number of edges \(e_{k}\), and then proceed to the number of vertices \(v_{k}\) using known relations. To do this, a tiling of the plane in \(k-1\) colors is found, minimizing the _probability_ \(p_{k-1}\) of the formation of a _mono-chromatic edge_ (such that both vertices have the same color). The value \(1/p_{k-1}\) gives a lower bound on the number of edges \(e_{k}\). For \(k=5\), [4] uses a tiling by regular hexagons of four colors with a diameter of about \(1.1335\). For \(k=6\), it is overlaid with a set of disks of unit diameter, located at a unit distance from each other with an average density \(\delta=\pi/(8\sqrt{3})\) and colored in the fifth color, and a lemma is used that allows passing from \(p_{4}\) to \(p_{5}\): \(p_{k}\leq(1-2\delta)\,p_{k-1}\). According to the calculations of Gwyn and Stavrianos, \(e_{5}\geq 98\), \(v_{5}\geq 22\), \(e_{6}\geq 180\), \(v_{6}\geq 32\). In this note, we offer several refinements that allow us to slightly increase these numerical values. ## 2 Improvements First, note that in [4], to go from the number of edges \(e_{k}\) to the number of vertices \(v_{k}\), the relation \(e_{k}<v_{k}^{3/2}\) was used, which was obtained by Paul Erdos in [3]. Peter Agoston and Domotor Palvolgyi recently obtained tighter bounds [1]: \(e_{k}\leq\sqrt[3]{29/4}\,v_{k}^{4/3}\). In the range \(20\leq v_{k}<521\), better estimates can be derived from the formula\({}^{3}\) [1]: \(v_{k}\geq 2\,r-11+24/v_{k}+(1-s)\binom{\lfloor r\rfloor}{2}+s\binom{ \lceil r\rceil}{2}\), where \(r=2e_{k}/v_{k}\), \(s=r-\lfloor r\rfloor\). 
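The passage from an edge bound to a vertex bound is mechanical: given a lower bound on \(e_{k}\), one takes the least \(v_{k}\) compatible with the displayed inequality. A short script (restricted to the range \(20\leq v_{k}<521\) where the formula applies) reproduces the values reported below, e.g. \(e_{5}\geq 99\) yields \(v_{5}\geq 28\):

```python
from math import floor, ceil, comb

def admissible(v, e):
    """Agoston-Palvolgyi inequality for a unit distance graph with
    v vertices and e edges (stated for 20 <= v < 521)."""
    r = 2 * e / v
    s = r - floor(r)
    rhs = (2 * r - 11 + 24 / v
           + (1 - s) * comb(floor(r), 2) + s * comb(ceil(r), 2))
    return v >= rhs

def min_vertices(e_min, lo=20, hi=521):
    """Smallest v in [lo, hi) that can carry e_min unit-distance edges."""
    return next((v for v in range(lo, hi) if admissible(v, e_min)), None)

print(min_vertices(99))    # 28  ->  v_5 >= 28 from e_5 >= 99
print(min_vertices(182))   # 42  ->  v_6 >= 42 from e_6 >= 182
```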
Applying these relations to \(e_{k}\), we get a noticeable improvement in the \(v_{k}\) estimates. Footnote 3: For completeness, note that according to [1] for \(1\leq v_{k}\leq 15\) the minimum number of edges \(e_{k}\) is known exactly: 0, 1, 3, 5, 7, 9, 12, 14, 18, 20, 23, 27, 30, 33, and 37; while for \(16\leq v_{k}\leq 19\) the upper bounds on \(e_{k}\) are 42, 47, 52, and 57. For the next \(20\leq v_{k}<60\), using the formula, we get the following upper bounds on \(e_{k}\): 63, 68, 72, 77, 82, 87, 92, 97, 102, 108, 113, 119, 124, 130, 136, 142, 148, 154, 160, 166, 172, 179, 185, 192, 198, 205, 212, 218, 225, 232, 239, 246, 254, 261, 268, 276, 283, 291, 298, 306. In [4] in the case of \(k=6\), the plane was partially covered by disks with density \(\delta=\pi/(8\sqrt{3})\approx 0.226725\). Instead, one can use Croft's tiling [2] with rounded 12-gons and a density of about \(\delta\approx 0.229365\), which slightly improves the \(e_{6}\) estimate. Also note that in [4] rounding down was used to get the \(e_{k}\) estimates. It is more correct to use rounding up, \(e_{k}\geq\lceil 1/p_{k-1}\rceil\), which gives an increase by one more. In [4], a statistical method was used to find the value of \(p_{k}\): a repeating section \(S\) of the tiling was selected, on which a random unit edge was repeatedly superimposed, defined by the coordinates \(\sigma\) of its first vertex and by the orientation angle \(\phi\) of its second one, and the average value of the binary function \(M_{k}(\sigma,\phi)\) was determined, which takes the value \(1\) or \(0\) depending on whether an edge is mono-chromatic or not. Instead, we used an algebraic method, calculating the integral \(p_{k}=\frac{1}{2\pi S}\int_{S}d\sigma\int_{0}^{2\pi}d\phi\,M_{k}(\sigma,\phi)\) in the Mathematica package, which allowed us to refine the optimal tiling parameters. However, we encountered some problems with the numerical integration function NIntegrate in the general case, and were forced to limit ourselves to relatively simple tilings. As a result, we get the bounds: \(e_{5}\geq 99\), \(v_{5}\geq 28\), \(e_{6}\geq 182\), \(v_{6}\geq 42\). We also tried Gwyn-Stavrianos' approach for the case \(k=7\), and obtained the following estimates for a tiling close to Pritikin's construction using pentagonal tiles: \(e_{7}\geq 232646\), \(v_{7}\geq 6456\). (For the construction using hexagonal tiles, we were able to get only \(e_{7}\geq 69451\) and \(v_{7}\geq 2608\).) This is better than [7], but falls short of [6]. However, there is still some hope of beating the latter one by using tiles with curved borders (the so-called "wavy edges" used in [6]).
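To make the passage from edge bounds to vertex bounds easy to reproduce, here is a minimal sketch in C (ours, not from [1] or [4]; the helper names are invented) that scans for the smallest \(v\) compatible with the quoted inequality. With the edge bounds above it returns \(v_{5}\geq 28\) from \(e_{5}\geq 99\) and \(v_{6}\geq 42\) from \(e_{6}\geq 182\).

```c
#include <stdio.h>
#include <math.h>

/* C(n,2), the number of pairs among n items. */
static double choose2(double n) { return n * (n - 1.0) / 2.0; }

/* Smallest v in [20,521) satisfying the inequality of [1]:
   v >= 2r - 11 + 24/v + (1-s)*C(floor(r),2) + s*C(ceil(r),2),
   with r = 2e/v and s = r - floor(r), for a given edge bound e. */
static int min_vertices(int e) {
    for (int v = 20; v < 521; v++) {
        double r = 2.0 * e / v;
        double s = r - floor(r);
        double rhs = 2.0 * r - 11.0 + 24.0 / v
                   + (1.0 - s) * choose2(floor(r))
                   + s * choose2(ceil(r));
        if (v >= rhs) return v;
    }
    return -1; /* outside the range where the formula applies */
}

int main(void) {
    printf("e_5 >= 99  implies v_5 >= %d\n", min_vertices(99));  /* prints 28 */
    printf("e_6 >= 182 implies v_6 >= %d\n", min_vertices(182)); /* prints 42 */
    return 0;
}
```

The scan is restricted to \(20\leq v<521\), the range in which the formula of [1] is stated to apply.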
2301.05060
Evaluating the Fork-Awareness of Coverage-Guided Fuzzers
Fuzz testing (or fuzzing) is an effective technique used to find security vulnerabilities. It consists of feeding the software under test with malformed inputs, waiting for a weird system behaviour (often a crash of the system). Over the years, different approaches have been developed, and among the most popular lies the coverage-based one. It relies on the instrumentation of the system to generate inputs able to cover as much code as possible. The success of this approach is also due to its usability, as fuzzing research pursues approaches that do not require (or only partially require) human interaction. Despite the efforts, devising a fully-automated fuzzer still seems to be a challenging task. Target systems may be very complex; they may integrate cryptographic primitives, compute and verify check-sums and employ forks to enhance the system security, achieve better performance or manage different connections at the same time. This paper introduces the fork-awareness property to express a fuzzer's ability to manage systems using forks. This property is leveraged to evaluate 14 of the most widely used coverage-guided fuzzers and highlight how current fuzzers are ineffective against systems using forks.
Marcello Maugeri, Cristian Daniele, Giampaolo Bella, Erik Poll
2023-01-12T14:57:34Z
http://arxiv.org/abs/2301.05060v1
# Evaluating the Fork-Awareness of Coverage-Guided Fuzzers ###### Abstract Fuzz testing (or fuzzing) is an effective technique used to find security vulnerabilities. It consists of feeding the software under test with malformed inputs, waiting for a weird system behaviour (often a crash of the system). Over the years, different approaches have been developed, and among the most popular lies the coverage-based one. It relies on the instrumentation of the system to generate inputs able to cover as much code as possible. The success of this approach is also due to its usability, as fuzzing research pursues approaches that do not require (or only partially require) human interaction. Despite the efforts, devising a fully-automated fuzzer still seems to be a challenging task. Target systems may be very complex; they may integrate cryptographic primitives, compute and verify check-sums and employ _forks_ to enhance the system security, achieve better performance or _manage different connections at the same time_. This paper introduces the _fork-awareness_ property to express a fuzzer's ability to manage systems using forks. This property is leveraged to evaluate 14 of the most widely used coverage-guided fuzzers and highlight how current fuzzers are ineffective against systems using _forks_. Fuzzing, Fork, Security Testing, Software Security ## 1 Introduction In recent years, plenty of fuzzers have been developed to deal with sophisticated software, and nowadays it is extremely common that network systems employ forks to deal with different connections at the same time. This leads to 1) the need to devise accurate and ad-hoc fuzzers and 2) the need to evaluate these fuzzers according to their ability to cope with such advanced systems. Unfortunately, as pointed out in [1], it is not easy to benchmark all of them since the fuzzers are very different from each other. Metzman et al. faced this problem by devising _FuzzBench_ [11], an open-source service for the evaluation of stateless fuzzers. Later, Natella and Pham presented _ProFuzzBench_ [20], which similarly to _FuzzBench_ provides a service to evaluate stateful fuzzers. Although FuzzBench includes a sample of real-world programs and ProFuzzBench includes different network systems (i.e. systems that often employ forks to deal with multiple connections [15]), they do not evaluate the ability of the fuzzers to cope with programs that use forks. Despite _forks_ representing the only way to create a new process [15], experimental results have shown that current fuzzers cannot deal with forked processes. The existing approach merely relies on code modifications to remove the forks. Unfortunately, this approach goes against the willingness to reduce manual work and improve automation during a fuzzing campaign [1]. In this work, we explore and classify the limitations current fuzzers exhibit when facing forking programs. In summary, this paper: 1. devises a novel property capturing the ability of fuzzers to deal with forks appropriately; 2. evaluates 14 coverage-guided fuzzers based on this property; 3. proposes possible improvements to the current state-of-the-art and future directions. The paper is organised as follows. Section 2 describes the relevant background, Section 3 presents our contributions to knowledge, Section 4 shows the existing approaches that try to cope with the fork problem and, finally, Section 5 discusses the results and proposes possible future directions.
## 2 Background ### Fuzz testing Fuzzing is an automated testing technique pioneered by Miller et al. [14] in 1990 to test UNIX utilities. As outlined in Figure 1, coverage-guided fuzzing is composed of at least three stages: seed selection, input generation and system execution. #### 2.1.1 Seed selection The user must provide some input messages (seeds) representative of the usual inputs for the system. #### 2.1.2 Input generation The core of every fuzzer is the generation of slightly malformed input messages to forward to the software under test. A fuzzer is only as effective as its generated inputs are at breaking the system. According to the approach used to generate the messages, the fuzzers may be classified into: * _dumb_: generate random strings (as the first fuzzer [14] did); * _dumb mutational_: blindly mutate seed messages provided by the user; * _grammar-based_: leverage the grammar of the system to craft the input messages; * _smart mutational_ (often called _evolutionary_): require a sample of inputs and leverage _feedback mechanisms_ to craft system-tailored messages. An example of a feedback mechanism is code coverage feedback, explored in Section 2.2. #### 2.1.3 System execution Each execution of the fuzzer involves three components: * _Bugs detector_: it reports any bugs found. The majority of bug detectors only report crashes; however, for many systems a deviation from the happy flow of the protocol may also represent a significant security issue; * _Hangs detector_: it detects program execution hangs; * _Code coverage detector_: as further explained in Section 2.2, the code coverage represents one of the feedbacks the fuzzer leverages to improve the quality of the input messages. ### Coverage-Guided Fuzzing _Smart mutational_ fuzzers use feedback mechanisms to steer the generation of the messages. Different types of feedback mechanisms exist [15], and often different terms are used to express the same idea. To avoid further noise, in this work we use the term _code coverage_ to express the lines of code that are reached by a specific message. Code coverage fuzzers need to recompile the code with ad-hoc compilers (e.g. the AFL compiler) to instrument the code and obtain run-time information. AFL [11], for example, instruments the code to fill a bitmap that represents the lines of the code covered by the inputs. Later, it uses this bitmap to assign a higher score to messages able to explore previously unseen lines of code. Figure 1: Coverage-guided fuzzing process ### Inter-Process Communication Operating systems provide _system calls_ to perform different tasks (e.g. writing and reading files, accessing hardware services, creating and executing new processes). On UNIX systems, new processes are created by using the _fork system call_ [15]. In short, the first process, called _parent process_, generates a clone, called _child process_, that is an exact copy of the parent process. After the fork, file descriptors and registers are duplicated, thus a change in one of the processes does not affect the other one. Also, the parent and child process will follow _separate execution paths_. ## 3 Our contribution This paper aims to understand how state-of-the-art coverage-guided fuzzers deal with software under test containing forks. It was not obvious to come up with a way to compare and contrast the various tools. We devised a novel property, _fork awareness_, that must be satisfied when a fuzzer deals with forks effectively and efficiently.
As we shall see below, fork awareness rests upon three aspects representing the ability to deal with child processes. Also, we evaluate the novel property over the most widely used fuzzers from two benchmark frameworks, reaching a total of 14 evaluated tools, 11 drawn from FuzzBench and 3 from ProFuzzBench. ### Fork-awareness Abstractly, fork awareness insists that every fuzzer should treat the child process in the same way as the parent one. During the system execution, the system monitor should detect bugs or hangs regardless of their location, and the coverage should be measured also in the child process. This is formalised through Definition 1. **Definition 1**.: A coverage-guided fuzzer is _fork-aware_ if it can detect bugs and hangs and measure coverage in the same way for both the child's and the parent's branch. The three aspects in this definition are called: **(C.1) Child bugs detection**: any anomaly is reported also if it occurs in child processes; **(C.2) Child hangs detection**: any infinite hang is reported also if it occurs in child processes; **(C.3) Child code coverage**: code coverage is measured also for child processes. ### Example challenges We wrote three simple C programs to use as challenges for the fuzzers, namely to test whether the fuzzers satisfy the aspects given above. 1. _Bugs detection challenge_:

```c
/* Requires <sys/wait.h>, <signal.h>, <unistd.h>. */
if (fork() == 0) {    /* Child process */
    raise(SIGSEGV);   /* Simulated crash */
} else {              /* Parent process */
    wait(NULL);       /* Wait for child termination */
}
```

The snippet sends a _SIGSEGV_ signal to simulate a bug in the child process. This signal is used to report a segmentation fault, i.e. a memory access violation, which is common in programs written in low-level languages. The fuzzer must detect this bug also after the parent's termination. 2. _Hangs detection challenge_:

```c
if (fork() == 0) {    /* Child process */
    while (1) { ; }   /* Simulation of blocking code */
}
```

The snippet simulates an infinite loop in the child process. The fuzzers must report processes still in execution after the loop and must kill child processes at the end of the fuzzing campaign, avoiding pending process executions. 3. _Code coverage challenge_:

```c
pid_t pid = fork();
if (pid == 0) {       /* Child process; 'data' is derived from the fuzzer input */
    if (data % 2 == 0) { do_something(); } else { do_something(); }
    if (data % 3 == 0) { do_something(); } else { do_something(); }
    if (data % 5 == 0) { do_something(); } else { do_something(); }
    if (data % 7 == 0) { do_something(); } else { do_something(); }
} else {              /* Parent process */
    wait(NULL);       /* Wait for child termination */
}
```

This snippet simulates a child with several branches. A fuzzer must cover and consider all of the child's branches. We ran the 14 fuzzers over these challenges and organised the results in Table 1. We noticed that none of the fuzzers succeeded in all three challenges. ### Testbed We decided to analyse only the coverage-guided fuzzers present in FuzzBench [11] and ProFuzzBench [12], even though the property applies to every coverage-guided fuzzer. All fuzzers were executed on an _Ubuntu 20.04_ server machine and all our source code is freely available online4 so that our experiments are fully reproducible. Footnote 4: [https://github.com/marcellomaugeri/forks-break-afl](https://github.com/marcellomaugeri/forks-break-afl) ### Fuzzers evaluation We ran all selected fuzzers against our three example challenges. Table 1 summarises our findings. All the fuzzers based on AFL use _POSIX signals_ and a bitmap, respectively, to report bugs and keep track of the code coverage.
As shown in Table 1, while the bitmaps are able to keep track of the child's code coverage, bugs triggered in the child's processes are not detected, since AFL catches signals from the main process only, as pointed out in the documentation5. The only fuzzers able to detect bugs in the child process are LibFuzzer6, Entropic [1] and Honggfuzz7, as discussed in more detail below: Footnote 5: [https://github.com/google/AFL/blob/master/README.md](https://github.com/google/AFL/blob/master/README.md) Footnote 6: [https://llvm.org/docs/LibFuzzer.html](https://llvm.org/docs/LibFuzzer.html) * _LibFuzzer_ and _Entropic_ [1] employ a set of sanitizers9 to report bugs. These mechanisms make the fuzzers able to find the bug in Challenge 1 and measure the different code paths in Challenge 3, thereby satisfying aspects C.1 and C.3, as seen above. Unfortunately, aspect C.2 is not satisfied since the fuzzer cannot detect hangs in the child process. Footnote 9: AddressSanitizer, UndefinedBehaviorSanitizer and MemorySanitizer * _Honggfuzz_ supports different software/hardware feedback mechanisms and a low-level interface to monitor targets. When executed on Linux machines, Honggfuzz uses the _ptrace_ system call to manage processes. This mechanism allows the fuzzer to capture a wide range of signals. As shown in Table 1, the use of ptrace (along with _SanitizerCoverage_) allows the fuzzer to detect bugs and to consider coverage also in the child process. Unfortunately, not even this mechanism is able to detect hangs in the child process. In summary, while all selected fuzzers detect the code coverage (C3), none detect hangs (C2) and only a few detect bugs (C1) in the child process. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Fuzzer & Based on & Monitor technique & Bugs detection (C1) & Hangs detection (C2) & Code coverage (C3) \\ \hline AFL [13] & - & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline AFL++ [14] & AFL & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline AFLFast [1] & AFL & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline AFLSmart [1] & AFL & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline Eclipser [1] & AFL & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline FairFuzz [15] & AFL & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline LafIntel [1] & AFL & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline AFLnwe1 & AFL & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline AFLNet [10] & AFL & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline MOpt-AFL [18] & AFL & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline StateAFL [12] & AFLNet & POSIX signals & \(\times\) & \(\times\) & \(\checkmark\) \\ \hline LibFuzzer2 & - & UBSAN / ASAN / MSAN & \(\checkmark\) & \(\times\) & \(\checkmark\) \\ \hline Entropic [1] & LibFuzzer & UBSAN / ASAN / MSAN & \(\checkmark\) & \(\times\) & \(\checkmark\) \\ \hline Honggfuzz3 & - & ptrace (Linux) & \(\checkmark\) & \(\times\) & \(\checkmark\) \\ \hline \end{tabular} \end{table} Table 1: Coverage-guided fuzzers evaluation The evaluation underlines that: * _Loops detection challenge_ is the most difficult, because fuzzers do not wait for all the child processes but only for the main one; * _Code coverage challenge_ is the easiest, because the instrumentation allows measuring coverage from the execution, regardless of the process involved; * _Bug detection challenge_ depends on the technique used to observe bugs, as well as on the use of sanitisers. We interpret this general outcome as a clear call for future research and developments. ## 4 Existing solutions Nowadays the only solutions to fuzz programs that use forks are manually modifying the code or breaking the multi-process nature of the system (by employing tools like defork10) in order to get rid of the forks. Footnote 10: [https://github.com/zardus/preeny/blob/master/src/defork.c](https://github.com/zardus/preeny/blob/master/src/defork.c) Unfortunately, making modifications to the code to remove all the forks, as pointed out in the AFLNet documentation11, is a challenging and error-prone task, and breaking the multi-process nature of the system often leads to weird system behaviours. The only solution, therefore, remains to modify the fuzzers. Footnote 11: [https://github.com/aflnet/aflnet](https://github.com/aflnet/aflnet) ## 5 Conclusions This paper analyses the fork awareness of coverage-guided fuzzers using three different aspects. The analysis conducted on 14 well-known fuzzers highlights that, while it is clear how important it is to handle multi-process programs, the majority of the fuzzers overlook the problem. 11 of 14 fuzzers are not able to detect bugs in the child process. The intuition behind these outcomes is related to the way these fuzzers detect bugs. All the AFL-derived fuzzers use signals (SIGSEGV, SIGABRT, etc.) to detect bugs, and this mechanism misses bugs in child processes. We noticed that dealing with forks is not the only problem, and other issues may be related to the IPC scheduling. For example, the IPC may influence the success of the fuzzing process, since some bugs may be triggered only after a specific process schedule and only after access to a particular cell of memory. We believe this paper represents a first step towards the devising of fuzzers aware of the possible multi-process nature of the software. The first step to achieve this goal might be the implementation of a _loop detector_ at an early stage, e.g. by leveraging a dynamic library to keep track of all process identifiers of forked processes (a sketch of such a shim is given below). To summarise, this work not only provides the first concrete way to evaluate fuzzers according to their fork awareness, but sheds light for the first time on a class of problems that have been ignored until now, showing interesting future directions.
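To make this future direction concrete, here is a minimal sketch (ours, not an existing tool) of the PID-tracking idea: an `LD_PRELOAD` library that interposes `fork()` and logs every child PID, so that an external monitor could then watch the children for crashes and hangs, e.g. via `waitpid()`. One would compile it with `gcc -shared -fPIC -o forktracker.so forktracker.c -ldl` and run the target as `LD_PRELOAD=./forktracker.so ./target`.

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical LD_PRELOAD shim: interposes fork() and logs every
   child PID, so that an external monitor can track all processes. */
pid_t fork(void) {
    static pid_t (*real_fork)(void) = NULL;
    if (!real_fork)
        real_fork = (pid_t (*)(void))dlsym(RTLD_NEXT, "fork");
    pid_t pid = real_fork();
    if (pid > 0)  /* parent branch: record the newly created child */
        fprintf(stderr, "[fork-tracker] parent %d spawned child %d\n",
                (int)getpid(), (int)pid);
    return pid;
}
```

A full loop detector would additionally poll the recorded PIDs at the end of each run and flag any that are still alive.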
2307.01188
Dark energy, D-branes, and Pulsar Timing Arrays
Several pulsar timing array (PTA) collaborations recently announced the first detection of a stochastic gravitational wave (GW) background, leaving open the question of its source. We explore the possibility that it originates from cosmic inflation, a guaranteed source of primordial GW. The inflationary GW background amplitude is enhanced at PTA scales by a non-standard early cosmological evolution, driven by Dirac-Born-Infeld (DBI) scalar dynamics motivated by string theory. The resulting GW energy density has a broken power-law frequency profile, entering the PTA band with a peak amplitude consistent with the recent GW detection. After this initial DBI kination epoch, the dynamics starts a new phase mainly controlled by the scalar potential. It provides a realization of an early dark energy scenario aimed at relaxing the $H_0$ tension, and a late dark energy model which explains the current cosmological acceleration with no need of a cosmological constant. Hence our mechanism - besides providing a possible explanation for the recent PTA results - connects them with testable properties of the physics of the dark universe.
Debika Chowdhury, Gianmassimo Tasinato, Ivonne Zavala
2023-07-03T17:53:37Z
http://arxiv.org/abs/2307.01188v3
###### Abstract Several pulsar timing array (PTA) collaborations recently announced the first detection of a stochastic gravitational wave (GW) background, leaving open the question of its source. We explore the possibility that it originates from cosmic inflation, a guaranteed source of primordial GW. The inflationary GW background amplitude is enhanced at PTA scales by a non-standard early cosmological evolution, driven by Dirac-Born-Infeld (DBI) scalar dynamics motivated by string theory. The resulting GW energy density has a broken power-law frequency profile, entering the PTA band with a peak amplitude consistent with the recent GW detection. After this initial DBI kination epoch, the dynamics starts a new phase mainly controlled by the scalar potential. It provides a realization of an early dark energy scenario aimed at relaxing the \(H_{0}\) tension, and a late dark energy model which explains the current cosmological acceleration with no need of a cosmological constant. Hence our mechanism - besides providing a possible explanation for the recent PTA results - connects them with testable properties of the physics of the dark universe. **Dark energy, D-branes, and Pulsar Timing Arrays** Debika Chowdhury \({}^{1}\), Gianmassimo Tasinato \({}^{2,1}\), Ivonne Zavala \({}^{1}\) \({}^{1}\) Department of Physics, Swansea University, SA2 8PP, United Kingdom \({}^{2}\) Dipartimento di Fisica e Astronomia, Università di Bologna, Italia emails: debika.chowdhury, g.tasinato, e.i.zavalacarrasco at swansea.ac.uk ## 1 Introduction Cosmic inflation, well tested at large CMB scales, is a guaranteed source of primordial tensor fluctuations [1, 2, 3, 4, 5]. However, their amplitude is typically too small to be directly detected by gravitational wave (GW) experiments. In this work we explore a post-inflationary mechanism capable of enhancing the size of inflationary tensor fluctuations at frequencies detectable by pulsar timing arrays (PTA) (for other examples of post-inflationary mechanisms, see [6, 7, 8]). The GW energy density parameter \(\Omega_{\rm GW}\) depends on the early-time behavior of the cosmological scale factor, which can be influenced by the presence of stiff matter [9, 10, 11], or scalar fields coupled conformally [12, 13, 14, 15, 16, 17] or disformally [18, 19] to the metric. Extending the results of [19], we show that an early epoch of scalar domination - motivated by a quantum gravity approach to early-universe cosmology - induces a non-standard cosmological evolution, which enhances the amplitude of \(\Omega_{\rm GW}\) at frequencies around \(10^{-8}\) Hz. Our framework provides a possible contribution towards the explanation of the recent detection of a stochastic GW background, as announced by several PTA collaborations: NANOGrav [20], PPTA [21], EPTA [22], CPTA [23] (see also e.g. [24, 25] for some of their own preliminary interpretations in terms of early universe sources). At the same time, it may ameliorate some well-known cosmological problems related to the Hubble tension (see [26, 27] for reviews), and the nature of dark energy. We start with a theoretical section 2 developing our string-motivated set-up based on the dynamics of a D-brane in a higher-dimensional space-time. The D-brane motion through extra-dimensional angular directions is described in terms of an axionic field, coupled to four-dimensional gravity, and to matter living on the brane through a disformal coupling [28].
The scalar-tensor action includes a kinetic term of Dirac-Born-Infeld (DBI) form, which is instrumental in controlling an initial phase of kinetically-driven scalar evolution that enhances the GW spectrum. Moreover, the Lagrangian includes a potential term, responsible for driving both an early and a late dark energy dominated epoch (for an example of a cascading dark energy scenario, see [29]). We continue with a phenomenological section 3, which starts by examining how the initial scalar dynamics drives an initial epoch of coupled DBI kination. This phase affects the early evolution of the Hubble parameter, enhancing the size of the inflationary SGWB spectrum, which acquires a broken power-law profile with a peak amplitude well within the sensitivity curves of PTA experiments. Interestingly, the characteristic scale controlling the DBI kination is comparable with the scale of the QCD transition. We compare the peak amplitude of our profile with recent data from the NANOGrav collaboration [20], finding overall agreement. After examining the consequences of our set-up for GW physics, in section 3.2 we discuss how the initial DBI kination phase is connected to a stage where the scalar dynamics is mainly controlled by a string-motivated axion potential. The structure of the axion potential is determined by the isometries of the extra-dimensional space, as well as non-perturbative effects. It can be sufficiently rich to first drive a phase characterized by early dark energy - which can address the Hubble tension - followed by a late dark energy epoch, which can explain the current acceleration of our universe. Interestingly, the corresponding axion decay constants acquire sub-Planckian values. They might represent an acceptable set-up for building models of dark energy within string theory (see e.g. [30] for a review). ## 2 Our set-up In this section we discuss the effective action for a string-motivated D-brane system. It can be described in terms of a scalar-tensor theory with interesting consequences for cosmology and the physics of gravitational waves. Our scenario is motivated by D-brane scalar-tensor theories as discussed in [31]: we consider a (stack of) D-brane(s) moving along the angular direction of an internal warped compactification. The axion field \(\phi\) is associated with the angular position of the brane through the extra-dimensional space. The system is described by a scalar-tensor theory characterized by disformal couplings to matter fields on the brane [28] (see [32] for a review). The reader interested in more general scenarios - including conformal couplings between the scalar and the metric - can consult [17, 18, 19, 33]. The scalar-tensor action we consider is: \[S_{\rm tot}\,=\,S_{\phi}+S_{\rm m}, \tag{2.1}\] where \[S_{\phi} = \int d^{4}x\,\sqrt{-g}\,\left[\frac{R}{2\kappa^{2}}-M^{4}\,\sqrt{ 1+\frac{(\partial\phi)^{2}}{M^{4}}}+M^{4}-V(\phi)\right]\,, \tag{2.2}\] \[S_{\rm m} = -\int d^{4}x\,\sqrt{-g}\,\mathcal{L}_{\rm m}(\tilde{g}_{\mu\nu})\,. \tag{2.3}\] \(\kappa^{2}=M_{\rm pl}^{-2}=8\,\pi\,G\), and \(M\) is a mass scale entering in the scalar kinetic term, related to the (possibly warped) tension of the D-brane, as well as other properties such as warping and fluxes. Matter fields contained in action (2.3) are disformally coupled to the metric \(g_{\mu\nu}\) entering in eq (2.2) via \[\tilde{g}_{\mu\nu}\,=\,g_{\mu\nu}+\frac{\partial_{\mu}\phi\,\partial_{\nu}\phi }{M^{4}}\,.
\tag{2.4}\] We assume that matter is a perfect fluid, characterized by pressure and energy density only (see [31] for details). The scalar kinetic terms have the characteristic Dirac-Born-Infeld (DBI) form of D-brane actions [34]. The scalar potential \(V(\phi)\) in eq (2.2) contains various contributions, which are periodic functions of the size of the internal angular dimensions. We focus on the case of a D-brane fixed at a specific value of the radial component along the (warped) extra dimensions. Its position is also fixed within four internal angular directions, leaving the object free to move along only one angular direction in the warped compact space. The motion of the brane through this angular direction is controlled by the axion field \(\phi\) appearing in the scalar-tensor actions (2.2). See e.g. [35, 36] for a scenario where this idea is explicitly explored to realise natural inflation with D3 and D5 branes in a warped resolved conifold geometry [37, 38, 39]. Let us be a bit more explicit about the structure of the axion potential. The D-brane scalar potential entering eq (2.2) acquires the schematic form [35, 40]: \[V(\theta)=\bar{V}(\rho_{0})+\delta\left(\overline{\Phi}_{-}(\rho_{0})+\Phi_{h} (\rho_{0},\theta)\right)\,, \tag{2.5}\] with \(\theta\) the angular direction along which the D-brane moves, that - upon canonical normalization - will give rise to the scalar field \(\phi\) appearing in action (2.2). The quantity \(\delta\) is a constant parameter depending on the type of D-brane considered. \(\rho\) is the radial coordinate fixed at \(\rho=\rho_{0}\). \(\bar{V}\) and \(\overline{\Phi}_{-}\) represent contributions to the D-brane potential depending only on the fixed radial coordinate \(\rho_{0}\). Instead, \(\Phi_{h}\) depends on the angular coordinate as well, and for a warped resolved conifold is given by [35]: \[\Phi_{h}(\rho_{0},\theta,\phi_{0})=\sum_{l=0}^{\infty}\sum_{m=-l}^{m=l}\left[a _{l}H_{l}^{A}(\rho_{0})+b_{l}H_{l}^{B}(\rho_{0})\right]Y_{lm}(\theta,\phi_{0})\,. \tag{2.6}\] The indexes \((l,m)\) denote quantum numbers associated to isometries of the internal geometry. The specific form of \(H_{l}^{A},H_{l}^{B}\) is not too important, as they are functions of the fixed radial D-brane position \(\rho_{0}\). Considering for definiteness the values \((l,m)=(0,0),(1,0),(2,0),(3,0)\), we get 1 Footnote 1: In [35, 36], only the solutions for \((l,m)=(0,0),(1,0)\), were kept, so that \(\Phi_{h}\) took the form \(\Phi_{h}=A_{1}(\rho_{0})+A_{2}(\rho_{0})\,\cos\theta\). \[\Phi_{h}=A_{1}(\rho_{0})+A_{2}(\rho_{0})\,\cos\theta+A_{3}(\rho_{0})\,\cos^{2} \theta+A_{4}(\rho_{0})\,\cos^{3}\theta\,. \tag{2.7}\] For suitable choices of the coefficients, the potential for the canonically normalized field \(\phi\) can be arranged to become \[V_{1}(\phi)=V_{0_{\rm ede}}\left(1-\cos[\kappa\,\phi/f_{1}]\right)^{3}\,. \tag{2.8}\] The quantity \(f_{1}\) is an axion decay constant: we express it in terms of a dimensionless number, pulling out a factor of \(\kappa\) in eq (2.8). This is the structure of the potential recently introduced for the so called _early dark energy_[41, 42] to relax the \(H_{0}\) discrepancy. We will discuss this topic in more detail in section 3.2. Besides the term in eq (2.8), there may be additional non-perturbative contributions to the effective potential, originating from bulk physics, which generate extra terms periodic in \(\phi\). 
We assume that such contributions are present, and we include an additional term to the total axion potential, as [43, 44]: \[V_{2}(\phi)=V_{0_{\rm de}}(1-\cos[\kappa\phi/f_{2}])\,, \tag{2.9}\] with \(f_{2}\) an extra dimensionless decay constant. To summarize, in what follows we consider the following total potential for the scalar field as a sum of two independent contributions: \[V(\phi) = V_{1}(\phi)+V_{2}(\phi)\,, \tag{2.10}\] \[= V_{0_{\rm ede}}\left(1-\cos[\kappa\phi/f_{1}]\right)^{3}+V_{0_{\rm de}}\left(1-\cos[\kappa\phi/f_{2}]\right)\,. \tag{2.11}\] We plug the potential (2.11) in the total action (2.2), and study the consequences of this system for the cosmological evolution of our universe. ### Evolution equations The equations of motion for the scalar field and the scale factor in a Friedmann-Lemaitre-Robertson-Walker metric, as derived from the Einstein-frame action (2.2), (2.3), are obtained to be [17, 18, 19]: \[H^{2} = \frac{\kappa^{2}}{3}\,\frac{(1+\lambda)}{B}\rho\,, \tag{2.12a}\] \[H_{{}_{N}} = -H\left[\frac{3\,B}{2(1+\lambda)}\,(1+w)+\frac{\varphi_{{}_{N}}^{2}}{2}\,\gamma\right], \tag{2.12b}\] \[\varphi_{{}_{NN}}\left[1+\frac{\gamma^{-1}}{M^{4}}\,\frac{3\,B\,H^{2}}{\kappa^{2}(1+\lambda)}\right]+3\,\varphi_{{}_{N}}\left[\gamma^{-2}-\frac{w}{M^{4}\,\gamma}\,\frac{3\,B\,H^{2}}{\kappa^{2}(1+\lambda)}\right]+\frac{H_{{}_{N}}}{H}\,\varphi_{{}_{N}}\left[1+\frac{\gamma^{-1}}{M^{4}}\,\frac{3\,B\,H^{2}}{\kappa^{2}(1+\lambda)}\right]+\frac{3\,B\,\lambda}{\gamma^{3}\,(1+\lambda)}\,\frac{V_{,\varphi}}{V}=0, \tag{2.12c}\] where the subindex \(N\) indicates derivatives with respect to the number of e-folds \(dN=Hdt\). For convenience we make use of the dimensionless scalar quantity \(\varphi\,\equiv\,\kappa\,\phi\). We introduce the scalar-dependent Lorentz factor \(\gamma\), the quantity that characterizes DBI models: \[\gamma^{-2}=1-\frac{H^{2}}{M^{4}\kappa^{2}}\,\varphi_{{}_{N}}^{2}, \tag{2.13}\] as well as the combinations \[B = 1-\frac{\gamma^{2}\,\varphi_{{}_{N}}^{2}}{3\,(\gamma+1)}\,, \tag{2.14}\] \[\lambda = \frac{V}{\rho}. \tag{2.15}\] The parameter \(\lambda\) is the characteristic quantity that controls the size of the axionic potential term with respect to the total energy density. Since we are interested in comparing the expansion rates between our modified cosmological evolution and standard \(\Lambda\)CDM cosmology, we work in the Jordan frame, where energy-momentum and entropy are conserved (see the discussion in [19]). The energy density, pressure and equation of state in this frame read \[\tilde{\rho}=\gamma^{-1}\,\rho,\qquad\tilde{p}=\gamma\,P,\qquad\tilde{w}=w\,\gamma^{2}, \tag{2.16}\] where the non-tilded quantities are computed in the Einstein frame. Moreover, the quantity \(\tilde{\rho}=\sum_{i}\tilde{\rho}_{i}\) is the total background energy density: the index \(i\) runs over matter and radiation, and \(\tilde{\omega}\) takes into account the degrees of freedom at a given temperature during cosmic evolution (see e.g. [19]). The departure from the standard cosmological evolution can thus be parameterized by the ratio of Hubble parameters in our scalar-tensor gravity, with respect to its value in General Relativity (GR), i.e. in the absence of scalar contributions: \[\frac{\tilde{H}}{H_{GR}}=\frac{\gamma^{3/2}(1+\lambda)^{1/2}}{B^{1/2}}\,. \tag{2.17}\] The evolution equations (2.12a)-(2.12c) contain contributions that modify the cosmological evolution with respect to \(\Lambda\)CDM.
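To build some intuition for eq (2.17), the following minimal sketch (ours; the function and variable names are invented) evaluates the enhancement ratio \(\tilde{H}/H_{GR}\) from eqs (2.14) and (2.17) for a few values of the Lorentz factor, illustrating that for negligible \(\varphi_{N}^{2}\) and \(\lambda\) the ratio grows as \(\gamma^{3/2}\).

```c
#include <stdio.h>
#include <math.h>

/* Enhancement ratio H~/H_GR of eq (2.17), using B from eq (2.14).
   Inputs: the DBI Lorentz factor gamma of eq (2.13), phiN2 = (dphi/dN)^2,
   and lambda = V/rho of eq (2.15). All quantities dimensionless. */
static double enhancement(double gamma, double phiN2, double lambda) {
    double B = 1.0 - gamma * gamma * phiN2 / (3.0 * (gamma + 1.0)); /* eq (2.14) */
    return pow(gamma, 1.5) * sqrt(1.0 + lambda) / sqrt(B);          /* eq (2.17) */
}

int main(void) {
    /* For small phiN2 and lambda, H~/H_GR scales as gamma^(3/2). */
    for (double g = 1.0; g <= 1000.0; g *= 10.0)
        printf("gamma = %7.1f  ->  H~/H_GR = %g\n", g, enhancement(g, 1e-6, 0.0));
    return 0;
}
```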
Any early-universe modification from the standard cosmological evolution should not occur during or after Big-Bang Nucleosynthesis (BBN), to avoid spoiling its successful predictions. Hence we require that any deviations from \(\Lambda\)CDM terminate before the onset of BBN: i.e., when the temperature of the universe reaches values around \(T\sim\) MeV. We also investigate, however, how our system leads to a late scalar-tensor contribution closely mimicking the cosmological constant of \(\Lambda\)CDM, driving the current acceleration of our universe. The structure of the evolution equations (2.12a)-(2.12c) is such that any departure from the equation-of-state value \(\tilde{w}=1/3\) does not give a noteworthy effect in the scalar evolution 2. In fact, in our case the most important effects in the very early evolution of the axion field \(\phi\) - when the potential term contributions are negligible - are associated with the DBI form of its kinetic terms 3. During this phase, the amplitude of primordial gravitational waves is enhanced. As the size of the scalar potential \(V(\phi)\) becomes important, additional interesting phenomenological consequences occur. These topics are the subjects of the next section. Footnote 2: In contrast to the conformal case [13, 45]. Footnote 3: The initial phase of DBI kination is distinct from kination models considered in the literature (see e.g. [46, 47, 48, 49]). ## 3 Phenomenology of the scalar-tensor theory The cosmological evolution of the axionic field \(\phi\) has several interesting phenomenological consequences. We assume that the initial conditions for \(\phi\) are set at very high temperatures, some time after cosmic inflation. We select them such that the scalar has an initial kinetic-driven dynamics, capable of enhancing the spectrum of gravitational waves - see section 3.1. The DBI kination phase is smoothly followed by a potential-driven phase, during which the scalar dynamics sources phases of early and late dark energy domination - see section 3.2. ### Enhancing the gravitational wave signal at PTA scales We show how an early modification of standard evolution associated with the DBI-type action (2.2) amplifies the amplitude of inflationary gravitational waves at PTA scales. We exploit the fact that this quantity is affected by an early-time non-standard cosmological evolution. Figure 1: **Left panel:** Evolution of the Hubble parameter in our scalar-tensor set-up (red line) and in the \(\Lambda\)CDM case (blue line), plotted as functions of temperature. **Right panel:** Evolution of the Lorentz factor \(\gamma\) starting from the initial conditions in Table 1. The scalar-tensor dynamics, as controlled by the evolution equations of section 2, starts soon after inflation ends. We assume that initial conditions are chosen such that initially the scalar is driven by the DBI kinetic terms only, as appearing in action (2.2), with negligible contribution from the potential terms (see e.g. [17, 18, 19]). The parameter controlling the kinetic part of the action is \(M\), whose value can be chosen together with the initial conditions for \(H_{i}\), \(\varphi_{i}\), \(\varphi_{N}^{i}\) - the last quantity entering in the initial value \(\gamma_{i}\) for the DBI Lorentz parameter of eq (2.13). The initial value for \(H\) is determined as described in Appendix A, starting from an initial value for the DBI parameter \(\gamma_{i}\) of order \(\mathcal{O}(1)\).
Once \(\varphi_{N}^{i}\) is fixed along with an initial temperature \(T_{i}\) - which is associated with the initial value of the scale factor - the value of \(M\) is bounded from below by requiring that the solutions for \(H^{2}\) be real and positive and such as not to spoil the predictions of BBN. Assuming that entropy is conserved, the relation between the temperature of the universe \(T\) and the scale factor \(a\) is \[\frac{a}{a_{0}}\,=\,\left(\frac{g_{\ast s,0}}{g_{\ast s}}\right)^{1/3}\,\frac{ T_{0}}{T}\,, \tag{3.1}\] where the index \(0\) indicates quantities evaluated today. We select initial conditions as in Table 1. The initial conditions for \(\varphi\) and the Hubble parameter are chosen such as to lead to an initial steady growth of the DBI Lorentz factor \(\gamma\) of eq (2.13), and a transitory large deviation of the Hubble parameter from its GR value. See Fig 1. We select the parameter \(M\) demanding that the scalar evolution does not interfere with BBN, which happens around 1 MeV - see Fig 1. The value of \(M\) turns out to be of the order of the QCD scale of 170 MeV. Recall that we work in the Jordan frame - see section 2 - hence with tilded quantities. The initial enhancement of the Lorentz factor, as well as the early modifications of the Hubble parameter, leads to an amplification of inflationary gravitational waves. The fractional energy density of primordial gravitational waves measured today is given by (we follow the treatments in [50, 51, 52, 53]): \[\tilde{\Omega}_{\rm GW}^{0}(k) \equiv \frac{1}{\rho_{c}^{0}}\,\frac{d\,\tilde{\rho}_{\rm GW}^{0}(k)}{d \ln k} \tag{3.2}\] \[\simeq \frac{1}{24}\,\mathcal{P}_{T}(k)\,\left(\frac{\tilde{a}_{\rm hc} }{\tilde{a}_{0}}\right)^{4}\,\left(\frac{\tilde{H}_{\rm hc}}{\tilde{H}_{0}} \right)^{2} \tag{3.3}\] where \(\mathcal{P}_{T}\) is the primordial inflationary tensor spectrum, and the suffix 'hc' indicates horizon crossing time for the mode \(k\). The quantity \(\mathcal{P}_{T}\) is \[\mathcal{P}_{T}(k)\,=\,\frac{2\,H^{2}}{\pi^{2}\,M_{\rm Pl}^{2}}\Big{|}_{k=aH}\,, \tag{3.4}\] \begin{table} \begin{tabular}{|c|c|c||c|c|} \hline \(\varphi_{i}\) & \(\varphi_{N}^{i}\) & \(H_{i}\) & \(T_{i}\) & \(M\) \\ \hline \hline 0.2 & \(3.9\times 10^{-7}\) & \(3.75359\times 10^{-13}\) GeV & 499.8043 GeV & 765 MeV \\ \hline \end{tabular} \end{table} Table 1: Initial conditions and disformal scale (recall that \(\varphi\) is dimensionless and measured in Planck units). and we take its amplitude at CMB scales to be \({\cal P}_{T}\,=\,rA_{S}\), with \(A_{S}=2.1\times 10^{-9}\). For simplicity we assume that \(r\) saturates the current upper bound \(r=0.036\) provided by the BICEP/Keck collaboration [54]. Formula (3.3) indicates that any deviation of the cosmological evolution from standard \(\Lambda\)CDM can change the predictions for \(\tilde{\Omega}^{0}_{\rm GW}(k)\), and possibly amplifies the spectrum of inflationary GW. In fact, we make use of the evolution equations (2.12a)-(2.12c), and re-express \(\tilde{\Omega}^{0}_{\rm GW}\) as \[h^{2}\,\tilde{\Omega}^{0}_{\rm GW}\,=\,\left(\frac{{\cal P}_{T}}{24}\right)\, \left(\frac{\tilde{a}}{\tilde{a}_{0}}\right)^{4}\,\frac{\gamma^{3}\,H_{\rm GR} ^{2}}{B\,(H_{0}/h)^{2}}\,. \tag{3.5}\] The quantity \(\gamma\) is given in eq (2.13), while \(B\) in eq (2.14). \(H_{\rm GR}\) corresponds to the GR Hubble parameter in absence of scalar field contributions. 
Expression (3.5) shows that an enhancement of the DBI Lorentz factor \(\gamma\), and a modification of the Hubble parameter with respect its GR value influence the scale-dependence of \(\Omega^{0}_{\rm GW}\). We can express this quantity as function of frequency \(f=2\pi\,k\,a_{0}\), through the formula [51, 52, 55] \[f\,=\,2.41473\times 10^{23}\,\left(\frac{T_{0}}{T_{\rm hc}}\right)\,\left( \frac{g_{*s,0}}{g_{*s,{\rm hc}}}\right)^{1/3}\,\sqrt{\frac{8\pi\rho_{\rm hc}}{ 3M_{\rm Pl}^{2}}}\,{\rm Hz}\,, \tag{3.6}\] where recall that hc is the horizon crossing scale of the mode \(k\). We represent in Fig 2 the GW spectrum obtained by numerically solving the evolution equations of section 2, and plugging the results in eq (3.5). The initial conditions in Table 1 lead to a rapid, transient increase of \(\gamma\), and allow us to amplify the GW signal at PTA frequencies. In fact, the energy density associated with primordial gravitational waves is raised by several orders of magnitude with respect to its standard value, for a frequency around the \(10^{-9}-10^{-8}\) Hz band that is probed by PTA experiments. The frequency profile of the spectrum acquires a broken power-law shape. It initially raises as \(f^{2}\), to then grow as \(f^{5}\) up to the peak, and then decreases as \(f^{-3}\). The peak amplitude is of the same order as the value detected by the NANOGrav collaboration [20]. However, the NANOGrav value we compare with is based on a fiducial power-law model, and a more sophisticated data analysis would be needed for comparing our broken power-law shape with the amplitude obtained by PTA data. In fact, [20] also provides a brief analysis of broken power-law models, providing the best-fit value for the break of the frequency profile: our result is consistent with their value. Figure 2 also contains the sensitivity curves for the NANOGrav experiment, as well as other detectors for reference. The sensitivity curves are built with the broken power-law sensitivity (BPLS) curve technique, introduced in [19] as an extension of the traditional power-law sensitivity curves of [58]. (Our definition and methods to obtain BPLS is slightly different from [59].) The BPLS curve allows one to visually realise whether a broken power-law signal can be detected by a given experiment: our profile for \(\tilde{\Omega}_{\rm GW}h^{2}\) enters into the sensitivity curve of NANOGrav, showing that the scalar-tensor theory we described allows us to amplify the primordial spectrum of inflationary tensor fluctuations at a level detectable by PTA experiments. We conclude that our signal might contribute to the stochastic GW background recently detected by PTA experiments [20, 21, 22, 23]. Figure 2: The profile of the primordial GW energy density of eq (3.5) in our DBI-driven scalar-tensor system (red line), as compared with the standard prediction in \(\Lambda\)CDM (blue line). The dashed vertical line indicates the frequency corresponding to the initial temperature \(T_{i}\) at which the scalar evolution starts. The sensitivity curves for various experiments are plotted using the notion of broken power-law sensitivity (BPLS) curve [19] for ET-D, LISA, BBO, and DECIGO experiments with the condition of \(\text{SNR}\geq 10\). For NANOGrav, the sensitivity curves have been constructed using the corresponding noise spectra for the stochastic gravitational wave background [56, 57]. 
The light violet shaded portion corresponds to the BPLS sensitivity region with \(\text{SNR}\geq 5\), while the dashed violet line denotes the boundary of the power-law sensitivity (PLS) region with \(\text{SNR}\geq 5\). The black point along with the y-axis error bar denotes the value \(\Omega_{{}_{\text{GW}}}\,h^{2}=2.6^{+4.5}_{-1.7}\times 10^{-8}\) obtained with a model of the timing-residual power spectral density with variable power-law exponent, as described by the NANOGrav collaboration [20]. This value is derived from a fiducial power-law model, which explains why it lies outside the BPLS region, but within the allowed PLS region. The x-axis error bar denotes the 90% confidence region of \(3.2^{+5.4}_{-1.2}\times 10^{-8}\) Hz for the break frequency in a broken power-law model as obtained by the NANOGrav collaboration [20]. The peak of the GW spectrum reaches a maximum value of \(\sim 1.7\times 10^{-8}\), in agreement with the above bound, while the peak in the profile occurs at around \(\sim 1.02\times 10^{-8}\) Hz. In order to compare the results of our D-brane disformal scenario with the recent NANOGrav 15-year data, we model the peak of the broken power-law GW spectrum as obtained in Fig. 2 as follows [20, 60]: \[\tilde{\Omega}_{\rm GW}^{0}\,h^{2}=A_{\rm b}^{2}\,f^{2}\left(\frac{f}{f_{\rm yr }}\right)^{\sigma-3}\left[1+\left(\frac{f}{f_{\rm b}}\right)^{1/\varepsilon} \right]^{\varepsilon(\mu-\sigma)}, \tag{3.7}\] where \(f_{\rm yr}=1/\)year, expressed in Hz. On fitting this model to our numerical results, we find the following values of the parameters: \(\log_{10}(A_{\rm b}/{\rm s})=5.279\), \(\log_{10}(f_{\rm b}/{\rm Hz})=-8.048\), \(\sigma=6.0\), \(\varepsilon=0.25\), and \(\mu=-2.0\). Keeping the spectral indices fixed, we vary \(\log_{10}(A_{\rm b}/{\rm s})\) and \(\log_{10}(f_{\rm b}/{\rm Hz})\) to arrive at the best-fit values for these quantities by using the publicly available code PTArcade [61, 62, 63]. The results have been plotted in Fig. 3. According to these results, the Bayesian estimates for the best-fit values are: \(\log_{10}(A_{\rm b}/{\rm s})=5.242\pm 0.191\) and \(\log_{10}(f_{\rm b}/{\rm Hz})=-8.149\pm 0.144\). By comparing these limits with our aforementioned fitting parameters, we find that they are well in agreement with the NANOGrav data (a minimal numerical cross-check of this fitted profile is sketched at the end of section 3.2). Figure 3: The lower left panel in this plot shows the 68%, 95%, and 99% confidence levels of the 2-dimensional posterior distribution for the parameters of the broken power-law fitting function: \(\log_{10}(A_{\rm b}/{\rm s})\) and \(\log_{10}(f_{\rm b}/{\rm Hz})\). The other two panels show the 1-dimensional marginalized posterior distributions for these parameters. The dashed vertical lines in the plots for the 1-dimensional distributions indicate the 68%, 95%, and 99% credible intervals. The uniform priors used in this analysis are: \(\log_{10}(A_{\rm b}/{\rm s})\in[2.0,8.0]\) and \(\log_{10}(f_{\rm b}/{\rm Hz})\in[-10.0,-6.0]\). We also assume Hellings and Downs correlations between the pulsars. ### Early and late dark energy Following the initial phase of DBI kination domination enhancing the GW spectrum (see Fig 2), the DBI Lorentz factor \(\gamma\) returns to small values (see Fig 1). The subsequent scalar dynamics is smoothly matched to a second phase, mainly controlled by the potential terms of eq (2.8). The potential-dominated phase allows us to realize a scenario including both an early dark energy (EDE) epoch, as well as a late dark energy (LDE) phase which explains the present-day acceleration of the universe4.
Footnote 4: Recently, a connection between EDE and stochastic GW signals has been explored in [64]. Their framework differs from ours, since we use the properties of DBI kinetic terms for enhancing the GW spectrum – see section 2 – while they make use of the properties of the EDE axionic potential. The peak of their GW signal is far from PTA scales. The scalar potential (2.8) has been recently considered in [41, 42] as a possible way to relax the so-called \(H_{0}\) tension in cosmology, via the injection of an early period of dark energy domination, driven by the scalar \(\phi\), called _early dark energy_ (EDE) (see [65, 66] for early EDE models). The \(H_{0}\) tension refers to the discrepancy between local measurements of the Hubble parameter today from supernovae, and its inferred value from cosmic microwave background (CMB) measurements assuming the standard \(\Lambda\)CDM cosmological model (for recent reviews on the \(H_{0}\) tension and solutions see [26, 27]; for recent reviews on EDE models see [67, 68]). As discussed in section 2, in our case the potential also includes an additional term, leading to a LDE domination driven by the axion [69]. In total, the potential expressed in terms of the dimensionless scalar field \(\varphi\) can be written as5 (recall that the decay constants are dimensionless and given in Planck units): Footnote 5: Possible embeddings of EDE from string theory based on closed string moduli have been discussed recently in [70, 71]. \[V(\varphi)=V_{0_{\rm ede}}\left(1-\cos[\varphi/f_{1}]\right)^{3}+V_{0_{\rm de}}\left(1-\cos[\varphi/f_{2}]\right)\,. \tag{3.8}\] The first term is responsible for the EDE epoch, and we select \(V_{0_{\rm ede}}\sim{\rm eV}^{4}\); the second term leads to late time acceleration, hence \(V_{0_{\rm de}}\sim(0.002\,{\rm eV})^{4}\). The axionic evolution controlled by the potential terms is preceded by the DBI kinetic evolution of section 3.1: the requirement of matching the two phases partially determines the parameters of the model, as in Table 2. Interestingly, this matching connects the early DBI kination phase, which enhances the GW signal, to a later evolution contributing to the physics of the dark universe. The value for the EDE decay constant \(f_{1}\) turns out to be sub-Planckian, while we choose the value of the LDE decay constant \(f_{2}\) as \[f_{2}\simeq\frac{2q}{2m+1}\,f_{1}\,, \tag{3.9}\] with \(q,m\) being integers. \begin{table} \begin{tabular}{|c||c|c|c|} \hline \(V_{0_{\rm ede}}\) & \(V_{0_{\rm de}}\) & \(f_{1}\) & \(f_{2}\) \\ \hline \hline \((6.32\times 10^{-10}\,\)GeV\()^{4}\) & \(1.45247\times 10^{-47}\,\)GeV\({}^{4}\) & 0.3569 & 0.1044 \\ \hline \end{tabular} \end{table} Table 2: The values of the parameters in the scalar potential (3.8). The quantity \(q\) is determined by the periodicity of the EDE potential, \(2\pi f_{1}\), and the value of \(\varphi\) is determined at an epoch where the DBI kinetic effects end. By choosing a sufficiently large \(m\), the value of \(f_{2}\) relevant for LDE can be made sufficiently small in Planck units, potentially satisfying the constraints from the weak gravity conjecture for EDE [72]. For the values in Table 2 we have \(q=6\) and we choose \(m=20\), so that \(f_{2}=(12/41)\,f_{1}\simeq 0.1044\). Figure 4 shows the evolution of the scalar field as a function of the universe temperature, starting from the initial conditions after inflation of Table 1. During the initial epoch of DBI kination, the field is unaffected by the potential.
The axion potential becomes relevant at around \(T\sim\) eV, where the field value remains frozen by the Hubble friction, acting as a cosmological constant: this corresponds to the phase of EDE domination. In order to satisfy current constraints, and help in ameliorating the Hubble tension, the fractional energy density contributed by the EDE component, \[f_{\rm EDE}(z)\equiv\frac{\rho_{\rm EDE}}{\rho_{T}}\,, \tag{3.10}\] should not exceed values around 12% (see e.g. [67, 68]). This requirement is satisfied in our set-up, see Fig 5. It would nevertheless be important to analyse in more detail whether our modified evolution equation is in agreement with current CMB constraints: this goes beyond the scope of this work, and we defer it to a separate study. As the temperature of the universe decreases, the axion field rolls to the minimum of the EDE potential, around which it oscillates until it feels the second LDE potential in eq (3.8). This leads to the current cosmological acceleration. Figure 4: The evolution of the scalar field \(\varphi\) as a function of temperature in our set-up. The initial conditions and parameter values can be found in Tables 1 and 2. In Figure 6 we plot the evolution of the separate contributions to the total energy density of the universe from radiation, matter, and the axion field. The figure shows three epochs driven by the D-brane axionic field: DBI kination, EDE, and LDE. During the first phase of DBI kination the scalar contribution briefly dominates the energy density; in the phase of EDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is again almost constant, eventually coming to dominate and drive the current cosmological acceleration.
and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of LDE the scalar energy density is almost constant, and subdominant; in the last phase of L the scalar energy density is almost constant, and subdominant; in the last scalar energy density is again constant, and eventually starts to dominate the total energy density. ## 4 Discussion The recent detection of a stochastic GW background by several PTA collaborations raises the question of its origin. In this work we explored the possibility that it is sourced by cosmic inflation, a well-known source of primordial GW. 
The inflationary GW background amplitude is enhanced at PTA scales by a non-standard early cosmological evolution, driven by a kinetically-dominated scalar-tensor dynamics motivated by string theory. The resulting GW energy density has a distinctive broken power-law frequency profile, entering the PTA band with a peak amplitude consistent with the recent GW detections. Our framework - besides providing a possible explanation for the recent PTA results - also sheds light on other well-known cosmological problems. It provides a realization of an early dark energy scenario aimed at addressing the \(H_{0}\) tension, and a late dark energy model which explains the current cosmological acceleration without the need for a cosmological constant. Several questions are left for further exploring this connection between GW in the PTA band and the physics of the dark universe. It will be interesting to test in detail our specific broken power-law frequency profile for the GW energy density against the actual PTA data that are currently being released. It would also be important to develop more careful tests of the early dark energy scenario we propose, for example comparing it against constraints from CMB observations, to further restrict our parameter space. This analysis would require modifying existing CMB codes to accommodate the details of our set-up. Since the characteristic energy scale of the mechanism we propose is set around the scale of the QCD transition, our scenario can have further consequences for the predictions of the dark matter relic density and sterile neutrinos (see e.g. [14]). We leave some of these investigations to a forthcoming work.

### Acknowledgments

We are partially funded by the STFC grant ST/T000813/1. For the purpose of open access, the authors have applied a Creative Commons Attribution licence to any Author Accepted Manuscript version arising.

## Appendix A Determining the initial condition for \(H\)

In this appendix we briefly describe how we fix the initial conditions for the Hubble parameter in our analysis. We define the Hubble parameter in the standard \(\Lambda\)CDM scenario as \[H_{\Lambda\rm CDM}^{2}=\frac{\kappa^{2}}{3}\,\left(\tilde{\rho}_{\rm R}+\tilde{\rho}_{\rm M}+\tilde{\rho}_{\Lambda}\right)=\frac{\kappa^{2}}{3}\,\tilde{\rho}_{\rm bg}.\] (A.1) The Hubble parameter in the disformal DBI case can be expressed as \[H^{2}=\frac{\kappa^{2}}{3}\,\left(\rho_{\varphi}+\rho_{\rm bg}\right)=\frac{\kappa^{2}}{3\,B}\,\left(\gamma\,\tilde{\rho}_{\rm bg}+V\right).\] (A.2) Using the expression ( 2 ), we can write \[H^{2}\left[1-\frac{\gamma^{2}\,\varphi_{N}^{2}}{3\,(\gamma+1)}\right]=\frac{\kappa^{2}}{3}\,\left(\gamma\,\tilde{\rho}_{\rm bg}+V\right).\] (A.3) Substituting the relation ( 2 ), we get the following quartic equation in \(\gamma\): \[\gamma^{4}\left(M^{-4}\,\varphi_{N}^{2}\,\tilde{\rho}_{\rm bg}+\varphi_{N}^{2}\right)+\gamma^{3}\left(M^{-4}\,\varphi_{N}^{2}\,V-3\right)+\gamma^{2}\left(M^{-4}\,\varphi_{N}^{2}\,\tilde{\rho}_{\rm bg}-3-\varphi_{N}^{2}\right)+\gamma\left(M^{-4}\,\varphi_{N}^{2}\,V+3\right)+3=0.\] (A.4) This equation can be solved to obtain four initial values of \(\gamma\) for a given set of initial conditions. For our system, we choose the value of \(\gamma_{i}\) to be real and of order one. With this choice of \(\gamma_{i}\), we can then obtain the value of \(H_{i}\) from the relation (2.13).
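To make this procedure concrete, here is a minimal numerical sketch (our illustration, not code from the paper): it solves the quartic (A.4) with `numpy` and keeps a real root of order one. The parameter values in the example call are placeholders, not those of Tables 1 and 2.

```python
# Minimal sketch of the Appendix A procedure: solve the quartic (A.4)
# for gamma numerically, then pick a real root of order one.
import numpy as np

def gamma_initial(phi_N, rho_bg, V, M):
    """Return a real, order-one root of the quartic (A.4) in gamma."""
    c = M**-4 * phi_N**2
    coeffs = [c * rho_bg + phi_N**2,       # gamma^4
              c * V - 3.0,                 # gamma^3
              c * rho_bg - 3.0 - phi_N**2, # gamma^2
              c * V + 3.0,                 # gamma^1
              3.0]                         # gamma^0
    roots = np.roots(coeffs)
    # keep real roots (tiny imaginary parts come from round-off);
    # assumes at least one real root exists, as stated in the appendix
    real = roots[np.abs(roots.imag) < 1e-10].real
    return real[np.argmin(np.abs(real - 1.0))]

# Placeholder values for illustration only; H_i then follows from (2.13).
print(gamma_initial(phi_N=0.1, rho_bg=1.0, V=0.5, M=1.0))
```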
2308.10600
Fixed-Parameter Algorithms for Computing RAC Drawings of Graphs
In a right-angle crossing (RAC) drawing of a graph, each edge is represented as a polyline and edge crossings must occur at an angle of exactly $90^\circ$, where the number of bends on such polylines is typically restricted in some way. While structural and topological properties of RAC drawings have been the focus of extensive research, little was known about the boundaries of tractability for computing such drawings. In this paper, we initiate the study of RAC drawings from the viewpoint of parameterized complexity. In particular, we establish that computing a RAC drawing of an input graph $G$ with at most $b$ bends (or determining that none exists) is fixed-parameter tractable parameterized by either the feedback edge number of $G$, or $b$ plus the vertex cover number of $G$.
Cornelius Brand, Robert Ganian, Sebastian Röder, Florian Schager
2023-08-21T09:56:56Z
http://arxiv.org/abs/2308.10600v2
# Fixed-Parameter Algorithms for Computing RAC Drawings of Graphs

###### Abstract

In a right-angle crossing (RAC) drawing of a graph, each edge is represented as a polyline and edge crossings must occur at an angle of exactly \(90^{\circ}\), where the number of bends on such polylines is typically restricted in some way. While structural and topological properties of RAC drawings have been the focus of extensive research, little was known about the boundaries of tractability for computing such drawings. In this paper, we initiate the study of RAC drawings from the viewpoint of parameterized complexity. In particular, we establish that computing a RAC drawing of an input graph \(G\) with at most \(b\) bends (or determining that none exists) is fixed-parameter tractable parameterized by either the feedback edge number of \(G\), or \(b\) plus the vertex cover number of \(G\).

Keywords: RAC drawings, fixed-parameter tractability, vertex cover number, feedback edge number

## 1 Introduction

Today we have access to a wealth of approaches and tools that can be used to draw planar graphs, including, e.g., Fáry's Theorem [29] which guarantees the existence of a planar straight-line drawing for every planar graph and the classical algorithm of de Fraysseix, Pach and Pollack [28] that allows us to obtain straight-line planar drawings on an integer grid of quadratic size. However, much less is known about the kinds of drawings that can be achieved for non-planar graphs. The study of combinatorial and algorithmic aspects of such drawings lies at the heart of a research direction informally referred to as "beyond planarity" (see, e.g., the relevant survey and book chapter [21, 18]). An obvious goal when attempting to visualize non-planar graphs would be to obtain a drawing which minimizes the total number of crossings. This question is widely studied within the context of the crossing number of graphs, and while obtaining such a drawing is \(\mathsf{NP}\)-hard [33] it is known to be fixed-parameter tractable when parameterized by the total number of crossings required thanks to a seminal result of Grohe [34]. However, research over the past twenty years has shown that drawings which minimize the total number of crossings are not necessarily optimal in terms of human readability. Indeed, the topological and geometric properties of such drawings may have a significantly larger impact than the total number of crossings, as was observed, e.g., by the initial informal experiment of Mutzel [41] and the pioneering set of user experiments carried out by the graph drawing research lab at the University of Sydney [36, 38, 37]. The latter works demonstrated that "large-angle drawings" (where edge crossings have larger angles) are significantly easier to read than drawings where crossings occur at acute angles. Motivated by these findings, in 2011 Didimo, Eades, and Liotta investigated graph drawings where edge crossings are only permitted at \(90^{\circ}\) angles [20] (see Figure 1 for an illustration). Today, these _right-angle crossing_ (or _RAC_) drawings are among the best known and most widely studied beyond-planar drawing styles [21, 18], with the bulk of the research to date focusing on understanding necessary and sufficient conditions for the existence of such drawings as well as the space they require [3, 4, 16, 17, 8, 1, 2, 27].
A prominent theme in the context of RAC drawings concerns the number of times edges are allowed to be bent: it has been shown that every graph admits a RAC drawing if each edge can be bent 3 times [20], and past works have considered straight-line RAC drawings as well as RAC drawings where the number of bends per edge is limited to 1 or 2. And yet, in spite of the considerable body of work concentrating on combinatorial and topological properties of such drawings, so far almost nothing is known about the complexity of computing a RAC drawing of a given graph. Indeed, while the problem of determining whether a graph admits a straight-line RAC drawing is NP-hard [4] and was recently shown to be \(\exists\mathbb{R}\)-complete [44], there is a surprising lack of known algorithms that can compute such drawings for special classes of graphs or, more generally, parameterized algorithms that exploit quantifiable properties of the input graph to guarantee the tractability of computing RAC drawings (either without or with limited bends). This gap in our understanding starkly contrasts the situation for so-called 1-planar drawings--another prominent beyond-planar drawing style for which a number of fixed-parameter algorithms are known [6, 25, 24]--as well as recent advances mapping the boundaries of tractability for other graph drawing problems [35, 9, 10].

Figure 1: Examples of RAC drawings.

**Contribution.** We initiate an investigation of the parameterized complexity of determining whether a graph \(G\) admits a RAC drawing. Given the well-motivated focus of previous works on limiting the amount of bends in such drawings, an obvious first choice for a parameterization would be to consider an upper bound \(b\) on the total number of bends permitted in the drawing. However, on its own such a parameter cannot suffice to achieve fixed-parameter tractability in view of the NP-hardness of the problem for \(b=0\), i.e., for straight-line RAC drawings. Hence, we turn towards identifying structural parameters of \(G\) that guarantee fixed-parameter RAC drawing algorithms. While established decompositional parameters such as treewidth [43] and clique-width [14] represent natural choices of parameterizations for purely combinatorial problems, the applicability of these parameters in solving graph drawing problems is complicated by the inherent difficulty of performing dynamic programming when the task is to obtain a drawing of the graph. This is why the parameters often used in this setting are non-decompositional, with the most notable examples being the _vertex cover number_ **vcn** (i.e., the size of a minimum vertex cover) and the _feedback edge number_ **fen** (i.e., the edge deletion distance to acyclicity); further details are available in the overview of related work below. As our main contributions, we provide two novel parameterized algorithms:

1. a fixed-parameter algorithm for determining whether \(G\) admits a RAC drawing with at most \(b\) bends when parameterized by \(\mathbf{fen}(G)\);
2. a fixed-parameter algorithm for determining whether \(G\) admits a RAC drawing with at most \(b\) bends when parameterized by \(\mathbf{vcn}(G)+b\).

Both of the presented algorithms are constructive, meaning that they can also output a RAC drawing of the graph if one exists. The core underlying technique used in both proofs is that of _kernelization_, which relies on defining reduction rules that can provably reduce the size of the instance until it is upper-bounded by a function of the parameter alone.
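To make the kernelization pattern concrete, here is a minimal sketch (our illustration, not code from the paper) of the usual apply-rules-to-a-fixed-point loop, instantiated with the degree-one pruning rule that is proved safe in Section 4; the `networkx` dependency is an assumption.

```python
# Minimal sketch of the kernelization pattern: exhaustively apply a safe
# reduction rule. Here the rule is "remove degree-one vertices", which
# Observation 2 in Section 4 shows preserves b-bend RAC drawability.
import networkx as nx

def prune_degree_one(G: nx.Graph) -> nx.Graph:
    G = G.copy()
    stack = [v for v in G.nodes if G.degree(v) == 1]
    while stack:
        v = stack.pop()
        if v in G and G.degree(v) == 1:
            (u,) = G.neighbors(v)  # the unique neighbor of v
            G.remove_node(v)
            if G.degree(u) == 1:   # u may become prunable in turn
                stack.append(u)
    return G
```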
While kernelization is a well-established and generic technique, its use here requires non-trivial insights into the structural properties of optimal solutions in order to carefully identify parts of the graph which can be simplified without impacting the final outcome. We prove that both algorithms in fact hold for the more general case where each edge is marked with an upper bound on the number of bends it can support, allowing us to capture the previously studied 1- and 2-bend RAC drawings. Moreover, we show that the latter algorithm can be lifted to establish fixed-parameter tractability when parameterized by \(b\) plus the _neighborhood diversity_ (i.e., the number of maximal modules) of \(G\) [40, 30, 39]. In the concluding remarks, we also discuss possible extensions towards more general parameterizations and apparent obstacles on the way to such results.

**Related Work.** Didimo, Eades and Liotta initiated the study of RAC drawings by analyzing the interplay between the number of bends per edge and the total number of edges [20]. Follow-up works also considered extensions and variants of the initial concept, such as upward RAC drawings [3], 2-layer RAC drawings [16, 17] and 1-planar RAC drawings [8]. More recent works investigated the existence of RAC drawings for bounded-degree graphs [2], and RAC drawings with at most one bend per edge [1]. It is known that every graph admits a RAC drawing with at most three bends per edge [20], and that determining whether a graph admits a RAC drawing with zero bends per edge is \(\mathsf{NP}\)-hard [4]. The vertex cover number has been used as a structural graph parameter to tackle a range of difficult problems in graph drawing as well as other areas. Fixed-parameter algorithms for drawing problems based on the vertex cover number are known for, e.g., computing the obstacle number of a graph [5], computing the stack and queue numbers of graphs [9, 10], computing the crossing number of a graph [35] and 1-planarity testing [6]. Similarly, the feedback edge number (sometimes called the _cyclomatic number_) has been used to tackle problems which are not known to be tractable w.r.t. treewidth, including 1-planarity testing [6] and the Edge Disjoint Paths problem [32] (see also Table 1 in [31]). These two parameterizations are incomparable: there are problems which remain \(\mathsf{NP}\)-hard on graphs of constant vertex cover number while being \(\mathsf{FPT}\) when parameterized by the feedback edge number (such as Edge Disjoint Paths [26, 32]), and vice-versa. That being said, the existence of a fixed-parameter algorithm parameterized by the feedback edge number is open for a number of graph drawing problems that are known to be \(\mathsf{FPT}\) w.r.t. the vertex cover number; examples include computing the aforementioned stack, queue and obstacle numbers.

## 2 Preliminaries

We assume familiarity with standard concepts in graph theory [22]. All graphs considered in this manuscript are assumed to be simple and undirected.

**RAC Drawings.** Given a graph \(G=(V,E)\) on \(n\) vertices with \(m\) edges, a _drawing_ of \(G\) is a mapping \(\delta\) that takes vertices \(V\) to points in the Euclidean plane \(\mathbb{R}^{2}\), and assigns to every edge \(e=uv\in E\) the image of a simple plane curve \([0,1]\rightarrow\mathbb{R}^{2}\) connecting the points \(\delta(u),\delta(v)\) corresponding to \(u\) and \(v\).
We require that \(\delta\) is injective on \(V\), and furthermore that for all vertices \(v\) and edges \(e\) not incident to \(v\), the point \(\delta(v)\) is not contained in \(\operatorname{int}(\delta(e))\), where \(\operatorname{int}(\delta(e))\) is the image of \((0,1)\) under \(\delta\). A _polyline drawing_ of \(G\) is a drawing such that for each edge \(e\in E\), \(\delta(e)\) can be written as a union \(\delta(e)=\lambda_{1}^{e}\cup\cdots\cup\lambda_{t}^{e}\) of closed straight-line segments \(\lambda_{1}^{e},\ldots,\lambda_{t}^{e}\) such that:

* for each \(1\leq i\leq t-1\), the segments \(\lambda_{i}^{e}\) and \(\lambda_{i+1}^{e}\) intersect in precisely one of their shared end-points and moreover close an angle different than \(180^{\circ}\), and
* every other pair of segments is disjoint.

The shared intersection points between consecutive segments are called the _bends_ of \(e\) in the drawing \(\delta\). For two edges \(e\) and \(f\), their set of _crossings_ in the drawing \(\delta\) is the set \(\operatorname{int}(\delta(e))\cap\operatorname{int}(\delta(f))\). We will assume without loss of generality that any drawing \(\delta\) of \(G\) has a finite number of crossings. The central type of drawing studied in this paper are those that allow only _right-angle crossings_ between edge drawings (so-called _RAC drawings_): We say that the edges \(e,f\in E\) have a _right-angle crossing_ in a polyline drawing \(\delta\) of \(G\) if the crossing lies in the relative interiors of the respective line segments defining \(\delta(e)\) and \(\delta(f)\), and most crucially, the intersecting line segments of \(\delta(e)\) and \(\delta(f)\) are orthogonal to each other (i.e., they meet at a right angle). Let \(\delta\) be a polyline drawing of a graph, \(\beta:E\mapsto\{0,1,2,3\}\) a mapping, and \(b\in\mathbb{N}\) a number. If every crossing of \(\delta\) is a right-angle crossing, the number of bends counted over _all_ edges is at most \(b\), and every edge itself has at most \(\beta(e)\) bends, \(\delta\) is called a _\(b\)-bend \(\beta\)-restricted RAC drawing_ of \(G\). We note that

* \(0\)-bend RAC drawings are straight-line RAC drawings (for any choice of \(\beta\)),
* \(m\)- and \(2m\)-bend drawings with \(\beta(e)=1\) or \(\beta(e)=2\) for each edge \(e\) give the usual notions of \(1\)-bend and \(2\)-bend RAC drawings, respectively, and
* similarly, \(3m\)-bend drawings with \(\beta(e)=3\) for each edge \(e\) give rise to the notion of \(3\)-bend RAC drawings, which exist for every graph [20].

Based on the above, we can now formally define our problem of interest:

**Bend-Restricted RAC Drawing (BRAC)**
**Input:** A graph \(G\), an integer \(b\geq 0\), and an edge-labelling \(\beta:E\mapsto\{0,1,2,3\}\).
**Question:** Does \(G\) admit a \(b\)-bend \(\beta\)-restricted RAC drawing?

It has been shown that \(b\)-bend \(\beta\)-restricted RAC Drawing is \(\exists\mathbb{R}\)-complete [44, 11] even when restricted to the case where \(b=0\). Without loss of generality, we will assume that the input graph \(G\) is connected. We remark that while BRAC is defined as a decision problem, every algorithm provided in this paper is constructive and can output a drawing as a witness for a yes-instance.

**Parameterized Algorithms.** We will not need a lot of the machinery of parameterized algorithms to state our results.
However, as it will turn out, our tractability results all come under the guise of so-called _kernelization_, which requires some context. A _parameterized problem_ is an ordinary decision problem, where each instance \(I\) is additionally endowed with a _parameter_ \(k\). Given such a parameterized problem \(\Pi\), we then say that a problem is _fixed-parameter tractable_ (\(\mathsf{FPT}\)) if there is an algorithm that, upon the input of an instance \((I,k)\) of \(\Pi\), decides whether or not \((I,k)\) is a yes-instance in time \(f(k)\cdot n^{\mathcal{O}(1)}\), where \(f\) is any computable function, and \(n=|I|\) is the encoding length of the (parameter-free) instance \(I\). This should be contrasted with parameterized problems that require time, say, \(n^{k}\) to solve, which are not fixed-parameter tractable. For instance, we may ask if a graph has a vertex cover of size at most \(k\), and declare \(k\) the parameter of the instance. In this case, the problem is solvable in time \(2^{k}\cdot n^{\mathcal{O}(1)}\), and hence \(\mathsf{FPT}\); in contrast, asking for a dominating set of size \(k\) (under some complexity assumptions) requires time \(n^{\Omega(k)}\). Closer to the problems treated in this paper are structural parameterizations in the following sense: Suppose we are given a graph \(G\) and a number \(k\) such that \(G\) has a vertex cover of size at most \(k\). Can we leverage this information to solve some (other) graph problem at hand? In this case, we say that we parameterize the problem _by the vertex cover number_. When using such parameterizations in our results, we will crucially rely on the following notion: A _kernelization_ (or kernel, for short) of \(\Pi\) is a polynomial-time algorithm (in \(n\), and we may assume \(k\leq n\) holds) that takes an instance \((I,k)\) as input, and produces as output another instance \((I^{\prime},k^{\prime})\) with the following properties: there is some computable function \(g\) such that both \(k^{\prime}\) and \(|I^{\prime}|\) are bounded from above by \(g(k)\), and \((I,k)\) is a yes-instance of \(\Pi\) if and only if \((I^{\prime},k^{\prime})\) is. That is, a kernelization algorithm preprocesses instances of arbitrary size into instances that are "parameter-sized," and in particular (assuming \(\Pi\) was decidable), this implies an algorithm running in time \(n^{\mathcal{O}(1)}+h(g(k))\) for some function \(h\) (where \(h(g(k))\) is the running time of any algorithm solving instances of \(\Pi\) of size \(g(k)\)). This means in particular that \(\Pi\) is fixed-parameter tractable (and, as a standard result in parameterized algorithms, the converse of this claim holds as well). We refer to the standard textbooks [23, 15] for a general treatment of parameterized algorithms. The _feedback edge number_ of a graph \(G\), denoted \(\mathbf{fen}(G)\), is the size of a minimum edge set \(F\) such that \(G-F\) is acyclic. It is well-known that such a set \(F\) (and hence also the feedback edge number) can be computed in linear time, since \(G-F\) is a spanning tree of \(G\). The _vertex cover number_ of \(G\), denoted \(\mathbf{vcn}(G)\), is the size of a minimum vertex cover of \(G\), i.e., of a minimum set \(X\) such that \(G-X\) is edgeless. Such a minimum set \(X\) can be computed in time \(\mathcal{O}(1.2738^{|X|}+|X|\cdot|V(G)|)\) [13], and a vertex cover of size at most \(2|X|\) can be computed in linear time by a trivial approximation algorithm.
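The two primitives just mentioned are simple enough to spell out; the sketch below is our illustration (the `networkx` dependency is an assumption), not code from the paper.

```python
# Sketch of the two structural primitives used throughout: a feedback
# edge set obtained from a spanning tree, and a vertex cover of size at
# most 2*vcn(G) obtained from the endpoints of a maximal matching.
import networkx as nx

def feedback_edge_set(G: nx.Graph):
    tree = nx.minimum_spanning_tree(G)  # a spanning tree of connected G
    return [e for e in G.edges if not tree.has_edge(*e)]

def approx_vertex_cover(G: nx.Graph):
    cover = set()
    for u, v in nx.maximal_matching(G):
        cover.update((u, v))  # both endpoints of every matched edge
    return cover
```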
The third structural parameter considered here is the _neighborhood diversity_ \(\mathbf{nd}(G)\) of \(G\), which is the minimum size of a partition \(\mathcal{P}\) of \(V(G)\) such that for each \(a,b\) in the same part of \(\mathcal{P}\) it holds that \(N(a)\setminus\{b\}=N(b)\setminus\{a\}\). It is well known that each part in such a partition \(\mathcal{P}\) must be either a clique or an independent set, and such a minimum partition can be computed in polynomial time [40].

## 3 An Explicit Algorithm for BRAC

As already pointed out above, our results for fixed-parameter tractability come as kernels. While there is a generic formal equivalence between the existence of a kernel and a decidable problem being fixed-parameter tractable, this does not by itself yield explicit bounds on the running time of the algorithm that results from this generic strategy. In order to derive concrete upper bounds on the running time of our algorithms, we provide an algorithm that solves \(b\)-bend \(\beta\)-restricted RAC Drawing with a specific running time bound. We do so via a combination of branching and an encoding in the existential theory of the reals.

Theorem 1: _An instance \((G,b,\beta)\) of BRAC can be solved in time \(m^{\mathcal{O}(m^{2})}\), where \(m\) is the number of edges of \(G\)._

Proof: Observe that, without loss of generality, we may assume that \(b\leq 3m\). We begin by a branching step in which we exhaustively consider all possible allocations of the bends to edges, resulting in a total number of at most \(4^{m}\) branches (some of which will be discarded due to exceeding the bound \(b\) or violating \(\beta\)). In each branch, we alter the graph \(G\) by subdividing each edge precisely the number of times it is assumed to be bent in that branch. At this point, it remains to decide whether this new graph \(G^{\prime}\) admits a straight-line RAC drawing, where \(G^{\prime}\) has \(\mathcal{O}(m)\) edges and vertices, and we denote these by \(m^{\prime}\) and \(n^{\prime}\), respectively. To do this, one can construct a sentence in the existential theory of the reals that is true if and only if \(G^{\prime}\) admits such a drawing. The variables of the sentence will consist of \(n^{\prime}\) variable pairs \((x_{v_{1}},y_{v_{1}}),\ldots,(x_{v_{n^{\prime}}},y_{v_{n^{\prime}}})\), encoding the coordinates of the drawing of the vertices in \(\mathbb{R}^{2}\). Furthermore, for every pair of edges with endpoints \(u,v\) and \(u^{\prime},v^{\prime}\), we can formulate a condition \(\sigma(u,v,u^{\prime},v^{\prime})\Rightarrow\tau(u,v,u^{\prime},v^{\prime})\), where \(\sigma\) is a polynomial condition in \(x_{u},x_{v},x_{u^{\prime}},x_{v^{\prime}}\) encoding whether the straight-line segments corresponding to \(uv\) and \(u^{\prime}v^{\prime}\) intersect, and \(\tau\) is a polynomial condition in \(x_{u},x_{v},x_{u^{\prime}},x_{v^{\prime}}\) encoding whether these straight-line segments are perpendicular. Indeed, the former requires the addition of another \(m^{\prime 2}\) auxiliary variables in the worst case, but both conditions can be expressed by polynomials of degree two. This encoding is described in full detail by Bieker [11]. To conclude the proof, we note that an existential sentence over the reals in \(N\) variables over \(M\) polynomials of maximal degree \(D\) can be decided in time \((M\cdot D)^{\mathcal{O}(N)}\) (see, e.g., [7, Theorem 13.13]).
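The branching step of this proof is easy to make explicit. The sketch below is our illustration (not code from the paper): it enumerates all bend allocations respecting \(\beta\) and the budget \(b\), which is where the \(4^{m}\) branching bound comes from; the existential-theory-of-the-reals check is abstracted into `etr_feasible`, a placeholder callback.

```python
# Sketch of the branching step in the proof of Theorem 1: enumerate all
# allocations of bends to edges that respect beta and the budget b (at
# most 4^m branches). Deciding each branch via the existential theory of
# the reals is abstracted into the caller-supplied `etr_feasible`.
from itertools import product

def bend_allocations(edges, beta, b):
    for alloc in product(*(range(beta[e] + 1) for e in edges)):
        if sum(alloc) <= b:  # discard branches exceeding the bend budget
            yield dict(zip(edges, alloc))

def solve_brac(edges, beta, b, etr_feasible):
    # accept iff some branch's subdivided graph has a straight-line RAC drawing
    return any(etr_feasible(alloc) for alloc in bend_allocations(edges, beta, b))
```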
Note that, within essentially the same running time bound, one can also construct a representation of a solution for this system [7, Theorem 13.11].

## 4 A Fixed-Parameter Algorithm via \(\mathbf{fen}(G)\)

We begin our investigation by establishing a kernel for Bend-Restricted RAC Drawing when parameterized by the feedback edge number. Our kernel is based on the exhaustive application of two reduction rules. Let us assume we are given an instance \((G,b,\beta)\) of BRAC and that we have already computed a minimum feedback edge set \(F\) of \(G\) in linear time. The first reduction rule is trivial: we simply observe that vertices of degree one can always be safely removed since they never hinder the existence of a RAC drawing.

**Observation 2**: _Let \(v\in V(G)\) be a vertex with degree one. \(G-\{v\}\) admits a \(b\)-bend \(\beta\)-restricted RAC drawing if and only if \(G\) does as well._

Proof: Clearly, if \(G\) admits a \(b\)-bend \(\beta\)-restricted RAC drawing, then \(G-\{v\}\) does as well (one may simply remove \(v\) and its incident edge from the drawing). On the other hand, if \(G-\{v\}\) admits a \(b\)-bend \(\beta\)-restricted RAC drawing then we can extend this drawing to one for \(G\) by placing \(v\) sufficiently close to its only neighbor in a way which does not induce any additional crossings.

Iteratively applying the reduction rule provided by Observation 2 results in a graph of the form \(G^{\prime}=(V^{\prime},E^{\prime}\cup F)\), where \(T:=(V^{\prime},E^{\prime})\) is a tree with at most \(2\cdot\mathbf{fen}(G)\) leaves and where each leaf of \(T\) is incident to at least one edge in \(F\). We mark a vertex in \(T\) as _special_ if it is an endpoint of an edge in \(F\) or if it has degree at least \(3\) in \(T\) (see Figure 2 for an illustration). Note that the total number of special vertices is upper-bounded by \(4\cdot\mathbf{fen}(G)\): the total number of endpoints of edges in \(F\) is bounded by \(2\cdot\mathbf{fen}(G)\), and since this also upper-bounds the number of leaves this implies that there can be at most \(2\cdot\mathbf{fen}(G)\) vertices of degree at least \(3\) in \(T\). In order to define the crucial second reduction rule, we will partition the edges of \(T\) into edge-disjoint paths such that each special vertex can only appear as an endpoint in such paths.

Definition 3: We define the _path partition_ of \(T\) in \(G^{\prime}\) as the unique partition \(P_{1}\mathbin{\dot{\cup}}\cdots\mathbin{\dot{\cup}}P_{\ell}=E^{\prime}\) such that all \(P_{i}\) are pairwise edge-disjoint paths in \(T\) whose endpoints are both special vertices, but with no special vertices in their interior. We call \(\ell\) the size of the path partition.

An illustration is provided in Figure 3. Given the established bound on the number of special vertices, the size of the path partition is bounded by \(4\cdot\mathbf{fen}(G)\). At this point, let us assume that we have a path partition \(P_{1}\mathbin{\dot{\cup}}P_{2}\mathbin{\dot{\cup}}\cdots\mathbin{\dot{\cup}}P_{\ell}\) of \(T\) in \(G^{\prime}\), where we index the paths in increasing order of length. Our next task is to divide these paths into short and long paths by identifying whether there exists a large gap in the lengths of these paths.

Definition 4: Define \(p_{i}:=|P_{i}|\) for \(i=1,\ldots,\ell\), and moreover define \(P_{0}:=F\) and \(p_{0}:=|F|\). Let \(i_{0}\) be the minimal \(i=0,\ldots,\ell-1\) such that \(p_{i+1}>9\ell\cdot p_{i}\), if one such \(i\) exists, otherwise we set \(i_{0}:=\ell\).
We call all paths \(P_{i}\) with \(1\leq i\leq i_{0}\) _short_ and all other paths _long_. Then we define the subgraph \(G_{\mathrm{short}}\) as the subgraph of \(G^{\prime}\) edge-induced by \(\bigcup_{i=0}^{i_{0}}P_{i}\) (i.e., \(G_{\mathrm{short}}\) arises by removing all long paths from \(G^{\prime}\)).

Figure 2: Reduction rule one: Degree-one vertices can be pruned. Orange lines represent feedback edges, dashed black lines represent long paths and special vertices are marked in turquoise.

Our aim is now to argue that if \(\delta_{\mathrm{short}}\) is a RAC drawing of \(G_{\mathrm{short}}\), then we can always extend \(\delta_{\mathrm{short}}\) to a RAC drawing of \(G^{\prime}\). Without loss of generality we assume that all vertices in \(V(G^{\prime})\) have already been drawn in \(\delta_{\mathrm{short}}\). First we create an intermediate drawing \(\delta^{\prime}\) of \(G^{\prime}\), which will in general not be a RAC drawing. We define \(\delta^{\prime}\) as an extension of \(\delta_{\mathrm{short}}\), where each long path \(P\) with endpoints \(s\) and \(t\) is represented as a simple straight-line segment from \(\delta_{\mathrm{short}}(s)\) to \(\delta_{\mathrm{short}}(t)\) with all interior vertices distributed arbitrarily along that line segment. Doing this will in general violate the RAC property of \(\delta^{\prime}\), hence in the next step we need to alter this straight-line segment in order to ensure that the drawing of \(P\) crosses only at right angles. For this we observe that any vertex on \(P\) can be moved to effectively act as a bend in a polyline drawing of \(P\). We show that these "additional bends" can be used to turn all crossings into right-angle crossings.

Lemma 5: _Let \(P\) be a long path with endpoints \(s\) and \(t\) and consider its straight-line representation \(L\) in \(\delta^{\prime}\). Assume \(L\) intersects \(k\) straight-line segments in \(\delta^{\prime}\). Then, there exists a polyline segment \(L^{\star}\) from \(\delta^{\prime}(s)\) to \(\delta^{\prime}(t)\) with at most \(3k\) bends that intersects precisely the line segments intersected by \(L\), where each such segment is crossed precisely once and at a right angle._

Proof: For the purposes of this proof, it will be useful to treat each bend as an auxiliary vertex in \(\delta^{\prime}\) and treat straight-line segments as edges. Let \(e\in E(G_{\mathrm{short}})\) be an edge such that \(\delta^{\prime}(e)\) is crossed by \(L\) at the point \(x\). We now distinguish the following cases of how \(L\) intersects the straight-line segments in \(\delta^{\prime}\), and deal with each case separately:

1. \(x\in\mathrm{int}(\delta^{\prime}(e))\) and no other edge crosses through \(x\).
2. \(\exists\,f\in E(G_{\mathrm{short}}):f\neq e\wedge x\in\mathrm{int}(\delta^{\prime}(e))\cap\mathrm{int}(\delta^{\prime}(f))\).
3. \(\exists\,v\in V(G_{\mathrm{short}}):x=\delta^{\prime}(v)\).

We show how to deal with each of these three cases below.

Figure 3: Path partition of \(T\) with feedback edges in orange.
1. Let \(\varepsilon>0\) such that \(B_{\varepsilon}(x)=\{y\in\mathbb{R}^{2}:|x-y|<\varepsilon\}\) contains no vertices and intersects no other edges outside of \(L\) apart from \(e\). We convert the intersection at \(x\) to a right-angle crossing by introducing three bends on the boundary \(\partial B_{\varepsilon}(x)\) as illustrated in Figure 4: Put two vertices \(v_{1}\), \(v_{3}\) on the intersection of \(L\) with \(\partial B_{\varepsilon}(x)\) to maintain the position of \(L\) outside of \(B_{\varepsilon}(x)\). Construct the middle vertex \(v_{2}\) by taking the intersection of \(\partial B_{\varepsilon}(x)\) with the normal line to \(e\) going through \(v_{3}\). Therefore we obtain our new polyline \(L^{\star}\) by joining the parts of \(L\) outside the \(\varepsilon\)-neighborhood with the polyline connecting the three vertices on the boundary of the \(\varepsilon\)-neighborhood. Since we chose \(\varepsilon\) such that there are no further intersections within this \(\varepsilon\)-neighborhood and the polyline remains unchanged outside of this neighborhood, we are guaranteed to not introduce any new crossings nor alter existing ones.

Figure 4: Dealing with a single crossing.

2. Generalizing the first case, we now consider an intersection point \(x\) with multiple edges crossing it. We denote with \(C=\{e_{1},\ldots,e_{t}\}\) the set of edges intersecting at \(x\). Let \(\varepsilon>0\) such that \(B_{\varepsilon}(x)\) contains no vertices and intersects no other edges apart from the edges in \(C\). Again, two entry and exit nodes \(v_{1}\), \(v_{t+1}\) are added on the boundary of \(B_{\varepsilon}(x)\) to preserve the original position of \(L\) outside of \(B_{\varepsilon}(x)\). Assume the edges in \(C\) are ordered clockwise with \(e_{1}\) being the closest edge to \(v_{1}\). Now we iteratively construct the next vertex \(v_{i+1}\) by taking the intersection of the angular bisector between \(e_{i}\) and \(e_{i+1}\) with the normal line to \(e_{i}\) going through \(v_{i}\). If this point happens to lie outside of \(B_{\varepsilon}(x)\) we take the intersection of the normal line with \(\partial B_{\varepsilon}(x)\) instead. We refer to Figure 5 for an illustration. In total, we need at most \(t+1\leq 3t\) vertices.

3. In the third case, the straight-line segment \(L\) intersects with a vertex \(v\) at \(x=\delta^{\prime}(v)\). If \(L\cap\delta^{\prime}(e)=\delta^{\prime}(v)\), i.e. \(L\) does not run parallel to \(\delta^{\prime}(e)\), we can simply take care of this equivalently as if \(L\) would cross \(\delta^{\prime}(e)\) in the interior. If otherwise \(L\cap\delta^{\prime}(e)\supsetneq\delta^{\prime}(v)\), we observe that \(L\) must necessarily contain both endpoints of \(\delta^{\prime}(e)\), since the endpoints \(\delta^{\prime}(s)\) and \(\delta^{\prime}(t)\) of \(L\) cannot lie in the interior of \(\delta^{\prime}(e)\). At the first of these endpoints encountered by \(L\), say \(w\), we again proceed analogously to the second case (i.e., as if \(w\) was a crossing point), however instead of exiting the circle surrounding \(w\) directly opposite to the entry point we exit at a distance of \(\varepsilon^{\prime}\) from there (for a sufficiently small \(\varepsilon^{\prime}\)) and from there draw \(L\) in parallel to \(e\).

Since these three cases are exhaustive, the lemma follows.

Lemma 6: _Each long path intersects at most \(3\ell\cdot p_{i_{0}}\) straight-line edge segments in \(\delta^{\prime}\)._

Proof: Since each long path is represented as a straight-line segment in \(\delta^{\prime}\), it can cross every other long path at most once.
Since every edge \(e\in E(G_{\mathrm{short}})\) can be bent at most \(\beta(e)\leq 3\) times, \(L\) can cross \(e\) at most three times. Let \(t\) be the number of long paths; then \(L\) intersects no more than \[t+3\sum_{i=0}^{i_{0}}p_{i}\leq t+3(\ell-t)\cdot p_{i_{0}}\leq 3\ell\cdot p_{i_{0}}\] straight-line edge segments.

Figure 5: Crossing multiple edges at once.

Theorem 7: _\(b\)-bend \(\beta\)-restricted RAC Drawing admits a kernel of size at most \((36\cdot\mathbf{fen}(G))^{4\cdot\mathbf{fen}(G)}\). The kernel can be constructed in linear time._

Proof: Consider an input \((G=(V,E),b,\beta)\) with feedback edge set \(F\). In the first step, according to Observation 2, we iteratively prune all vertices of degree one and obtain the reduced graph \(G^{\prime}=(V^{\prime},E^{\prime}\cup F)\). Next, we construct a path partition \(\mathcal{P}=(P_{1},\ldots,P_{\ell})\) of the tree \(T=(V^{\prime},E^{\prime})\) of size at most \(4\cdot\mathbf{fen}(G)\). Then we split the paths in \(\mathcal{P}\) into short and long paths. We define the subgraph \(G_{\mathrm{short}}\) as the graph obtained by removing all long paths from \(G^{\prime}\) and show that it is a kernel. For that, consider a RAC drawing \(\delta_{\mathrm{short}}\) of \(G_{\mathrm{short}}\). We show that we can construct a RAC drawing \(\delta\) of \(G^{\prime}\) that extends \(\delta_{\mathrm{short}}\) (see Figure 6(b)). To achieve this, we first define the intermediate drawing \(\delta^{\prime}\), which extends \(\delta_{\mathrm{short}}\) by simply drawing all long paths as straight-line segments. To obtain \(\delta\) from \(\delta^{\prime}\) we now iteratively replace the straight-line representations by the polyline constructions described in Lemma 5. Let \(P_{i_{0}+1}\) be the first long path with straight-line drawing \(L_{i_{0}+1}\). According to Lemma 6 it is involved in at most \(3\ell\cdot p_{i_{0}}\) crossings in \(\delta^{\prime}\). By the definition of long paths we have \(|P_{i_{0}+1}|>9\ell\cdot p_{i_{0}}\) and thus we have enough vertices per crossing to construct \(L_{i_{0}+1}^{*}\) via Lemma 5. Define \(\delta_{i_{0}+1}\) by replacing \(L_{i_{0}+1}\) with \(L_{i_{0}+1}^{*}\) in \(\delta^{\prime}\). Since \(L_{i_{0}+1}^{*}\) does not introduce any new crossings, we can repeat this process for all further long paths until we obtain \(\delta:=\delta_{\ell}\) as our final RAC drawing of \(G^{\prime}\). Thus, we obtain \(G_{\text{short}}\) as a kernel of our original instance with \(b\) and \(\beta\) unchanged, since we did not use any additional bends in the construction of \(\delta\). Next, we show that \(G_{\text{short}}\) can be constructed in linear time. We already observed that a feedback edge set can be constructed in linear time. Pruning vertices of degree one can be done in linear time as well, and the task of finding a path partition of \(T\) can likewise be achieved by depth-first search in linear time.
Finally, the size of \(G_{\text{short}}\) can be bounded by \[\sum_{i=0}^{i_{0}}p_{i}\leq\sum_{i=0}^{\ell}p_{0}(9\ell\cdot\mathbf{fen}(G))^{i}\leq 2\cdot\mathbf{fen}(G)(9\ell\cdot\mathbf{fen}(G))^{\ell}\leq 2(36\cdot\mathbf{fen}(G))^{4\cdot\mathbf{fen}(G)}.\qed\]

Using Theorem 7, the runtime guarantee given by Theorem 1 and the fact that a feedback edge set of size \(\mathbf{fen}(G)\) can be computed in linear time, we obtain:

Corollary 8: _\(b\)-bend \(\beta\)-restricted RAC Drawing is fixed-parameter tractable parameterized by \(\mathbf{fen}(G)\), and in particular can be solved in time \(2^{\mathbf{fen}(G)^{\mathcal{O}(\mathbf{fen}(G))}}+\mathcal{O}(|V(G)|)\)._

Figure 6: Extending the RAC drawing.

Proof: After constructing the kernel in \(\mathcal{O}(|V(G)|)\) time, we apply the generic branching algorithm to the kernel with a runtime of \[((36\cdot\mathbf{fen}(G))^{4\cdot\mathbf{fen}(G)})^{\mathcal{O}\left((36\cdot\mathbf{fen}(G))^{8\cdot\mathbf{fen}(G)}\right)}=(36\cdot\mathbf{fen}(G))^{\mathcal{O}\left((36\cdot\mathbf{fen}(G))^{8\cdot\mathbf{fen}(G)+1}\right)}\leq 2^{(\log\mathbf{fen}(G))\cdot\mathbf{fen}(G)^{\mathcal{O}\left(\mathbf{fen}(G)\right)}}=2^{\mathbf{fen}(G)^{\mathcal{O}\left(\mathbf{fen}(G)\right)}},\] which concludes the proof.

## 5 Fixed-Parameter Tractability via \(\mathbf{vcn}(G)\)

As in Section 4, the core tool used to establish fixed-parameter tractability for this parameterization is a kernelization procedure, although the ideas and reduction rules used here are very different. Let us assume we are given an instance \((G,b,\beta)\) of BRAC; as our first step, we compute a vertex cover \(C\) of size \(k\leq 2\cdot\mathbf{vcn}(G)\) using the standard approximation algorithm. We now partition the vertices of our instance \(G\) outside of the vertex cover \(C\) into _types_, as follows. Two vertices in \(G\setminus C\) are of the same type if they have the same set of neighbors in \(C\); observe that the property of "being in the same type" is an equivalence relation, and when convenient we also use the term _type_ to refer to the equivalence classes of this relation. To avoid any confusion, we explicitly remark that two vertices may have the same type even when their incident edges are assigned different values by \(\beta\). The number of types is upper-bounded by \(2^{k}\). We distinguish types by the number of neighbors in \(C\); an illustration is provided in Figure 7. Let a _member_ of a type \(T\) be defined as a vertex in \(T\) as well as its incident edges. By an exhaustive application of the first reduction rule introduced in Section 4 (cf. Observation 2), we may assume that there is no type with fewer than \(2\) neighbors in \(C\). Turning to types with at least \(3\) neighbors in \(C\), we provide a bound on the size of each such type in a yes-instance.

Figure 7: A graph split into its vertex cover \(C\) (in turquoise) and its different types \(T_{1},\ldots,T_{4}\) (in orange).

Lemma 9: _If \((G,b,\beta)\) is a yes-instance of BRAC, then each type \(T\) with \(i\geq 3\) neighbors in \(C\) has at most \(\max(2,7-i)+b\) members._

Proof: Didimo, Eades and Liotta showed that no complete bipartite graph \(K_{c,d}\) with \(c+d>7\) and \(\min(c,d)>2\) admits a straight-line RAC drawing [19].
Hence, if vertices in \(T\) have 3 neighbors in \(C\) then a \(b\)-bend \(\beta\)-restricted RAC drawing of \(G\) can contain at most 4 members of \(T\) without bends; otherwise, the drawing of 5 members of \(T\) and their 3 neighbors in \(C\) would contradict the first sentence. Similarly, if vertices in \(T\) have at least 5 neighbors in \(C\) then a \(b\)-bend \(\beta\)-restricted RAC drawing of \(G\) cannot contain 3 members of \(T\) without bends.

Lemma 9 implies that we can immediately reject instances containing types with more than 3 neighbors whose cardinality is greater than \(4+b\) (or, for the purposes of kernelization, one may replace these with trivial no-instances). Hence, it now remains to deal with types with precisely two neighbors in \(C\). We say that two edges \(uv\) and \(uv^{\prime}\) form a _fan anchored_ at \(u\). It is easy to observe that if an edge \(e\) crosses both \(uv\) and \(uv^{\prime}\) in a \(b\)-bend \(\beta\)-restricted RAC drawing, then at least one of these three edges must have a bend [3].

Lemma 10: _Consider a \(b\)-bend \(\beta\)-restricted RAC drawing \(\delta\) of \(G\), and let \(T\) be a type containing vertices with precisely two neighbors in \(C\). Let \(T^{\prime}\) be the subset of \(T\) containing all members of \(T\) which do not have bends in \(\delta\). Then \(T^{\prime}\) contains at most four members involved in crossings with other members of \(T^{\prime}\) in \(\delta\)._

Proof: Let \(u\) and \(v\) be the neighbors of \(T^{\prime}\) in the vertex cover \(C\). Let us consider the vertices lying on one side of the line \(\overleftrightarrow{\delta(u)\delta(v)}\) going through \(u\) and \(v\). According to Thales's theorem, every right-angle crossing formed by two edges originating in \(u\) and \(v\), respectively, has to lie on the semicircle with diameter \(\overline{\delta(u)\delta(v)}\). Suppose the edges \((u,x)\) and \((v,w)\) cross at a right angle. Then there cannot be another edge incident to \(u\) which crosses the semicircle to the right of the first crossing. Indeed, if there was such an edge \((u,y)\), then \((v,w)\) would cross the fan formed by \((u,x)\) and \((u,y)\) as shown in Figure 8. Analogously, there also cannot be an edge incident to \(v\) which crosses the semicircle to the left of the first crossing. Hence, there can be at most one right-angle crossing between members of \(T^{\prime}\) below and above \(\overleftrightarrow{\delta(u)\delta(v)}\), respectively.

Next, we use the above statement to obtain a bound on the total number of crossings that such a type \(T\) can be involved in. We do so by showing that the members of \(T\) which themselves do not have bends are only involved in a bounded number of crossings.

Lemma 11: _Consider a \(b\)-bend \(\beta\)-restricted RAC drawing \(\delta\) of \(G\), and let \(T\) be a type containing vertices with precisely two neighbors in \(C\). Then at most \(3k+6+b\) members of \(T\) can be involved in a crossing in \(\delta\)._

Proof: Let \(T^{\prime}\) be the subset of \(T\) containing all members of \(T\) which do not have bends in \(\delta\), and let \(\gamma=|T|-|T^{\prime}|\). Further, let \(T_{0}\) be the set of members of \(T^{\prime}\) which are pairwise crossing-free, but which all cross at least some other edge in \(\delta\). \(T_{0}\) forms a layering structure in \(\delta\), as depicted in Figure 9.
Moreover, if \(T_{0}\) contains two members that are incident to the same inner face in this layering structure and whose edges are drawn in parallel in \(\delta\), we remove one of these members from \(T_{0}\); observe that this may only reduce the size of \(T_{0}\) by one. Let \(\alpha\) be the number of members that remain in \(T_{0}\) at this point. At this point, an edge \(e\) without any bends cannot cross more than one member of \(T_{0}\), as no two edges on the same face in \(T_{0}\) are parallel by assumption. Without bends, this would imply that there must be a vertex in every layer, and since each vertex can only be connected to other vertices in the same or in one of the two adjacent layers there can be at most \(3k\) layers. Introducing bends, an edge outside \(T^{\prime}\) might cross one additional layer per bend, thus increasing the number of possibly crossed members to \(3k+1+b\). Since \(\gamma\) bends are already used by edges of \(T\), we obtain \(\alpha\leq 3k+1+b-\gamma\). Moving our attention to \(T^{\prime}\supseteq T_{0}\), the difference between the sizes of these two sets can be caused (1) by up to 4 members that are involved in crossings with other members of \(T^{\prime}\) following Lemma 10 and (2) by one additional member for the single removed member with parallel edges from \(T_{0}\), i.e. \(|T^{\prime}|\leq\alpha+5\). Hence, at most \(\alpha+5+\gamma\leq 3k+6+b\) members of \(T\) can be involved in crossings in \(\delta\).

In particular, Lemma 11 implies that in a \(b\)-bend \(\beta\)-restricted RAC drawing, every sufficiently large type \(T\) with precisely 2 neighbors in \(C\) must contain a member that is not involved in any crossings. The next lemma highlights why this is useful in the context of our kernelization.

Figure 8: Illustration for Lemma 10.

Figure 9: Illustration for the proof of Lemma 11 with vertices in the vertex cover marked in turquoise.

Lemma 12: _Let \(T\) be a type with two neighbors in \(C\) and assume that \(G\) admits a \(b\)-bend \(\beta\)-restricted RAC drawing \(\delta\). If there is a member in \(T\) whose edges are drawn without crossings in \(\delta\), then the graph obtained from \(G\) by adding a vertex \(w^{\prime}\) to \(T\) admits a \(b\)-bend \(\beta\)-restricted RAC drawing as well._

Proof: Let \(u,v\) be the neighbors of \(T\) in the vertex cover and let \(w\in T\) be the member without crossings. We can draw \(w^{\prime}\) infinitesimally close to \(w\) such that the emerging layering triangles are drawn without crossings (see Figure 10).

At this point, we have all the ingredients for the main result of this section:

Theorem 13: _\(b\)-bend \(\beta\)-restricted RAC Drawing admits a kernel of size \(\mathcal{O}(b\cdot 2^{k})\), where \(k\) is the size of a provided vertex cover of the input graph._

Proof: Consider an input \((G,b,\beta)\) and let \(C\) be the provided vertex cover of \(G\). We apply the simple reduction rule of deleting vertices of degree \(1\) from \(G\), resulting in an instance where each type has either \(2\) or at least \(3\) neighbors in \(C\). For each type of the latter kind, we check if it contains more members than \(\max(2,7-i)+b\); if yes, we reject (or, equivalently, replace the instance with a trivial constant-size no-instance), and this is correct by Lemma 9.
Moreover, for each type \(T\) with precisely \(2\) neighbors in \(C\) containing more than \(3k+6+b+1\) many members, we delete members from \(T\) until its size is precisely \(3k+6+b+1\)--the correctness of this step follows from Lemmas 11 and 12. In the resulting graph, each of the at most \(2^{k}\) many types with at least \(3\) neighbors in \(C\) has size at most \(b+4\), while each of the at most \(k^{2}\) types with precisely \(2\) neighbors has size at most \(3k+6+b+1\). The kernel bound follows.

Figure 10: Illustration for the proof of Lemma 12.

From Theorem 13, the runtime bound given by Theorem 1 and the fact that a vertex cover of size at most \(2\cdot\mathbf{vcn}(G)\) can be obtained in linear time, we obtain:

Corollary 14: _\(b\)-bend \(\beta\)-restricted RAC Drawing is fixed-parameter tractable parameterized by \(b+\mathbf{vcn}(G)\), and in particular can be solved in time \(2^{2^{\mathcal{O}(\mathbf{vcn}(G)+\log b)}}+\mathcal{O}(|V(G)|)\)._

Proof: Applying the runtime result of \(m^{\mathcal{O}(m^{2})}\) given in Theorem 1 to the given kernel yields a final runtime of \[\mathcal{O}(b\cdot 2^{\mathbf{vcn}(G)})^{\mathcal{O}(b^{2}4^{\mathbf{vcn}(G)})}=2^{(\log b+\mathbf{vcn}(G))\cdot\mathcal{O}(b^{2}4^{\mathbf{vcn}(G)})}\leq 2^{b^{2}\log b\cdot 2^{\mathcal{O}(\mathbf{vcn}(G))}}=2^{2^{\log\log b+2\log b+\mathcal{O}(\mathbf{vcn}(G))}}=2^{2^{\mathcal{O}(\log b+\mathbf{vcn}(G))}},\] which concludes the proof.

## 6 An Extension to Neighborhood Diversity

We extend the approach used for the vertex cover number to establish fixed-parameter tractability with respect to _neighborhood diversity_. Briefly recalling the definition of neighborhood diversity, let two vertices \(v,v^{\prime}\) be of the same type if \(N(v)\backslash\{v^{\prime}\}=N(v^{\prime})\backslash\{v\}\).

Definition 15 ([40, 39]): The neighborhood diversity \(\mathbf{nd}(G)\) of a graph \(G\) is the minimum number \(k\), such that there exists a partition into \(k\) sets, where all vertices in each set have the same type.

By the definition of neighborhood diversity, each set in the witnessing partition is either an independent set or a clique in \(G\). Edges can occur either on all vertices between two sets or on none (see Figure 11). In general, a graph \(G\) with neighborhood diversity \(\mathbf{nd}(G)\) has a bounded vertex cover number \(\mathbf{vcn}(G)\). Thus, Theorem 13 would already imply tractability of \(b\)-bend RAC drawings under a bounded neighborhood diversity. However, \(\mathbf{vcn}(G)\) might be exponentially larger [40]. For \(b\)-bend RAC drawable graphs, we can show a better, linear bound on \(\mathbf{vcn}(G)\).

Lemma 16: _Let \(G\) be a \(b\)-bend RAC drawable graph with a neighborhood diversity \(\mathbf{nd}(G)\). Then \(\mathbf{vcn}(G)\leq 5\cdot\mathbf{nd}(G)+b\)._

Proof: We begin by showing a linear bound on \(\mathbf{vcn}(G)\) for \(b=0\). Let \(S_{1},\ldots,S_{\mathbf{nd}(G)}\) be a partition witnessing the neighborhood diversity number \(\mathbf{nd}(G)\). We build a vertex cover \(C\) as follows. The size of each set \(S_{i}\) forming a clique in \(G\) is bounded by \(5\), as a \(K_{6}\) is not straight-line RAC drawable [20]. Put all vertices of such an \(S_{i}\) in \(C\). Let \(S_{i},S_{j}\) be a pair of two sets, which are both forming an independent set in \(G\), and have edges between each other. If there is an edge between a vertex in \(S_{i}\) and a vertex in \(S_{j}\), there is an edge between all vertices of \(S_{i}\) and \(S_{j}\).
Let \(|S_{i}|\leq|S_{j}|\). Recalling that no complete bipartite graph \(K_{a,b}\) with \(a+b>7\) and \(\min(a,b)>2\) admits a straight-line RAC drawing [19], \(S_{i}\leq 3\). Put \(S_{i}\) into \(C\) to cover both sets. In total, we put at most \(5\cdot\mathbf{nd}(G)\) vertices into \(C\). For arbitrary number of bends \(b\), the total number of vertices in clique sets might increase by at most \(b\) without making \(G\) not \(b\)-bend RAC drawable. Similarly, the number of vertices in the smaller set \(S_{i}\) of a connected set pair \(S_{i},S_{j}\), might increase by at most \(b\) over all such sets. So in total, \(\mathbf{vcn}(G)\leq 5\cdot\mathbf{nd}(G)+b\). From Theorem 4.1 and Lemma 4.1 the following theorem follows directly: Theorem 4.2: \(b\)-bend \(\beta\)-restricted RAC Drawing _admits a kernel of size \(\mathcal{O}(b^{2}\cdot\mathbf{nd}(G)\cdot 2^{\mathbf{nd}(G)})\)._ Corollary 18: \(b\)-bend \(\beta\)-restricted RAC Drawing _is fixed-parameter tractable parameterized by \(\mathbf{nd}(G)+b\), and in particular can be solved in time \(2^{b^{\mathcal{O}(\mathbf{nd}(G))}}+\mathcal{O}(|V(G)|+\mathbf{nd}(G))\)._ ## 7 Concluding Remarks We have established the fixed-parameter tractability of \(b\)-bend \(\beta\)-restricted RAC Drawing when parameterized by the feedback edge number \(\mathbf{fen}(G)\), or by the vertex cover number \(\mathbf{vcn}(G)\) plus an upper bound \(b\) on the total number of bends. We have also shown that the latter result implies the fixed-parameter tractability of the problem w.r.t. the neighborhood diversity \(\mathbf{nd}(G)\) plus \(b\). A next step in the computational study of RAC Drawings would be to consider whether the problem is fixed-parameter tractable w.r.t. \(\mathbf{vcn}(G)\) alone. Interestingly, a reduction rule for degree-2 vertices without a bound on \(b\) is the main obstacle towards obtaining such a fixed-parameter algorithm, and dealing with this case seems to be required if one wishes to generalize the result towards Figure 11: Overview of a graph partitioned into its neighborhood diversity sets. Orange sets build cliques, turquoise sets are independent sets in \(G\). Each set may be connected to one ore more other sets. fixed-parameter tractability w.r.t. _treedepth_[42] plus \(b\). A different question one may ask is whether the fixed-parameter algorithm w.r.t. \(\mathbf{fen}(G)\) can be generalized towards the recently introduced parameter _slim tree-cut width_[31], which can be equivalently seen as a local version of the feedback edge number [12]. A natural long-term goal within this research direction is then to obtain an understanding of the complexity of BRAC w.r.t. treewidth [43]. Last but not least, it would be interesting to see whether our fixed-parameter tractability results can be strengthened by obtaining polynomial kernels for the same parameterizations. AcknowledgmentsThe authors graciously accept support from the WWTF (Project ICT22-029) and the FWF (Project Y1329) science funds.
2307.00296
Accelerated primal-dual methods with enlarged step sizes and operator learning for nonsmooth optimal control problems
We consider a general class of nonsmooth optimal control problems with partial differential equation (PDE) constraints, which are very challenging due to its nonsmooth objective functionals and the resulting high-dimensional and ill-conditioned systems after discretization. We focus on the application of a primal-dual method, with which different types of variables can be treated individually and thus its main computation at each iteration only requires solving two PDEs. Our target is to accelerate the primal-dual method with either larger step sizes or operator learning techniques. For the accelerated primal-dual method with larger step sizes, its convergence can be still proved rigorously while it numerically accelerates the original primal-dual method in a simple and universal way. For the operator learning acceleration, we construct deep neural network surrogate models for the involved PDEs. Once a neural operator is learned, solving a PDE requires only a forward pass of the neural network, and the computational cost is thus substantially reduced. The accelerated primal-dual method with operator learning is mesh-free, numerically efficient, and scalable to different types of PDEs. The acceleration effectiveness of these two techniques is promisingly validated by some preliminary numerical results.
Yongcun Song, Xiaoming Yuan, Hangrui Yue
2023-07-01T10:39:07Z
http://arxiv.org/abs/2307.00296v2
Accelerated primal-dual methods with enlarged step sizes and operator learning for nonsmooth optimal control problems+ ###### Abstract We consider a general class of nonsmooth optimal control problems with partial differential equation (PDE) constraints, which are very challenging due to their nonsmooth objective functionals and the resulting high-dimensional and ill-conditioned systems after discretization. We focus on the application of a primal-dual method, with which different types of variables can be treated individually in iterations and thus its main computation at each iteration only requires solving two PDEs. Our target is to accelerate the primal-dual method with either enlarged step sizes or operator learning techniques. The accelerated primal-dual method with enlarged step sizes improves the numerical performance of the original primal-dual method in a simple and universal way, while its convergence can be still proved rigorously. For the operator learning acceleration, we construct deep neural network surrogate models for the involved PDEs. Once a neural operator is learned, solving a PDE requires only a forward pass of the neural network, and the computational cost is thus substantially reduced. The accelerated primal-dual method with operator learning is mesh-free, numerically efficient, and scalable to different types of PDEs. The acceleration effectiveness of these two techniques is promisingly validated by some preliminary numerical results. O ptimal control; nonsmooth optimization; primal-dual method; operator learning; deep neural network 49M41, 35Q90, 35Q93, 65K05, 90C25, 68T07 ## 1 Introduction Optimal control problems with partial differential equation (PDE) constraints play a crucial role in various areas such as physics, chemistry, biology, engineering, and finance. We refer the reader to [14, 15, 16, 26, 34, 55] for a few references. Typically, additional nonsmooth constraints are imposed on the control (variable) to promote some desired properties such as boundedness, sparsity, and discontinuity; see [12, 25, 34, 51, 55] and references therein. These optimal control problems are numerically challenging. One particular reason is that the PDE constraints and other nonsmooth constraints on the control are coupled together and the resulting algebraic systems after discretization are usually high-dimensional and ill-conditioned. To solve such a nonsmooth optimal control problem, it is desirable to consider different types of variables separately in iterations so that a nonsmooth optimal control problem can be decoupled into some much easier subproblems while there is no need to solve any computationally demanding system after discretization. For this purpose, it becomes necessary to deliberately consider the structure of the model under discussion for algorithmic design. In this paper, we study the algorithmic design for optimal control problems that can be uniformly and abstractly formulated as \[\min_{u\in U,y\in Y}\quad\frac{1}{2}\|y-y_{d}\|_{Y}^{2}+\frac{\alpha}{2}\|u\|_ {U}^{2}+\theta(u)\qquad\text{s.t. }y=Su, \tag{1}\] where \(u\in U\) and \(y\in Y\), with \(U\) and \(Y\) being function spaces, are called the control and the state, respectively; \(y_{d}\in Y\) is a given target; and \(\alpha>0\) is a regularization parameter. In addition, \(y=Su\) represents a linear PDE, in which \(S:U\to Y\) is the corresponding solution operator. 
The nonsmooth convex functional \(\theta(u):U\to\mathbb{R}\) is employed to impose some additional constraints on the control, such as boundedness [34, 55], sparsity [51], and discontinuity [12]. Various optimal control problems with PDE constraints can be covered by (1). For instance, the abstract state equation \(y=Su\) can be specified as the parabolic equation [14], the elliptic equation [24], the wave equation [15], etc. Also, the control \(u\) can be a distributed control or a boundary control. Moreover, the functional \(\theta(u)\) can be the indicator function of an admissible set [19], the \(L^{1}\)-regularization term [51], or the total variation regularization [12, 25]. ### State-of-the-art Numerical study for optimal control problems, including (1), has become an increasingly active field in the past decades. In the literature, the semi-smooth Newton (SSN) methods have been studied intensively and extensively; see, e.g., [24, 26, 29, 31, 56] for control constrained optimal control problems, and [51] for sparse elliptic optimal control problems. In [24], it has been proved that SSN methods possess locally superlinear convergence and they usually can find high-precision solutions provided that some initial values are appropriately chosen. Computationally, it is notable that, at each iteration of SSN methods, one encounters a large-scale and ill-conditioned Newton system. Practically, it is required to solve these Newton systems up to very high accuracy to ensure the convergence, which is numerically challenging, especially for time-dependent problems 1; see, e.g., [44, 45, 48, 52] for more discussions. Footnote 1: The same concerns also apply to interior point methods, e.g., [43], for different types of optimal control problems. On the other hand, the alternating direction method of multipliers (ADMM) [13, 17] has been applied to various optimal control problems modeled by (1); see, e.g. [2, 18, 19, 46, 63]. At each iteration of the ADMM metods, one only needs to solve a simple optimization subproblem, which generally has a closed-form solution, and a standard unconstrained optimal control problem. The dimensionality of the unconstrained optimal control subproblems after discretization is inevitably high. Hence, these subproblems must be solved iteratively while it is computationally expensive to acquire accurate solutions of these subproblems. To tackle this computation bottleneck, an inexact ADMM was proposed in [19], with an automatically adjustable inexactness criterion for the inner iterations. As a result, the unconstrained optimal control subproblems only need to be solved up to a rather low accuracy by a few inner iterations, while the overall convergence of the inexact ADMM can be still guaranteed rigorously. Notwithstanding, we reiterate that optimal control subproblems still have to be solved for the inexact ADMM in [19]. ### Vanilla application of the primal-dual method To solve the problem (1) efficiently, we aim at such algorithms that can avoid both complicated Newton systems and unconstrained optimal control subproblems. For this purpose, it suffices to apply the primal-dual method in [8], because it does not require specific initial iterates and its resulting subproblems are easier than the original model. The primal-dual method in [8] and its variants have been widely applied in various areas such as PDEs [35, 36], imagining processing [8], statistical learning [20], and inverse problems [6, 10, 54, 57]. 
We shall show that, the vanilla application of the primal-dual method in [8] to the abstract model (1) requires solving two PDEs at each iteration, and thus it differs from the just-mentioned SSN and ADMM approaches in the literature. To fix ideas, we focus on a parabolic control constrained optimal control problem in the following discussion and all the results can be easily extended to other problems modeled by (1). Let \(\Omega\subset\mathbb{R}^{d}(d\geq 1)\) be a bounded domain and \(\Gamma:=\partial\Omega\) its boundary. We consider the following parabolic optimal control problem: \[\min_{u\in L^{2}(\mathcal{O}),y\in L^{2}(Q)}\frac{1}{2}\|y-y_{d}\|_{L^{2}(Q)} ^{2}+\frac{\alpha}{2}\|u\|_{L^{2}(\mathcal{O})}^{2}+\theta(u) \tag{2}\] subject to the parabolic problem \[\frac{\partial y}{\partial t}-\Delta y=u\chi_{\mathcal{O}}\text{ in }\Omega\times(0,T),\quad y=0\text{ on }\Gamma\times(0,T),\quad y(0)=0. \tag{3}\] Above, \(Q=\Omega\times(0,T)\) with \(0<T<+\infty\); \(\mathcal{O}=\omega\times(0,T)\) with \(\omega\) an open subset of \(\Omega\); \(\chi_{\mathcal{O}}\) is the characteristic function of \(\mathcal{O}\); \(y_{d}\in L^{2}(Q)\) is a given target; and \(\alpha>0\) is the regularization parameter. Moreover, we specify \(\theta(u)\) as the indicator function of the admissible set \(U_{ad}\): \[U_{ad}:=\{v\in L^{\infty}(\mathcal{O})|a\leq v(x,t)\leq b\text{ a.e. in }\Omega\times(0,T)\}, \tag{4}\] with \(a\) and \(b\) given constants. Existence and uniqueness of the solution to problem (2)-(4) have been well studied in [34, 55]. To apply the primal-dual method in [8] to problem (2)-(4), we first observe that (2)-(4) can be rewritten as \[\min_{u\in L^{2}(\mathcal{O})}f(Su)+g(u), \tag{5}\] where \(f(Su)=\frac{1}{2}\left\|Su-y_{d}\right\|_{L^{2}(Q)}^{2}\) and \(g(u)=\frac{\alpha}{2}\|u\|_{L^{2}(\mathcal{O})}^{2}+\theta(u)\). Introducing an auxiliary variable \(p\in L^{2}(Q)\), it follows from the Fenchel duality [4] that the primal-dual formulation of (5) reads as \[\min_{u\in L^{2}(\mathcal{O})}\max_{p\in L^{2}(Q)}g(u)+(p,Su)_{L^{2}(Q)}-f^{*}( p), \tag{6}\] where \((\cdot,\cdot)_{L^{2}(Q)}\) denotes the canonical \(L^{2}\)-inner product, \(f^{*}(p):=\sup_{y\in L^{2}(Q)}\{(y,p)_{L^{2}(Q)}-f(y)\}\) is the convex conjugate of \(f(y)\) and can be specified as \(f^{*}(p)=\frac{1}{2}\|p\|_{L^{2}(Q)}^{2}+(p,y_{d})_{L^{2}(Q)}\). Then, implementing the primal-dual method in [8] to (6), we readily obtain the following scheme: \[u^{k+1}=\arg\min_{u\in L^{2}(\mathcal{O})}\{g(u)+(p^{k},Su)_{L^{2}(Q)}+\frac{ 1}{2r}\|u-u^{k}\|_{L^{2}(\mathcal{O})}^{2}\}, \tag{7}\] \[\bar{u}^{k}=2u^{k+1}-u^{k}, \tag{8}\] \[p^{k+1}=\arg\max_{p\in L^{2}(Q)}\{(p,S\bar{u}^{k})_{L^{2}(Q)}-f^{*}(p)-\frac{1 }{2s}\|p-p^{k}\|_{L^{2}(Q)}^{2}\}, \tag{9}\] where the parameters \(r>0\) and \(s>0\) can be understood as the step sizes of the primal and dual subproblems, respectively. For the solutions to subproblems (7) and (9), one can show that \[u^{k+1}=P_{U_{ad}}\left(-\frac{S^{*}p^{k}-\frac{1}{r}u^{k}}{\alpha+\frac{1}{r} }\right),\quad p^{k+1}=\left(S(2u^{k+1}-u^{k})+\frac{1}{s}p^{k}-y_{d}\right)/( 1+\frac{1}{s}), \tag{10}\] where \(S^{*}:L^{2}(Q)\to L^{2}(\mathcal{O})\) is the adjoint operator of \(S\), and \(P_{U_{ad}}(\cdot)\) denotes the projection onto the admissible set \(U_{ad}\), namely, \(P_{U_{ad}}(v)(x,t):=\max\{a,\min\{v(x,t),b\}\}\) a.e in \(\mathcal{O},\forall v\in L^{2}(\mathcal{O})\). 
It follows from (10) that the main computation cost of (7)-(9) consists of solving \(y^{k}:=S(2u^{k+1}-u^{k})\), i.e., the state equation (3) with \(u=2u^{k+1}-u^{k}\), and computing \(q^{k}|_{\mathcal{O}}:=S^{*}p^{k}\), where \(q^{k}\) is obtained by solving the adjoint equation: \[-\frac{\partial q^{k}}{\partial t}-\Delta q^{k}=p^{k}\text{ in }\Omega\times(0,T),\ q^{k}=0\text{ on }\Gamma\times(0,T),\ q^{k}(T)=0. \tag{11}\] Obviously, the main computation of (7)-(9) is solving only the PDEs (3) and (11). The Newton systems of SSN methods and the unconstrained optimal control subproblems of ADMM methods are both completely avoided. Therefore, the computational load of the primal-dual method (7)-(9) at each iteration is much lower than that of the SSN and ADMM methods. Meanwhile, when \(\theta(u)\) is an \(L^{1}\) or a total variation regularization, methods for solving the resulting subproblem (7) can be found in Section 3 and [50]. It can be seen that, for these two cases, the primal-dual method (7)-(9) also only requires solving two PDEs as shown in (10). ### Enlarging step sizes for (7)-(9) As analyzed in [8], to ensure the convergence of the primal-dual method (7)-(9), the step sizes \(r\) and \(s\) are required to satisfy the condition \[r\cdot s<\frac{1}{\|S\|^{2}}, \tag{12}\] where \(\|S\|=\sup_{\|v\|_{L^{2}(\mathcal{O})}=1}\{\|Sv\|_{L^{2}(Q)},\forall v\in L^{ 2}(\mathcal{O})\}\). Numerical efficiency of the primal-dual method (7)-(9) certainly depends on the choices of the step sizes \(r\) and \(s\). In the literature, to allow for larger step sizes and hence accelerate convergence, \(r\cdot s\) are usually chosen to be very close to, or even equal the upper bound \(\frac{1}{\|S\|^{2}}\). It is thus clearly interesting to discuss if the upper bound \(\frac{1}{\|S\|^{2}}\) can be further enlarged theoretically, while the convergence of the primal-dual method (7)-(9) can be still guaranteed. Recently, it has been shown in [23] that, for saddle point problems in the generic convex setting, the convergence condition (12) can be optimally improved to \[r\cdot s<\frac{4}{3}\cdot\frac{1}{\|S\|^{2}}.\] We are motivated by the work [23] and consider whether or not the upper bound \(\frac{4}{3}\cdot\frac{1}{\|S\|^{2}}\) can be further enlarged for the model (1.1), given that the functionals \(\frac{1}{2}\left\|Su-y_{d}\right\|_{L^{2}(Q)}^{2}\) and \(\frac{\alpha}{2}\|u\|_{L^{2}(\mathcal{O})}^{2}\) are indeed strongly convex. Below, we shall show that, to ensure the convergence of the primal-dual method (1.7)-(1.9) for problem (1.1), the step sizes \(r\) and \(s\) can be chosen subject to \[r\cdot s<\frac{4+2\alpha r}{3}\cdot\frac{1}{\|S\|^{2}}. \tag{1.13}\] As a result, the step sizes \(r\) and \(s\) can be enlarged for the primal-dual method (1.7)-(1.9) and its numerical performance can be accelerated. With (1.13), the primal-dual method (1.7)-(1.9) is accelerated simply by a larger interval for possible choices of the step sizes, and the computational load of each iteration remains. As we shall show, this is a simple and universal way for accelerating the primal-dual method (1.7)-(1.9) by reducing its number of iterations, while its convergence can be still proved rigorously. 
### Accelerating (1.7)-(1.9) with operator learning In the context of traditional numerical methods, the PDEs (1.3) and (1.11) should be solved repeatedly by certain mesh-based numerical discretization schemes (e.g., finite difference methods (FDM) or finite element methods (FEM)), which require solving large-scale and ill-conditioned algebraic systems. Even a single implementation of such a PDE solver could be expensive; hence the computation cost for solving the PDEs (1.3) and (1.11) repeatedly is usually extremely high. Furthermore, given another target \(y_{d}\in L^{2}(Q)\), one has to solve the resulting optimal control problem from scratch, and hence solve the state and adjoint equations repeatedly again. To tackle the computational difficulty above, we advocate to adopt deep learning techniques, which have recently emerged as a new powerful tool for scientific computing problems thanks to the universal approximation property and great expressibility of deep neural networks (DNNs). Indeed, various deep learning approaches have been developed for PDEs; see e.g. [5, 11, 21, 37, 38, 47, 49, 60] and references therein. Compared with traditional numerical solvers for PDEs, these deep learning techniques are usually mesh-free, easy to implement, and very flexible to different PDEs. It is arguably accepted that deep learning techniques are helping alleviate human efforts in algorithmic design yet empowering the solvability of a large class of scientific computing problems. Among deep learning techniques, one is to approximate PDE solutions via DNNs, such as the deep Ritz method [11], the deep Galerkin method [49], and physic-informed neural networks [38, 42, 47, 62]. Despite that these methods have shown promising results in diversing applications, each of them is tailored for a specific PDE. It is thus necessary to train a new neural network given a different input function (e.g., initial condition, boundary condition, or source term), which is computationally costly and time-consuming. Hence, these methods are not applicable to (1.3) and (1.11) because they have to be solved repeatedly with different \(u^{k}\) and \(p^{k}\). Another deep learning technique, called operator learning, is to apply a DNN to approximate the solution operator of a PDE, which maps from an input function to the PDE solution, see e.g., [30, 32, 37, 60]. To be concrete, consider a PDE solution operator \(G:X\to Y,v\mapsto w\), where \(X\) and \(Y\) are two infinite-dimensional Banach spaces and \(w=G(v)\). Operator learning aims at approximating \(G\) with a neural network \(\mathcal{G}_{\theta}\) parameterized by \(\theta\). Once a neural solution operator is learned, obtaining a PDE solution \(\mathcal{G}_{\theta}(v)\) for a new input function \(v\) requires only a forward pass of the neural network. Hence, neural solution operators can be used as effective surrogates for PDEs and are computationally attractive for problems that require repetitive yet expensive simulations, see e.g., [27, 40, 59]. We are thus inspired to consider constructing two DNN surrogates for the PDEs (1.3) and (1.11) by operator learning to accelerate the primal-dual method (1.7)-(1.9). Precisely, we propose to construct two neural surrogates \(y=\mathcal{S}_{\theta_{s}}(u)\) and \(q=\mathcal{S}_{\theta_{a}}(p)\) parameterized by \(\theta_{s}\) and \(\theta_{a}\) for (1.3) and (1.11), respectively. 
Then, replacing \(S\) and \(\dot{S}^{*}\) by \(\mathcal{S}_{\theta_{s}}\) and \(\mathcal{S}_{\theta_{a}}\) in (1.10), we propose the following primal-dual method with operator learning for solving (1.2)-(1.4): \[u^{k+1}=P_{U_{ad}}\left(-\frac{\mathcal{S}_{\theta_{a}}(p^{k})-\frac{1}{r}u^{ k}}{\alpha+\frac{1}{r}}\right),\quad p^{k+1}=\left(\mathcal{S}_{\theta_{s}}(2u^{k+1}-u^{ k})+\frac{1}{s}p^{k}-y_{d}\right)/(1+\frac{1}{s}). \tag{1.14}\] Different primal-dual methods can be specified from (1.14) by using different operator learning techniques such as the Deep Operator Networks (DeepONets) [37], the physic-informed DeepONets [60], the Fourier Neural Operator (FNO) [32], the Graph Neural Operator (GNO) [33], and the Laplace Neural Operator (LNO) [7]. Note that, given two neural surrogates, these primal-dual methods with operator learning only require implementing two forward passes of the neural networks and some simple algebraic operations. More importantly, given a different \(y_{d}\in L^{2}(Q)\), these primal-dual methods with operator learning can be applied directly to the resulting optimal control problems without the need of solving any PDE. Moreover, we reiterate that the resulting primal-dual methods with operator learning for solving (2)-(4) can be easily extended to other various optimal control problems in form of (1), see Section 5 for more details. Finally, we mention that some deep learning techniques have been recently developed for solving optimal control problems with PDE constraints, such as the ISMO [40], operator learning methods [27, 59], the amortized finite element analysis [61], and physics-informed neural networks (PINNs) methods [3, 22, 41, 47, 53]. All these deep learning methods, however, are designed for only smooth optimal control problems with PDE constraints, and they cannot be directly applied to the nonsmooth problems modeled by (1). To tackle this issue, the ADMM-PINNs algorithmic framework has been recently proposed in [50]. With the advantages of both the ADMM and PINNs, the ADMM-PINNs algorithmic framework in [50] is applicable to a wide range of nonsmooth optimal control and inverse problems. It is notable that the neural networks in the ADMM-PINNs algorithmic framework have to be re-trained at each iteration. ### Organization The rest of this paper is organized as follows. In Section 2, we prove the convergence of the primal-dual method (7)-(9) with the enlarged step sizes (13). In Section 3, we test a parabolic control constrained optimal control problem and validate the efficiency of the accelerated primal-dual method (7)-(9) with (13). In Section 4, we showcase extensions to other optimal control problems by a sparse elliptic optimal control problem. In Section 5, we focus on the implementation of the primal-dual method with operator learning (14), and report some numerical results to validate its efficiency. Finally, some conclusions and perspectives are given in Section 6. ## 2 Convergence analysis of (7)-(9) with (13) In this section, we rigorously prove the convergence for the primal-dual method (7)-(9) with the enlarged step sizes (13). For this purpose, we first show that the primal-dual method (7)-(9) can be equivalently interpreted as a linearized ADMM. We reiterate that the convergence analysis does not depend on the specific form of the solution operator \(S\) and the nonsmooth convex functional \(\theta(u)\) in (1). Hence, the convergence results can be applied to other optimal control problems with PDE constraints in the form of (1). 
### Preliminary In this subsection, we summarize some known results in the literature for the convenience of further analysis. We denote by \((\cdot,\cdot)\) and \(\|\cdot\|\) the canonical \(L^{2}\)-inner product and the associated norm, respectively. Let \(\lambda\in L^{2}(Q)\) be the Lagrange multiplier associated with the constraint \(y=Su\). It is clear that problem (2)-(4) is equivalent to the following saddle point problem: \[\min_{u\in L^{2}(\mathcal{O}),y\in L^{2}(Q)}\max_{\lambda\in L^{2}(Q)}g(u)+f(y )+(\lambda,Su-y). \tag{15}\] Let \((u^{*},y^{*},\lambda^{*})^{\top}\) be the solution of (15). Then, the first-order optimality condition of (15) reads as \[\begin{cases}\partial\theta(u^{*})+\alpha u^{*}+S^{*}\lambda^{*}\ni 0\,,\\ y^{*}-y_{d}-\lambda^{*}=0,\\ -Su^{*}+y^{*}=0,\end{cases} \tag{16}\] which can be rewritten as the following variational inequalities (VIs): \[\begin{cases}\theta(u)-\theta(u^{*})+(u-u^{*},\alpha u^{*}+S^{*}\lambda^{*}) \geq 0,\,\forall u\in L^{2}(\mathcal{O}),\\ (y-y^{*},y^{*}-y_{d}-\lambda^{*})\geq 0,\,\forall y\in L^{2}(Q),\\ (q-\lambda^{*},-Su^{*}+y^{*})\geq 0,\forall q\in L^{2}(Q).\end{cases} \tag{17}\] The following lemma will be used later. Its proof can be found in [4], and thus omitted. **Lemma 2.1** (Moreau's identity).: _Let \(H\) be a Hilbert space and \(\phi:H\to R\cup\{+\infty\}\) a proper, convex and lower semi-continuous extended real-valued functional on \(H\). Let \(\phi^{*}(v):=\sup_{w\in H}(v,w)-\phi(w)\) be the convex conjugate of \(\phi(v)\). Then, for all \(w\in H\), it holds that_ \[w=\arg\min_{v}\{\phi(v)+\frac{1}{2}\|v-w\|^{2}\}+\arg\min_{v}\{\phi^{*}(v)+\frac {1}{2}\|v-w\|^{2}\}. \tag{4}\] For any constant \(s>0\), applying (4) to \(s\phi(v)\), instead of \(\phi(v)\), we have \[w=\arg\min_{v}\{\phi(v)+\frac{1}{2s}\|v-w\|^{2}\}+s\arg\min_{v}\{\phi^{*}(v)+ \frac{s}{2}\|v-\frac{1}{s}w\|^{2}\}. \tag{5}\] ### Equivalence between (7)-(9) and linearized ADMM In this subsection, we show that the primal-dual method (7)-(9) is equivalent to the following linearized ADMM: \[\begin{cases}y^{k+1}=\arg\min_{y\in L^{2}(Q)}\left\{f(y)-\left(y,sSu^{k}\right) +\frac{s}{2}\left\|y-\frac{1}{s}\lambda^{k}\right\|^{2}\right\},\\ u^{k+1}=\arg\min_{u\in L^{2}(\mathcal{O})}\left\{g(u)+\left(\lambda^{k}+s(Su^ {k}-y^{k+1}),Su\right)+\frac{1}{2r}\left\|u-u^{k}\right\|^{2}\right\},\\ \lambda^{k+1}=\lambda^{k}+s(Su^{k+1}-y^{k+1}).\end{cases} \tag{6}\] First, we note that the primal-dual method (7)-(9) can be rewritten as \[u^{k+1}=\arg\min_{u\in L^{2}(\mathcal{O})}\{g(u)+(p^{k},Su)+\frac{1}{2r}\|u-u ^{k}\|^{2}\}, \tag{7}\] \[p^{k+1}=\arg\max_{p\in L^{2}(Q)}\{-f^{*}(p)-\frac{1}{2s}\|p-(p^{k}+s(2Su^{k+1} -Su^{k}))\|^{2}\}. \tag{8}\] Let \(\lambda^{k+1}=p^{k}+sS(u^{k+1}-u^{k})\). Then, (7)-(8) can be written as \[u^{k+1}=\arg\min_{u\in L^{2}(\mathcal{O})}\{g(u)+(p^{k},Su)+\frac{1}{2r}\|u-u ^{k}\|^{2}\}, \tag{9}\] \[\lambda^{k+1}=p^{k}+sS(u^{k+1}-u^{k}), \tag{10}\] \[p^{k+1}=\arg\max_{p\in L^{2}(Q)}\{-f^{*}(p)-\frac{1}{2s}\|p-(\lambda^{k+1}+sSu ^{k+1})\|^{2}\}. \tag{11}\] Next, taking \(w=\lambda^{k+1}+sSu^{k+1}\) and \(\phi=f^{*}(p)\) in (5), we obtain that \[\lambda^{k+1}+sSu^{k+1}= \arg\min_{p\in L^{2}(Q)}\left\{f^{*}(p)-\left(p,Su^{k+1}\right)+ \frac{1}{2s}\left\|p-\lambda^{k+1}\right\|^{2}\right\} \tag{12}\] \[+s\arg\min_{y\in L^{2}(Q)}\left\{f(y)-\left(y,sSu^{k+1}\right)+ \frac{s}{2}\|y-\frac{1}{s}\lambda^{k+1}\|^{2}\right\}.\] Clearly, the first term of the right-hand side is exactly \(p^{k+1}\) obtained by (11). 
Additionally, we introduce \[y^{k+2}=\arg\min_{y\in L^{2}(Q)}\left\{f(y)-\left(y,sSu^{k+1}\right)+\frac{s} {2}\left\|y-\frac{1}{s}\lambda^{k+1}\right\|^{2}\right\}.\] Then, (12) can be rewritten as \(\lambda^{k+1}+sSu^{k+1}=p^{k+1}+sy^{k+2}\), which implies that \(p^{k}=\lambda^{k}+s(Su^{k}-y^{k+1})\). Substituting this result into (10) and (11) to elmiminate \(p^{k}\) and \(p^{k+1}\), we thus have \[u^{k+1}=\arg\min_{u\in L^{2}(\mathcal{O})}\left\{g(u)+\left(\lambda^{k}+s(Su^ {k}-y^{k+1}),Su\right)+\frac{1}{2r}\left\|u-u^{k}\right\|^{2}\right\}, \tag{13}\] \[\lambda^{k+1}=\lambda^{k}+s(Su^{k+1}-y^{k+1}),\] (14) \[y^{k+2}=\arg\min_{y\in L^{2}(Q)}\left\{f(y)-\left(y,sSu^{k+1} \right)+\frac{s}{2}\left\|y-\frac{1}{s}\lambda^{k+1}\right\|^{2}\right\}. \tag{15}\] Swapping the order such that the update of \(y\) comes first, we get the linearized ADMM (6) directly. ### Convergence In this subsection, we prove the convergence of the primal-dual method (1.7)-(1.9) with the enlarged step sizes (1.13) in form of the linearized ADMM (2.6). We first see that, for any \(y\in L^{2}(Q)\) and \(u\in L^{2}(\mathcal{O})\), the iterate \((y^{k+1},u^{k+1},\lambda^{k+1})^{\top}\) generated by the linearized ADMM (2.6) satisfies the following VIs: \[\big{(}y-y^{k+1},y^{k}-y_{d}-s(Su^{k}-y^{k+1})-\lambda^{k}\big{)} \geq 0, \tag{2.17}\] \[\theta(u)-\theta(u^{k+1})+(u-u^{k+1},\alpha u^{k+1}+S^{*}\left( \lambda^{k}+s(Su^{k}-y^{k+1})\right)+\frac{1}{r}(u^{k+1}-u^{k})) \geq 0,\] (2.18) \[\frac{1}{s}(\lambda^{k+1}-\lambda^{k})-(Su^{k+1}-y^{k+1}) =0. \tag{2.16}\] Though (2.6) is no more related to \(p^{k}\), for the convenience of further analysis, we still denote \[p^{k}=\lambda^{k}+s(Su^{k}-y^{k+1}). \tag{2.19}\] Substituting it into (2.16)-(2.18) yields \[\left\{\begin{aligned} &\big{(}y-y^{k+1},y^{k+1}-y_{d}-p^{k} \big{)}\geq 0,\forall y\in L^{2}(Q),\\ &\theta(u)-\theta(u^{k+1})+\bigg{(}u-u^{k+1},\alpha u^{k+1}+S^{* }p^{k}+\frac{1}{r}(u^{k+1}-u^{k})\bigg{)}\geq 0,\forall u\in L^{2}(\mathcal{O}),\\ &\bigg{(}\lambda-p^{k},\frac{1}{s}(\lambda^{k+1}-\lambda^{k})-( Su^{k+1}-y^{k+1})\bigg{)}\geq 0,\forall\lambda\in L^{2}(Q).\end{aligned}\right. \tag{2.20}\] Next, we present some useful lemmas. **Lemma 2.2**: _Let \(\{(u^{k},y^{k},\lambda^{k})^{\top}\}\) be the sequence generated by the linearized ADMM (2.6) and \((u^{*},y^{*},\lambda^{*})^{\top}\) the solution of (2.1). We have_ \[(S(u^{k+1}-u^{k}),\lambda^{k+1}-\lambda^{k})\geq\frac{1}{r}(u^{k+1}-u^{*},u^{k +1}-u^{k})+\frac{1}{s}(\lambda^{k+1}-\lambda^{*},\lambda^{k+1}-\lambda^{k})+ \alpha\|u^{k+1}-u^{*}\|^{2}. \tag{2.21}\] _Proof._ First, taking \((y,u,\lambda)^{\top}=(y^{k+1},u^{k+1},p^{k})^{\top}\) in (2.3) and \((y,u,\lambda)^{\top}=(y^{*},u^{*},\lambda^{*})^{\top}\) in (2.20), respectively, and adding them together, we get \[\left\{\begin{aligned} &\big{(}y^{*}-y^{k+1},-p^{k}+\lambda^{*} \big{)}\geq 0,\\ &\bigg{(}u^{*}-u^{k+1},\alpha u^{k+1}-\alpha u^{*}+S^{*}(p^{k}- \lambda^{*})+\frac{1}{r}(u^{k+1}-u^{k})\bigg{)}\geq 0,\\ &\bigg{(}\lambda^{*}-p^{k},\frac{1}{s}(\lambda^{k+1}-\lambda^{k} )-S(u^{k+1}-u^{*})+(y^{k+1}-y^{*})\bigg{)}\geq 0.\end{aligned}\right.\] Adding the above three inequalities together, we have \[\frac{1}{r}(u^{*}-u^{k+1},u^{k+1}-u^{k})+\frac{1}{s}(\lambda^{*}-p^{k},\lambda ^{k+1}-\lambda^{k})\geq\alpha\|u^{k+1}-u^{*}\|^{2}. \tag{2.22}\] From (2.14) and (2.19), we have \(p^{k}=\lambda^{k+1}-sS(u^{k+1}-u^{k})\). Then, the desired result (2.21) follows from (2.22) directly. 
**Lemma 2.3**: _Let \(\{(u^{k},y^{k},\lambda^{k})^{\top}\}\) be the sequence generated by the linearized ADMM (2.6), and \((u^{*},y^{*},\lambda^{*})^{\top}\) the solution point of (2.1). Then, we have_ \[\begin{split}& 2(S(u^{k+1}-u^{k}),\lambda^{k+1}-\lambda^{k}) \\ \geq&[(\frac{1}{r}+\alpha)\|u^{k+1}-u^{*}\|^{2}+ \frac{1}{s}\|\lambda^{k+1}-\lambda^{*}\|^{2}]-[(\frac{1}{r}+\alpha)\|u^{k}-u^{*} \|^{2}+\frac{1}{s}\|\lambda^{k}-\lambda^{*}\|^{2}]\\ &+[(\frac{1}{r}+\frac{\alpha}{2})\|u^{k+1}-u^{k}\|^{2})+\frac{1} {s}\|\lambda^{k+1}-\lambda^{k}\|^{2})].\end{split} \tag{2.23}\] Proof.: We first note that \[2(u^{k+1}-u^{*},u^{k+1}-u^{k})=(\|u^{k+1}-u^{*}\|^{2}-\|u^{k}-u^{*}\|^{2})+\|u^{k+1 }-u^{k}\|^{2}, \tag{24}\] and \[2(\lambda^{k+1}-\lambda^{*},\lambda^{k+1}-\lambda^{k})=(\|\lambda^{k+1}- \lambda^{*}\|^{2}-\|\lambda^{k}-\lambda^{*}\|^{2})+\|\lambda^{k+1}-\lambda^{k} \|^{2}. \tag{25}\] By the Cauchy-Schwarz inequality, we have \[2\|u^{k+1}-u^{*}\|^{2}= (\|u^{k+1}-u^{*}\|^{2}-\|u^{k}-u^{*}\|^{2})+(\|u^{k+1}-u^{*}\|^{2} +\|u^{k}-u^{*}\|^{2})\] \[\geq(\|u^{k+1}-u^{*}\|^{2}-\|u^{k}-u^{*}\|^{2})+\frac{1}{2}\|u^{k+ 1}-u^{k}\|^{2}. \tag{26}\] Substituting the inequalities (24), (25) and (26) into (21), we obtain the desired result (23) directly. For convenience, we introduce the notations \[D=\frac{1}{r}I-\frac{3s}{4+2\alpha r}S^{*}S,\text{ and }\sigma=\frac{2+4 \alpha r}{2+\alpha r}s. \tag{27}\] It is easy to verify that \(D\) is self-adjoint and positive definite under the condition (13). Moreover, it holds that \[\frac{1}{r}I-sS^{*}S=D-\frac{1}{4}\sigma S^{*}S. \tag{28}\] With the above result, we can obtain the following estimate. **Lemma 2.4**.: _Let \(\{(u^{k},y^{k},\lambda^{k})^{\top}\}\) be the sequence generated by the linearized ADMM (6). We have that_ \[-(S(u^{k+1}-u^{k}), \lambda^{k+1}-\lambda^{k})\geq\left[\frac{1}{2}\|u^{k+1}-u^{k}\|_ {D}^{2}+\frac{1}{8}\sigma\|S(u^{k+1}-u^{k})\|^{2}\right]+\alpha\|u^{k+1}-u^{k} \|^{2}\] \[-\left[\frac{1}{2}\|u^{k}-u^{k-1}\|_{D}^{2}+\frac{1}{8}\sigma\|S( u^{k}-u^{k-1})\|^{2}\right]-\frac{1}{2}\sigma\|S(u^{k+1}-u^{k})\|^{2}. \tag{29}\] Proof.: Substituting (18) into (17) to eliminate \(y^{k+1}\), we have \[\theta(u)-\theta(u^{k+1})+\left(u-u^{k+1},\alpha u^{k+1}+S^{*}\lambda^{k+1}+( \frac{1}{r}I-sS^{*}S)(u^{k+1}-u^{k})\right)\geq 0,\forall u\in L^{2}(\mathcal{O}). \tag{30}\] We relabel the superscript \(k+1\) as \(k\) in the above VI and obtain \[\theta(u)-\theta(u^{k})+\left(u-u^{k},\alpha u^{k}+S^{*}\lambda^{k}+(\frac{1}{ r}I-sS^{*}S)(u^{k}-u^{k-1})\right)\geq 0,\forall u\in L^{2}(\mathcal{O}). \tag{31}\] Taking \(u=u^{k}\) in (30) and \(u=u^{k+1}\) in (31), and adding the resulting two inequalities, we obtain \[-\left(S(u^{k+1}-u^{k}),\lambda^{k+1}-\lambda^{k}\right)\] \[\geq \left(u^{k+1}-u^{k},(\frac{1}{r}I-sS^{*}S)\left[(u^{k+1}-u^{k})-( u^{k}-u^{k-1})\right]\right)+\alpha\|u^{k+1}-u^{k}\|^{2}\] \[\stackrel{{\eqref{eq:2.28}}}{{=}} \left(u^{k+1}-u^{k},(D-\frac{1}{4}\sigma S^{*}S)\left[(u^{k+1}-u^{k })-(u^{k}-u^{k-1})\right]\right)+\alpha\|u^{k+1}-u^{k}\|^{2}\] \[= \|u^{k+1}-u^{k}\|_{D}^{2}-(u^{k+1}-u^{k},D(u^{k}-u^{k-1}))-\frac{ 1}{4}\sigma\|S(u^{k+1}-u^{k})\|^{2}\] \[\qquad+\frac{1}{4}\sigma\left(S(u^{k+1}-u^{k}),S(u^{k}-u^{k-1}) \right)+\alpha\|u^{k+1}-u^{k}\|^{2}. 
\tag{32}\] Applying the Cauchy-Schwarz inequality to (32), we have \[-\left(S(u^{k+1}-u^{k}),\lambda^{k+1}-\lambda^{k}\right)\geq\ \frac{1}{2}\|u^{k+1}-u^{k}\|_{D}^{2}-\frac{1}{2}\|u^{k}-u^{k-1}\|_{D}^{2}\] \[\ \ With the help of preceding lemmas, we now prove the strong global convergence of the linearized ADMM (6) under the condition (13). Let \(\{(u^{k},y^{k},\lambda^{k})^{\top}\}\) be the sequence generated by the linearized ADMM (6), and \((u^{*},y^{*},\lambda^{*})^{\top}\) the solution point of (1). If \(r\) and \(s\) satisfy the condition (13), then \(\{u^{k}\}\) converges to \(u^{*}\) strongly in \(L^{2}(\mathcal{O})\), \(\{y^{k}\}\) converges to \(y^{*}\) strongly in \(L^{2}(Q)\), and \(\lambda^{k}\) converges to \(\lambda^{*}\) strongly in \(L^{2}(Q)\). Summarizing the inequality (36) from \(k=1\) to \(k=\infty\), we have that \[\delta\sum_{k=1}^{\infty}(\|u^{k+1}-u^{k}\|^{2}+\|\lambda^{k+1}-\lambda^{k}\|^ {2})\leq(\frac{1}{r}+\alpha)\|u^{1}-u^{*}\|^{2}+\frac{1}{s}\|\lambda^{1}- \lambda^{*}\|^{2}+\frac{1}{2}\|u^{1}-u^{0}\|_{D}^{2}+\frac{1}{8}\sigma\|S(u^{ 1}-u^{0})\|^{2}<+\infty.\] As a result, we have \(\|u^{k+1}-u^{k}\|\to 0\) and \(\|\lambda^{k+1}-\lambda^{k}\|\to 0\), and \(\{u^{k}\}\) and \(\{\lambda^{k}\}\) are bounded in \(L^{2}(\mathcal{O})\) and \(L^{2}(Q)\), respectively. Recall (21). We have \[(S(u^{k+1}-u^{k}),\lambda^{k+1}-\lambda^{k})\geq \frac{1}{r}(u^{k+1}-u^{*},u^{k+1}-u^{k})+\frac{1}{s}(\lambda^{k+1 }-\lambda^{*},\lambda^{k+1}-\lambda^{k})+\alpha\|u^{*}-u^{k+1}\|^{2},\] which implies that \[\alpha\|u^{*}-u^{k+1}\|^{2}\leq\|S\|\|u^{k+1}-u^{k}\|\|\lambda^{k+1}-\lambda^{ k}\|+\frac{1}{r}\|u^{k+1}-u^{*}\|\|u^{k+1}-u^{k}\|+\frac{1}{s}\|\lambda^{k+1}- \lambda^{*}\|\|\lambda^{k+1}-\lambda^{k}\|.\] Since the solution operator \(S\) and the iterates \(\lambda^{k}\) and \(u^{k}\) are bounded, it follows from \(\|u^{k+1}-u^{k}\|\to 0\) and \(\|\lambda^{k+1}-\lambda^{k}\|\to 0\) that \(u^{k}\to u^{*}\) strongly in \(L^{2}(\mathcal{O})\). It follows from the continuity of the operator \(S\) that \(Su^{k}\to Su^{*}\) strongly in \(L^{2}(\mathcal{O})\). Additionally, the fact \(\|\lambda^{k+1}-\lambda^{k}\|\to 0\) implies that \(\|Su^{k+1}-y^{k+1}\|\to 0\), and hence \(y^{k}\to y^{*}\) strongly in \(L^{2}(Q)\). Concerning with the convergence of \(\lambda^{k}\), we note that \(\lambda^{*}=y^{*}-y_{d}\) (see (2)) and it follows from the optimality condition of the \(y\)-subproblem in (6) that \(\lambda^{k}=-sSu^{k}+y^{k+1}-y_{d}+sy^{k+1}.\) We thus have that \[\|\lambda^{k}-\lambda^{*}\|=\|-s(Su^{k}-y^{k})+(y^{k+1}-y^{*})\|\leq s\|Su^{k }-y^{k}\|+\|y^{k+1}-y^{*}\|.\] Since \(\|Su^{k}-y^{k}\|\to 0\) and \(\|y^{k+1}-y^{*}\|\to 0\), we conclude that \(\lambda^{k}\to\lambda^{*}\) strongly in \(L^{2}(Q)\). ## 3 Numerical results In this section, we solve a parabolic control constrained optimal control problem to validate the acceleration effectiveness of the primal-dual method (7)-(9) with the enlarged step sizes (13). For the numerical discretization for all experiments, we employ the backward Euler finite difference scheme (with step size \(\tau\)) for the time discretization and the piecewise linear finite element method (with mesh size \(h\)) for the space discretization, respectively. Our codes were written in MATLAB R2020b and numerical experiments were conducted on a MacBook Pro with mac OS Monterey, Intel(R) Core(TM) i7-9570h (2.60 GHz), and 16 GB RAM. 
We consider the following example: \[\min_{u\in L^{2}(Q),y\in L^{2}(Q)}\quad\frac{1}{2}\|y-y_{d}\|_{L^{2}(Q)}^{2}+ \frac{\alpha}{2}\|u\|_{L^{2}(Q)}^{2}+\theta(u), \tag{14}\] where \(y\) and \(u\) satisfy the following parabolic equation: \[\frac{\partial y}{\partial t}-\Delta y=f+u\text{ in }\Omega\times(0,T),\quad y=0 \text{ on }\Gamma\times(0,T),\quad y(0)=\varphi. \tag{15}\] Above, \(\varphi\in L^{2}(\Omega)\), the function \(f\in L^{2}(Q)\) is a source term that helps us construct the exact solution without affection to the numerical implementation. The nonsmooth term \(\theta(u)\) is the indicator function of the admissible set (4). We set \(\Omega=(0,1)^{2}\), \(\omega=\Omega\), \(T=1\), \(a=-0.5\), \(b=0.5\) and \[y=(1-t)\sin\pi x_{1}\sin\pi x_{2},\ q=\alpha(1-t)\sin 2\pi x_{1} \sin 2\pi x_{2},\ \varphi=\sin\pi x_{1}\sin\pi x_{2},\] \[f=-u+\frac{dy}{dt}-\Delta y,\ y_{d}=y+\frac{dq}{dt}+\Delta q,\ u= \min(-0.5,\max(0.5,-q/\alpha)).\] Then, it is easy to verify that \((u^{*},y^{*})^{\top}=(u,y)^{\top}\) is the optimal solution of (14). The problem (14) has been discussed in, e.g. [1, 19]. For the purpose of numerical comparison, we also test the accelerated primal-dual (APD) method in [8] and the inexact ADMM (InADMM) method in [19]. Numerical implementations of the InADMM follow all the settings in [19], including the parameters settings, the solvers for the subproblems, and the stopping criteria. All algorithms to be tested are summarized below. 1. PD-C: The primal-dual method (1.7)-(1.9) with the original convergence condition (1.12); 2. PD-I: The primal-dual method (1.7)-(1.9) with the enlarged step sizes (1.13); 3. APD(\(k\)): The accelerated primal-dual method in [8], which adjusts the parameters every \(k\) iterations; 4. InADMM: The inexact ADMM in [19] with CG inner iterations. The initial values for all primal-dual methods are set as \((u^{0},p^{0})^{\top}=(0,0)^{\top}\). For a prescribed tolerance \(tol>0\), we terminate the iterations if \[\max\Big{\{}\frac{\|u^{k+1}-u^{k}\|_{L^{2}(\mathcal{O})}}{\max\{1,\|u^{k}\|_{ L^{2}(\mathcal{O})}\}},\frac{\|p^{k+1}-p^{k}\|_{L^{2}(Q)}}{\max\{1,\|p^{k}\|_{L^{2}( Q)}\}}\Big{\}}\leq tol. \tag{3.3}\] Recall that the upper bound of step sizes \(1/\|S\|^{2}\) is enlarged by the factor \(\frac{4+2\alpha r}{3}\). It is clear that the choice of \(\alpha\) affects the value of \(\frac{4+2\alpha r}{3}\) and thus has a further impact on the performance of PD-I. Intuitively, a relatively large \(\alpha\) always leads to a large \(\frac{4+2\alpha r}{3}\) and hence more likely improves the numerical efficiency. To validate this fact, we consider two different cases for problem (3.1) in terms of the value of \(\alpha\) in the following discussion. **Case I: \(\alpha=10^{-3}\).** Concerning with the choices of \(r\) and \(s\) in all primal-dual methods, we note that, after the space-time discretization, one can estimate that \(\|S\|=\|S^{*}\|\approx 0.05\) and this value is not affected by the mesh sizes \(\tau\) and \(h\). According to (1.12), \(r\) and \(s\) should be chosen such that \(r\cdot s<1/\|S^{*}S\|\approx 400\). Here, we choose \(r=4\times 10^{3}\) and \(s=1\times 10^{-1}\) for PD-C. In addition, it follows from (1.13) that the upper bound of \(r\cdot s\) can be enlarged by \(\frac{4+2\alpha r}{3}=4\). We thus choose \(r=4\times 10^{3}\) and \(s=4\times 10^{-1}\) for PD-I. The parameters for all test algorithms are summarized in Table 3. The numerical results with \(\tau=h=1/2^{6}\) and \(tol=10^{-5}\) are summarized in Table 3. 
We observe that PD-C is slower than InADMM, while PD-I is comparable to InADMM in terms of the total computational cost. For APD, it is remarkable that implementing the adaptive step size selection strategy at every iteration is not efficient and it should be deliberately determined in practice, which is validated by the fact that APD(5) converges much faster than APD(1). We see that PD-I is more efficient than PD-C, APD(1), and APD(5). In particular, PD-I is 3 times faster than PD-C, which implies the superiority of the improved condition (1.13) to the original one (1.12). **Case II: \(\alpha=10^{-5}\).** The parameters for all primal-dual methods are summarized in Table 3. The numerical results with \(\tau=h=1/2^{6}\) and \(tol=10^{-5}\) are presented in Table 3.4 and Figure 3.2. We \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Algorithm & \multicolumn{3}{c|}{Parameters} \\ \hline PD-C & & \(r=4\times 10^{3},s=1\times 10^{-1}\) \\ \hline PD-I & & \(r=4\times 10^{3},s=4\times 10^{-1}\) \\ \hline APD(k) & \(r_{0}=1\times 10^{3},s_{0}=4\times 10^{-1};\forall k\geq 0,\tau_{k}=\frac{1}{ \sqrt{1+s_{k}}},r_{k+1}=\frac{r_{k}}{\tau_{k}};s_{k+1}=s_{k}\tau_{k}\) \\ \hline \end{tabular} \end{table} Table 3: Parameters for all primal-dual algorithms for Case I of Example 1 observe from Table 4 that all primal-dual methods require less CPU time than that of InADMM. More specifically, although InADMM requires only 22 outer iterations, a total of 264 PDEs are required to be solved to promote the convergence. Compared with PD-C, PD-I can improve the numerical efficiency by 19.7% and it is even faster than APD. Here, we set \(\alpha=10^{-5}\), which leads to the value of \(\frac{4+2\alpha r}{3}\) relatively small, and hence compared with Case I, less numerical efficiency is improved by PD-I. Next, we recall that both PD-C and PD-I are described on the continuous level and their convergence are analyzed in function spaces. Hence, mesh-independent property of these algorithms can be expected in practice, which means that the convergence behavior is independent of the fineness of the discretization. We test PD-C and PD-I with \(\alpha=10^{-3}\) and \(\tau=h=1/2^{i},i=4,\cdots,9\), and report the iteration numbers in Table 5, from which mesh-independent properties of PD-C and PD-I can be observed. Finally, in Table 6, we report the \(L^{2}\)-error for the iterate (\(u\), \(y\)) obtained by PD-I for various values of \(h\) and \(\tau\). For succinctness, we only give the results for the case where \(\alpha=10^{-5}\) and \(tol=10^{-5}\). It is \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Algorithm & \(Iter\) & No. PDEs & \(CPU\) & \(Obj\) & \(\|u^{k}-u^{*}\|\) & \(\|y^{k}-y^{*}\|\) \\ \hline PD-C & 122 & 244 & 7.2705 & \(3.5033\times 10^{-7}\) & \(4.6747\times 10^{-3}\) & \(8.6408\times 10^{-6}\) \\ \hline PD-I & 98 & 196 & 5.8277 & \(3.5034\times 10^{-7}\) & \(4.6715\times 10^{-3}\) & \(8.6379\times 10^{-6}\) \\ \hline APD(1) & 118 & 236 & 7.0588 & \(3.5030\times 10^{-7}\) & \(4.6731\times 10^{-3}\) & \(8.5595\times 10^{-6}\) \\ \hline APD(5) & 106 & 212 & 6.3541 & \(3.5036\times 10^{-7}\) & \(4.6698\times 10^{-3}\) & \(8.6628\times 10^{-6}\) \\ \hline InADMM & 22 & 264 & 10.8675 & \(3.5035\times 10^{-7}\) & \(4.6767\times 10^{-3}\) & \(8.6088\times 10^{-6}\) \\ \hline \end{tabular} \end{table} Table 4: Numerical comparisons for Case II of Example 1. (\(\alpha=10^{-5},\tau=h=1/2^{6},tol=10^{-5}\)) Figure 3: Numerical results at \(t=0.25\) for Case I of Example 1. 
(\(\alpha=10^{-5},\tau=h=1/2^{6},tol=10^{-5}\)) \begin{table} \begin{tabular}{|c|c|} \hline Algorithm & Parameters \\ \hline PD-C & \(r=4\times 10^{3},s=1\times 10^{-1}\) \\ \hline PD-I & \(r=5.6\times 10^{3},s=1\times 10^{-1}\) \\ \hline APD(k) & \(r_{0}=4\times 10^{3},s_{0}=1\times 10^{-1};\forall k\geq 0,\tau_{k}=\frac{1}{ \sqrt{1+s_{k}}},r_{k+1}=\frac{r_{k}}{T_{k}};s_{k+1}=s_{k}\tau_{k}\) \\ \hline \end{tabular} \end{table} Table 3: Parameters for all primal-dual algorithms for Case II of Example 1. clear from Table 6 that, when PD-I is applied to the problem (13), the iterative accuracy is sufficient and the overall error of \(u\) and \(y\) are both dominated by the discretization error. ## 4 Extension: A sparse elliptic optimal control problem In previous sections, we focus on the parabolic optimal control problem (2)-(4) to expose our main ideas more clearly. As mentioned in the introduction, various optimal control problems can be covered by the model (1) and all previous discussions can be easily extended to them. In this section, we showcase by a sparse elliptic optimal control problem to delineate how to extend the primal-dual method (7)-(9) with the enlarged step sizes (13) to other optimal control problems. Some notations and discussions analogous to previous ones are not repeated for succinctness. Let us consider the following sparse elliptic optimal control problem: \[\min_{u\in L^{2}(\mathcal{O}),y\in H^{1}_{0}(\Omega)}\;J(y,u)=\frac{1}{2}\|y-y _{d}\|^{2}_{L^{2}(\Omega)}+\frac{\alpha}{2}\|u\|^{2}_{L^{2}(\Omega)}+\mu\|u\| _{L^{1}(\Omega)}+I_{U_{ad}}(u), \tag{14}\] where \(y\) and \(u\) satisfy the following state equation: \[-\Delta y=u\;\text{in}\;\Omega,\quad y=0\;\text{on}\;\Gamma. \tag{15}\] In (14)-(15), \(\Omega\subset\mathbb{R}^{d}(d\geq 1)\) is a convex polyhedral domain with boundary \(\Gamma:=\partial\Omega\), \(y_{d}\in L^{2}(\Omega)\) is a given target, and the constants \(\alpha>0\) and \(\mu>0\) are regularization parameters. We denote by \(I_{U_{ad}}(\cdot)\) the indicator function of the admissible set \(U_{ad}:=\{u\in L^{\infty}(\Omega)|a\leq u(x)\leq b,\text{ a.e. in }\Omega\}\subset L^{2}(\Omega)\), where \(a,b\in L^{2}(\Omega)\) with \(a<0<b\) almost everywhere. Due to the presence of the nonsmooth \(L^{1}\)-regularization term, the optimal control of (14) has small support [51, 58]. Because of this special structural property, such problems capture important applications in various fields such as optimal actuator placement [51] and impulse control [9]. ### Primal-dual method for (14)-(15) Similar to what we have done for (2)-(4), implementing the primal-dual method in [8] to (14)-(15) yields the following iterative scheme: \[u^{k+1}=P_{U_{ad}}\left(\mathbb{S}_{\frac{\mu\tau}{\alpha\tau+1}}\left(\frac {u^{k}-rS^{*}p^{k}}{\alpha\tau+1}\right)\right),\quad p^{k+1}=\left(S(2u^{k+1} -u^{k})+\frac{1}{s}p^{k}-y_{d}\right)/(1+\frac{1}{s}), \tag{16}\] where \(S:L^{2}(\Omega)\to L^{2}(\Omega)\) such that \(y=Su\) is the solution operator associated with the elliptic state equation (15), \(S^{*}:L^{2}(\Omega)\to L^{2}(\Omega)\) is the adjoint operator of \(S\), \(P_{U_{ad}}(\cdot)\) denotes the projection onto the admissible set \(U_{ad}\), namely, \(P_{U_{ad}}(v)(x):=\max\{a,\min\{v(x),b\}\}\) a.e. in \(\Omega\), \(\forall v\in L^{2}(\Omega)\), and \(\mathbb{S}\) is the Shrinkage operator defined by \[\mathbb{S}_{\zeta}(v)(x)=\text{sgn}(v(x))(|v(x)|-\zeta)_{+}\;\text{a.e. 
in}\;\; \Omega,\] where \(\zeta\) a positive constant, "sgn" is the sign function, and \((\cdot)_{+}\) denotes the positive part. Under the condition (12) or (13), both the PD-C and PD-I can be proposed for problem (14)-(15), and convergence analysis can simply follow the results in Section 2 directly. At each iteration of (16), the main computation consists of solving the state equation (15) to compute \(S(2u^{k+1}-u^{k})\), and the adjoint equation \(-\Delta q^{k}=p^{k}\) in \(\Omega,\;q=0\) on \(\Gamma\), to compute \(q^{k}=S^{*}p^{k}\). \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Mesh size & \(1/2^{4}\) & \(1/2^{5}\) & \(1/2^{6}\) & \(1/2^{7}\) & \(1/2^{8}\) & \(1/2^{9}\) \\ \hline PD-C & 73 & 73 & 73 & 73 & 73 & 73 \\ \hline PD-I & 25 & 25 & 24 & 24 & 23 & 23 \\ \hline \end{tabular} \end{table} Table 6: Numerical errors of PD-I for Example 1. (\(\alpha=10^{-3},tol=10^{-5}\)) \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline error & \(h=\tau=2^{-5}\) & \(h=\tau=2^{-6}\) & \(h=\tau=2^{-7}\) & \(h=\tau=2^{-8}\) \\ \hline \(\|u-u^{*}\|_{L^{2}(\Omega)}\) & \(1.8404\times 10^{-2}\) & \(4.6715\times 10^{-3}\) & \(1.1815\times 10^{-3}\) & \(3.2924\times 10^{-4}\) \\ \hline \(\|y-y^{*}\|_{L^{2}(\Omega)}\) & \(3.6458\times 10^{-5}\) & \(8.6370\times 10^{-6}\) & \(2.1690\times 10^{-6}\) & \(5.8059\times 10^{-7}\) \\ \hline \end{tabular} \end{table} Table 5: Iteration numbers w.r.t different mesh sizes for Example 1. ### Numerical results In this subsection, we report some numerical results to validate the efficiency of the primal-dual method (4.3) for solving (4.1)-(4.2). **Example 2.** We consider the example given in [51]. To be concrete, we set \(\Omega=(0,1)\times(0,1)\), \(a=-30\), \(b=30\), and \(y_{d}=\frac{1}{6}e^{2x_{1}}\sin(2\pi x_{1})\sin(2\pi x_{2})\) in (4.1)-(4.2). In all numerical experiments, the numerical discretization is implemented by the finite element method described in [58]. We test the PD-C and PD-I for two cases in terms of the choice of \(\alpha\). The initial value is set as \((u^{0},p^{0})^{\top}=(0,0)^{\top}\), and all algorithms are terminated if (3.3) holds with \(tol=10^{-5}\). First, we set \(\mu=5\times 10^{-3}\) and \(\alpha=1\times 10^{-3}\). The parameters are selected as those listed in Table 1. We summarize the numerical results in Table 1. It is clear that all algorithms are robust to the mesh size, and mesh-independent convergence can be observed. The PD-I improves the numerical efficiency significantly. The numerical results \(u\) and \(y\) obtained by PD-I with \(h=1/2^{6}\) are reported in Figure 1. As expected, we note that \(u=0\) on a relatively large part of \(\Omega\) due to the presence of the regularization term \(\mu\|u\|_{L^{1}(\Omega)}\). Second, we still set \(\mu=5\times 10^{-3}\) but \(\alpha=1\times 10^{-5}\). For this case, the parameters are selected as those listed in Table 3. The numerical results of all test algorithms with respect to different mesh sizes are presented in Table 2. We observe from these results that the performance of all algorithms are robust to the mesh sizes, while PD-I improves the numerical efficiency of PD-C sharply. The numerical results \(u\) and \(y\) obtained by PD-I with \(h=1/2^{6}\) are reported in Figure 1. Next, we study the effectiveness of \(\mu\) on the performance of PD-C and PD-I. For this purpose, we implement both of them to Example 3 with different \(\mu\) and \(\alpha=10^{-3}\). The results are reported in Table 3 and the computed optimal controls are depicted in Figure 2. 
These results indicate that all algorithms are robust to the values of \(\mu\). Moreover, it was shown in [51] that, as \(\mu\) increases, the size of the nonzero region of \(u\) decreases, and when \(\mu\) is sufficiently large, \(u\) is zero on the whole \(\Omega\). From Figure 2, it is easy to see that the nonzero part of \(u\) decreases as \(\mu\) increases, which coincides with the results in [51]. notation. Then, the central concern is constructing two surrogates \(y=\mathcal{S}_{\theta_{*}}(u)\) and \(q=\mathcal{S}_{\theta_{a}}(p)\) for the state equation \(y=Su\) and its adjoint \(q=S^{*}q\), respectively, by some operator learning methods. We first elaborate on the main ideas of operator learning. For this purpose, let \(G\) be the solution operator of a generic PDE defined on the domain \(\mathcal{D}\) which takes an input function \(u\), and \(y=G(u)\) be the solution of the PDE. Operator learning aims at approximating \(G\) with a neural network \(\mathcal{G}_{\theta}\). For any point \(z\in\mathcal{D}\), \(G(u)(z)\) is a real number (the value of \(y\) at \(z\)). The neural network \(\mathcal{G}_{\theta}\) takes inputs consisting of \(u\) and \(z\), and outputs the value \(\mathcal{G}_{\theta}(u)(z)\). Suppose that we have a data set \(\{G(u_{i})(z_{j})\}_{1\leq i\leq N_{1},1\leq j\leq N_{2}}\) of different input functions \(\{u_{i}\}\) and points \(\{z_{j}\}\). Then, the neural network is trained by solving \[\theta^{*}=\min_{\theta}\|\mathcal{G}_{\theta}(u)(z)-G(u)(z)\|_{L^{2}( \mathcal{D})}^{2}.\] Operator learning provides a surrogate model \(y=\mathcal{G}_{\theta^{*}}(u)\) for \(y=G(u)\). Several operator learning methods, such as the DeepONets [37], the physics-informed DeepONets [60], the FNO [32], the GNO [33], and the LNO [7], have been recently proposed in the PDE literature. In the following discussion, to expose our main ideas clearly, we focus on the DeepONets [60] to elaborate on the implementation of (14). Other operator learning methods can be applied in a similar way. Figure 1: Workflow of the DeepONets. Source: the figure is adapted from [37]. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{\(\mu\)} & \multicolumn{2}{c|}{PD-C} & \multicolumn{2}{c|}{PD-I} \\ \cline{2-5} & Iter & \(\|y-y_{d}\|_{L^{2}(\Omega)}\) & Iter & \(\|y-y_{d}\|_{L^{2}(\Omega)}\) \\ \hline 0 & 87 & \(2.4963\times 10^{-1}\) & 30 & \(2.4963\times 10^{-1}\) \\ \hline \(5\times 10^{-4}\) & 88 & \(2.5356\times 10^{-1}\) & 31 & \(2.5356\times 10^{-1}\) \\ \hline \(3\times 10^{-3}\) & 93 & \(2.7034\times 10^{-1}\) & 32 & \(2.7034\times 10^{-1}\) \\ \hline \(2\times 10^{-2}\) & 97 & \(2.9018\times 10^{-1}\) & 32 & \(2.9018\times 10^{-1}\) \\ \hline \end{tabular} \end{table} Table 3: Numerical comparisons w.r.t different \(\mu\) for Example 2 when \(\alpha=10^{-3}\). ### Primal-dual method with DeepONets for (2)-(4) The DeepONets [37] provide a specialized deep learning framework to learn PDE solution operators. For the convenience of readers, we give a brief overview of the DeepONets, with a special focus on learning the solution operator of the state equation (3). Typically, as shown in Figure 1, the DeepONets architecture consists of two separate neural networks referred to as the "branch" and "trunk" networks, respectively. Then the DeepONets can be expressed as \[\mathcal{G}_{\theta}(u)(z)=\sum_{i=1}^{n}b_{i}(u)t_{i}(z)+b_{0}.\] Above, \(\theta\) denotes the collection of all trainable weight and bias parameters in the branch and trunk networks. 
The vector \((b_{1}(u),\cdots,b_{n}(u))^{\top}\) is the output of the branch network with the input \(\{u(x_{j})\}_{j=1}^{m}\). For each input function \(u\), \(\{u(x_{j})\}_{j=1}^{m}\) represent the evaluations at the fixed scattered sensors \(\{x_{j}\}_{j=1}^{m}\subset\Omega\times(0,T)\). The vector \((t_{1}(z),\cdots,t_{n}(z))^{\top}\) is the output of the trunk network with the continuous co-ordinates \(z\in\Omega\times(0,T)\) as inputs; and \(b_{0}\in\mathbb{R}\) is a trainable bias. Different from the fixed locations \(\{x_{j}\}_{j=1}^{m}\subset\Omega\times(0,T)\), the coordinates \(z\in\Omega\times(0,T)\) may vary for different \(u\). The final output of the DeepONets is obtained by merging the outputs of the branch and trunk networks via an inner product. The stacked DeepONet has one trunk network and \(n\) stacked branch networks, while the unstacked DeepONet has one trunk network and one branch network. Furthermore, we note that the DeepONets do not restrict the branch and trunk nets to any specific architecture. As \(z\in\Omega\times(0,T)\) is usually low dimensional, a standard fully-connect neural network (FNN) is commonly used as the trunk net. The choice of the branch net depends on the structure of the input function \(u\), and it can be chosen as a FNN, residual neural network, convolutional neural network (CNN), or a graph neural network (GNN). For instance, if \(\{u(x_{j})\}_{j=1}^{m}\) is defined on a two-dimensional equispaced grid, then a CNN can be used; if \(\{u(x_{j})\}_{j=1}^{m}\) is given on an unstructured mesh, then a GNN can be used. We refer to [37, 39] for more discussions on the DeepONets. Given a data set of different input functions \(\{u_{i}\}\) and points \(\{z_{j}\}\): \(\{G(u_{i})(z_{j})\}_{1\leq i\leq N_{1},1\leq j\leq N_{2}}\), we train the DeepONets by solving \[\min_{\theta}\mathcal{L}(\theta):=\frac{1}{N_{1}N_{2}}\sum_{i=1}^{N_{1}}\sum_{ j=1}^{N_{2}}\|\mathcal{G}_{\theta}(u_{i})(z_{j})-G(u_{i})(z_{j})\|_{L^{2}( \Omega\times(0,T))}^{2}. \tag{10}\] Note that one data point is a triplet \((u_{i};z_{j};G(u_{i})(z_{j}))\), and thus one specific input function \(u_{i}\) may appear in multiple data points with different \(z_{j}\). For example, a dataset of size \(400\) can be generated from \(20\)\(u\) trajectories, and each evaluates \(G(u)(z)\) for \(20\)\(z\) locations. Using the DeepONets, we can obtain a surrogate model \(y=\mathcal{S}_{\theta_{s}^{*}}(u)\) for the state equation \(y=Su\), where \(\theta_{s}^{*}\) is obtained by solving (10) with \(G\) replaced by the solution operator \(S\). Similarly, we can also obtain a surrogate model \(q=\mathcal{S}_{\theta_{s}^{*}}(p)\) for the adjoint equation \(q=S^{*}p\). We thus specify (14) as the following primal-dual method with DeepONets: \[u^{k+1}=P_{U_{ad}}\left(-\frac{\mathcal{S}_{\theta_{s}^{*}}(p^{k})-\frac{1}{r} u^{k}}{\alpha+\frac{1}{r}}\right),\quad p^{k+1}=(\mathcal{S}_{\theta_{s}^{*}}(2u^{k +1}-u^{k})+\frac{1}{s}p^{k}-y_{d})/(1+\frac{1}{s}). \tag{11}\] Clearly, with the pre-trained surrogate models \(y=\mathcal{S}_{\theta_{s}^{*}}(u)\) and \(q=\mathcal{S}_{\theta_{a}^{*}}(p)\), one only needs to compute \(\mathcal{S}_{\theta_{a}^{*}}(p^{k})\) and \(\mathcal{S}_{\theta_{s}^{*}}(2u^{k+1}-u^{k})\), and implement some simple algebraic operations. Moreover, given a different target \(y_{d}\), the primal-dual method with DeepONets (11) can be directly applied to the resulting optimal control problem without solving any PDE. 
Hence, the primal-dual method with DeepONets (11) is easy and cheap to implement. Finally, the primal-dual method with DeepONets (11) for solving (2)-(4) can be easily extended to various other optimal control problems modeled by (1); see Example 3 in Section 5.2 for more discussion.

### Numerical results

In this section, we discuss the implementation of the primal-dual method with DeepONets (11), and validate its effectiveness via some pedagogical numerical examples involving elliptic and parabolic control constrained optimal control problems.

**Example 3.** We consider the following elliptic control constrained optimal control problem:

\[\min_{u\in L^{2}(\Omega),y\in L^{2}(\Omega)}\ J(y,u)=\frac{1}{2}\|y-y_{d}\|_{L^{2}(\Omega)}^{2}+\frac{\alpha}{2}\|u\|_{L^{2}(\Omega)}^{2}+\theta(u), \tag{5.3}\]

where the state \(y\) and the control \(u\) satisfy the following state equation:

\[-\nu\Delta y+y=u+f\ \text{in}\ \Omega,\quad y=0\ \text{on}\ \Gamma. \tag{5.4}\]

Above, \(\Omega\subset\mathbb{R}^{d}\,(d\geq 1)\) is a convex polyhedral domain with boundary \(\Gamma:=\partial\Omega\), \(y_{d}\in L^{2}(\Omega)\) is a given target, and \(f\in H^{-1}(\Omega)\) is a given source term. The constant \(\nu>0\) is the diffusion coefficient and \(\alpha>0\) is a regularization parameter. We denote by \(\theta(u):=I_{U_{ad}}(u)\) the indicator function of the admissible set \(U_{ad}=\{u\in L^{\infty}(\Omega)\mid a\leq u(x)\leq b,\ \text{a.e. in}\ \Omega\}\subset L^{2}(\Omega)\), where \(a,b\in L^{2}(\Omega)\). In our numerical experiments, we set \(\Omega=(0,1)\), \(\nu=1\), \(\alpha=10^{-3}\), \(a=-0.5\) and \(b=0.5\). We further let

\[y=k_{s}\sin(\pi x),\quad q=\alpha k_{a}\sin(2\pi x),\quad u=\max\{a,\min\{b,-\frac{q}{\alpha}\}\},\]
\[f=-u-\Delta y+y,\quad y_{d}=y+\Delta q-q,\]

where \(k_{s}\) and \(k_{a}\) are constants. Then, it is easy to show that \((u,y)^{\top}\) is the solution of problem (5.3)-(5.4). By choosing different \(k_{s}\) and \(k_{a}\), we can obtain different \(y_{d}\) and thus specify a series of elliptic control constrained optimal control problems.

To obtain a surrogate model for (5.4), we first construct a neural operator \(\mathcal{N}_{\theta}\) by a DeepONet to approximate the solution operator \(\bar{S}\) of the following elliptic equation

\[-\nu\Delta y+y=u\ \text{in}\ \Omega,\quad y=0\ \text{on}\ \Gamma. \tag{5.5}\]

Then, it is easy to see that \(y=\mathcal{S}_{\theta^{*}_{s}}(u):=\mathcal{N}_{\theta^{*}}(u+f)\) is a surrogate model for (5.4). Moreover, since the state equation (5.4) is self-adjoint, we can use \(q=\mathcal{S}_{\theta^{*}_{a}}(p):=\mathcal{N}_{\theta^{*}}(p)\) as a surrogate model for the corresponding adjoint system of (5.3)-(5.4).

We employ an unstacked DeepONet to construct \(y=\mathcal{N}_{\theta^{*}}(u)\). Both the branch net and the trunk net are fully-connected neural networks consisting of 2 hidden layers with 20 neurons per hidden layer and equipped with hyperbolic tangent activation functions. We adapt the MATLAB codes used in [32] to generate a set of training data \(\{u_{i};z_{j};\bar{S}(u_{i})(z_{j})\}_{1\leq i\leq N_{1},1\leq j\leq N_{2}}\). For every \(u_{i}\), \(\{u_{i}(x_{j})\}_{j=1}^{m}\) are the inputs of the branch network. We take \(N_{1}=1000\) and \(N_{2}=m=65\), and \(\{x_{j}\}_{j=1}^{m}\) and \(\{z_{j}\}_{j=1}^{N_{2}}\) are equispaced grids in \([0,1]\).
We sample zero-boundary functions \(\{u_{i}\}_{i=1}^{N_{1}}\in L^{2}(0,1)\) from a Gaussian random field with a Riesz kernel, i.e.,

\[u_{i}\sim\mathcal{GR}(0,C),\ \text{with}\ C=49^{2}(-\Delta+49I)^{-2.5},\]

where \(\Delta\) and \(I\) represent the Laplacian and the identity operator, respectively. We then compute the solutions \(\bar{S}(u_{i})\) exactly in a Fourier space (see [32] for the details), and finally evaluate the values of \(\bar{S}(u_{i})(z_{j})\) for every \((u_{i},z_{j})\). Moreover, we modify the output \(\mathcal{N}_{\theta}(u_{i})(z_{j})\) to \(\mathcal{N}_{\theta}(u_{i})(z_{j})\,z_{j}(z_{j}-1)\) so that the homogeneous Dirichlet boundary condition \(y(0)=y(1)=0\) is satisfied automatically. For training the neural networks, we implement 20000 iterations of Adam [28] with learning rate \(\eta=10^{-3}\). The training of the DeepONet is performed in Python utilizing the PyTorch framework. The training process is initialized using the default initializer of PyTorch.

After the training process, we thus obtain the neural operator \(\mathcal{N}_{\theta^{*}}\) and hence the surrogate models \(y=\mathcal{N}_{\theta^{*}}(u+f)\) and \(q=\mathcal{N}_{\theta^{*}}(p)\). We then obtain the following primal-dual method with DeepONet for solving (5.3)-(5.4):

\[u^{k+1}=P_{U_{ad}}\left(-\frac{\mathcal{N}_{\theta^{*}}(p^{k})-\frac{1}{r}u^{k}}{\alpha+\frac{1}{r}}\right),\quad p^{k+1}=\left(\mathcal{N}_{\theta^{*}}(2u^{k+1}-u^{k}+f)+\frac{1}{s}p^{k}-y_{d}\right)/\left(1+\frac{1}{s}\right). \tag{5.6}\]

We apply (5.6) to (5.3)-(5.4) with different choices of \(k_{s}\) and \(k_{a}\). We set \(r=2\times 10^{3}\) and \(s=4\times 10^{-1}\) in (5.6), and terminate the iteration if (3.3) holds with \(tol=10^{-5}\). The numerical results are reported in Table 1 and Figure 2. First, it can be observed from Table 1 that (5.6) converges fast and the iteration numbers are almost not affected by \(k_{s}\) and \(k_{a}\). Moreover, the relative errors of \(u\) and \(y\) are very small for all cases under investigation, which, together with the results in Figure 2, imply that the computed controls and the exact ones are in excellent agreement and they are visually indistinguishable. From these results, we may conclude that (5.6) is efficient and robust enough to pursue highly accurate solutions for control constrained elliptic optimal control problems.

**Example 4**.: We consider the parabolic control constrained optimal control problem (3.1). In particular, we set \(\Omega=(0,1)\), \(T=1\), \(\alpha=10^{-3}\), \(a=-100\) and \(b=100\). Let

\[y=k_{s}(e^{t}-1)\sin(\pi x),\ q=\alpha k_{a}(T-t)\sin(2\pi x),\ u=\max\{a,\min\{b,-\frac{q}{\alpha}\}\},\]
\[f=-u+\frac{\partial y}{\partial t}-\Delta y,\ y_{d}=y+\frac{\partial q}{\partial t}+\Delta q,\]

where \(k_{s}\) and \(k_{a}\) are constants. Then, it is easy to show that \((u,y)^{\top}\) is the solution of problem (3.1). A series of parabolic control constrained optimal control problems can be specified by choosing different \(k_{s}\) and \(k_{a}\). Implementing the primal-dual method with DeepONets (5.2) to (3.1) requires two surrogate models \(y=\mathcal{S}_{\theta_{s}^{*}}(u)\) and \(q=\mathcal{S}_{\theta_{a}^{*}}(p)\), respectively, for the state equation (3.2) and the corresponding adjoint equation:

\[-\frac{\partial q}{\partial t}-\Delta q=p\ \text{in}\ \Omega\times(0,T),\ q=0\ \text{on}\ \Gamma\times(0,T),\ q(T)=0. \tag{5.7}\]
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \(k_{s}=-0.2\) & \(k_{s}=0.2\) & \(k_{s}=0.4\) & \(k_{s}=0.6\) & \(k_{s}=0.8\) & \(k_{s}=1\) \\ & \(k_{a}=-1\) & \(k_{a}=1\) & \(k_{a}=2\) & \(k_{a}=3\) & \(k_{a}=4\) & \(k_{a}=5\) \\ \hline Iter & 30 & 29 & 26 & 25 & 25 & 26 \\ \hline \(Err(u)\) & \(1.41\times 10^{-2}\) & \(6.68\times 10^{-3}\) & \(9.30\times 10^{-3}\) & \(1.28\times 10^{-2}\) & \(1.69\times 10^{-2}\) & \(7.64\times 10^{-3}\) \\ \hline \(Err(y)\) & \(1.46\times 10^{-3}\) & \(1.76\times 10^{-3}\) & \(1.84\times 10^{-3}\) & \(1.86\times 10^{-3}\) & \(1.85\times 10^{-3}\) & \(1.99\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 1: Numerical results for Example 3 (\(Err(u)=\frac{\|u^{k}-u\|_{L^{2}(\Omega)}}{\|u\|_{L^{2}(\Omega)}}\), \(Err(y)=\frac{\|y^{k}-y\|_{L^{2}(\Omega)}}{\|y\|_{L^{2}(\Omega)}}\)).

Figure 2: Numerical and exact controls for Example 3.

To this end, we first discretize the state equation (3.2) and the adjoint equation (5.7) in time by the backward Euler method with the step size \(\tau=T/N\), where \(N\) is a positive integer. The resulting discretized state equation reads: \(y_{0}=\phi\); for \(n=1,\ldots,N\), with \(y_{n-1}\) being known, we obtain \(y_{n}\) from the solution of the following linear elliptic problem:

\[-\tau\Delta y_{n}+y_{n}=\tau(f_{n}+u_{n})+y_{n-1}\text{ in }\Omega,\quad y_{n}=0\text{ on }\Gamma, \tag{16}\]

and the resulting discretized adjoint equation reads: \(q_{N}=0\); for \(n=N-1,\ldots,0\), with \(q_{n+1}\) being known, we obtain \(q_{n}\) from the solution of the following linear elliptic problem:

\[-\tau\Delta q_{n}+q_{n}=\tau p_{n}+q_{n+1}\text{ in }\Omega,\quad q_{n}=0\text{ on }\Gamma, \tag{17}\]

where we denote by \(y_{n}\), \(f_{n}\), \(u_{n}\), \(q_{n}\) and \(p_{n}\) the approximate values of \(y(n\tau)\), \(f(n\tau)\), \(u(n\tau)\), \(q(n\tau)\) and \(p(n\tau)\), respectively. It is easy to see that the elliptic equations (16) and (17) have the same form as that of (5.5). Hence, we can follow the same routine presented in Example 3 to construct two DeepONet surrogates for (16) and (17).

For the implementation of (5.2), we set \(r=8\times 10^{2}\) and \(s=4\times 10^{-1}\) and terminate the iteration if (3.3) holds with \(tol=10^{-5}\). The numerical results with respect to different \(k_{s}\) and \(k_{a}\) are reported in Table 2 and Figure 3. Table 2 shows that the primal-dual method with DeepONets (5.2) converges fast, with almost the same number of iterations for different values of \(k_{s}\) and \(k_{a}\). This suggests that the method is highly efficient and robust to the choices of \(k_{s}\) and \(k_{a}\). Additionally, the relative errors of \(u\) and \(y\) are very small across all test cases, which, in conjunction with the results presented in Figure 3, indicates that the exact and computed controls are in excellent agreement and cannot be distinguished visually. Overall, these results demonstrate that the primal-dual method with DeepONets (5.2) is capable of producing highly accurate solutions.

Figure 3: Numerical and exact controls at \(t=0.25\) (left), \(0.5\) (middle), and \(0.75\) (right) for Example 4.
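To illustrate the time-marching structure of (16) in code, here is a minimal NumPy sketch on a uniform 1D grid with a standard three-point Laplacian; this is our illustration under these discretization assumptions (the discussion above replaces these elliptic solves by DeepONet surrogates), and the adjoint sweep (17) is analogous but marches backwards in time.

```python
import numpy as np

def backward_euler_state(u, f, phi, N, T=1.0):
    """Solve (16): for n = 1..N, (-tau*Delta + I) y_n = tau*(f_n + u_n) + y_{n-1}.

    u, f : arrays of shape (N+1, J+1), time index n, space index j;
    phi  : initial datum y_0 on the grid; homogeneous Dirichlet BCs.
    """
    J = phi.size - 1
    h, tau = 1.0 / J, T / N
    # Tridiagonal matrix of -tau*Delta + I on the interior nodes.
    main = (1.0 + 2.0 * tau / h**2) * np.ones(J - 1)
    off = (-tau / h**2) * np.ones(J - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    y = np.zeros((N + 1, J + 1))
    y[0] = phi
    for n in range(1, N + 1):
        rhs = tau * (f[n, 1:-1] + u[n, 1:-1]) + y[n - 1, 1:-1]
        y[n, 1:-1] = np.linalg.solve(A, rhs)  # y_n = 0 at the boundary nodes
    return y
```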
2305.10824
Integrating Item Relevance in Training Loss for Sequential Recommender Systems
Sequential Recommender Systems (SRSs) are a popular type of recommender system that learns from a user's history to predict the next item they are likely to interact with. However, user interactions can be affected by noise stemming from account sharing, inconsistent preferences, or accidental clicks. To address this issue, we (i) propose a new evaluation protocol that takes multiple future items into account and (ii) introduce a novel relevance-aware loss function to train a SRS with multiple future items to make it more robust to noise. Our relevance-aware models obtain an improvement of ~1.2% of NDCG@10 and 0.88% in the traditional evaluation protocol, while in the new evaluation protocol, the improvement is ~1.63% of NDCG@10 and ~1.5% of HR w.r.t the best performing models.
Andrea Bacciu, Federico Siciliano, Nicola Tonellotto, Fabrizio Silvestri
2023-05-18T09:06:00Z
http://arxiv.org/abs/2305.10824v3
# Integrating Item Relevance in Training Loss for Sequential Recommender Systems

###### Abstract.

Sequential Recommender Systems (SRSs) are a popular type of recommender system that learns from a user's history to predict the next item they are likely to interact with. However, user interactions can be affected by noise stemming from account sharing, inconsistent preferences, or accidental clicks. To address this issue, we (i) propose a new evaluation protocol that takes multiple future items into account and (ii) introduce a novel relevance-aware loss function to train a SRS with multiple future items to make it more robust to noise. Our relevance-aware models obtain an improvement of 1.2% of NDCG@10 and 0.88% in the traditional evaluation protocol, while in the new evaluation protocol, the improvement is 1.63% of NDCG@10 and 1.5% of HR w.r.t the best performing models.

Recommender systems, Sequential recommendation, item relevance
Noise in user interactions, such as accidentally clicking on items or performing multiple actions quickly, can greatly affect the evaluation results, as shown by Gupta and Gupta (2018), Oh et al. (2018). The evaluation of SRSs has been thoroughly analyzed in recent research (Gupta et al., 2018; Gupta et al., 2019; Gupta et al., 2019). Our paper contributes to this line of research by proposing a novel approach: we depart from the single-relevant item approach typically taken in the literature and adopt a novel evaluation method and training approach that considers multiple future items and accounts for noise in the sequences. The proposed method provides, as shown in the experiments, a more accurate and robust solution for evaluating and training SRSs.

We conducted experiments on four datasets typically used in this research domain, using SASRec (Gupta et al., 2019), a widely cited SRS, as the reference model for our experiments. The research questions we address in our experiments are the following:

* **RQ1:** Is there an alternative to the single relevant item evaluation protocol that is more suitable to determine the best-performing ranking model among several proposed options?
* **RQ2:** Can the item relevance be successfully incorporated into the training mechanism of an SRS to boost its performance?
* **RQ3:** What is the impact of the considered number of future items on the evaluation metrics as well as on the training performance of the models considered?

Our paper makes two key contributions to the field of SRSs. Firstly, we introduce a new evaluation protocol that considers multiple future items as potentially relevant instead of the typical single-relevant item hypothesis used in this research area. Secondly, our experiments show that when trained in the multiple-relevant item regime, SASRec outperforms the state-of-the-art models, as shown by the significant improvements in NDCG@10 and Recall@10 scores.

## 2. Related Work

### Sequential Recommender Systems

Sequential Recommender Systems (SRSs) are a class of recommender systems that personalizes recommendations to users based on their historical interactions with items in a sequence (Srivastava et al., 2014), capturing its temporal dynamics.
SRSs have received considerable attention in the research community in recent years (Gupta et al., 2019) and have been applied in various domains (Gupta et al., 2019), including movies (Gupta et al., 2019; Gupta et al., 2019; Gupta et al., 2019), music (Gupta et al., 2019; Gupta et al., 2019), and e-commerce (Gupta et al., 2019; Gupta et al., 2019). Various techniques have been developed to model the temporal dependencies in the sequences, including Markov Chain models, Recurrent Neural Networks (RNNs), and Attention mechanisms. Markov Chain models are probabilistic models that assume the future state of a sequence depends only on the current state; for this reason, they struggle to capture complex dependencies in long-term sequences (Gupta et al., 2019; Gupta et al., 2019). RNNs are a type of neural network architecture that can capture the long-term dependencies in sequential data. They have shown great potential in modeling sequential data, and they have been used to develop various SRSs, such as session-based recommenders (Gupta et al., 2019; Gupta et al., 2019; Gupta et al., 2019; Gupta et al., 2019; Gupta et al., 2019), context-aware recommenders (Gupta et al., 2019; Gupta et al., 2019; Gupta et al., 2019), and sequential graph neural networks (Gupta et al., 2019; Gupta et al., 2019). Attention mechanisms have recently gained attention in SRSs due to their ability to dynamically weight the importance of different parts of the sequence (Gupta et al., 2019). By doing so, attention mechanisms can better capture the important features in the sequence and improve the prediction accuracy (Srivastava et al., 2014; Gupta et al., 2019).

### Evaluating Sequential Recommender Systems

Evaluating Sequential Recommender Systems (SRSs) has been a topic of great interest in recent years. Both (Gupta et al., 2019) and (Gupta et al., 2019) examined common data splitting methods for SRSs and discussed why commonly used evaluation methods are ill-defined, suggesting appropriate offline evaluation for SRSs. In particular, they showed that existing evaluation protocols do not consider the temporal dynamics of user behaviour, which can affect the accuracy of the recommendations. In (Han et al., 2017) it is shown that the current evaluation protocols for SRSs can lead to data leakage, where the model learns information from the test data that is not available during training. They address the problem by proposing an evaluation methodology that takes into account the global timeline of data samples in the evaluation of SRSs. A metric called Rank List Sensitivity (RLS) is introduced in (Krishnan et al., 2017) to evaluate the discrepancy between two rankings, so as to evaluate models' sensitivity with respect to training data. Finally, (Bahdan et al., 2017) presented an evaluation methodology specifically designed to evaluate the precision of algorithms for the Search Shortcut Problem. This metric considers item relevance, which allows an effective evaluation.

## 3. Methodology

### Current Evaluation Protocol

In line with previous research employing SRSs (Han et al., 2017; Li et al., 2018; Li et al., 2019), the current evaluation method involves shuffling one positive item (the next item in the sequence) with 100 random negative items not part of the input sequence. These items are then ranked based on their relevance scores determined by the model.
The resulting rank is used to calculate evaluation metrics like Normalized Discounted Cumulative Gain (NDCG) and Hit Rate (HR), which can be measured with various cut-offs, typically 10.

### Problems with the current protocol

The current evaluation protocol assumes only one item is relevant to a user, which may not be true in real-world scenarios where multiple interactions could be relevant. Users' history can be noisy, as shown in (Shen et al., 2018). Noise can come from account sharing, inconsistent preferences, or accidental clicks. For instance, in e-commerce, many clicks don't lead to purchases, and some purchases receive negative reviews. Evaluating a model based on one positive item, which might be an error, negatively impacts its performance, and this issue is not addressed in the current evaluation protocol.

### Multi Future Items: a new evaluation protocol

To address the aforementioned problem, we propose a new evaluation protocol for SRSs called Multi Future Items (MFI). In MFI, we make Assumption 1: a good ranking should contain not only the single next item in the sequence, but the whole sequence of future items in the correct order. Hence, MFI rewards the models that produce a better ranking considering multiple future items.

Assumption 1.: _Given a user \(u\) and its ordered interaction sequence \([I_{1},I_{2},...,I_{i}]\), the ideal ranking of length \(K\) is the sequence of future items \([I_{i+1},I_{i+2},...,I_{i+K}]\) arranged in the user's temporal interaction order._

To evaluate the performance of SRSs under Assumption 1, we use traditional evaluation metrics for sequential recommendation models such as NDCG and HR. To have multiple future items per evaluation, we therefore split the user's history differently. Given a sequence \([I_{1},I_{2},...,I_{L}]\), in the traditional evaluation protocol only item \(I_{L}\) is reserved for testing. In our proposed evaluation protocol, the sequence \([I_{1},I_{2},...,I_{L-K}]\) is allocated for training the model, while the sequence \([I_{L-K+1},I_{L-K+2},...,I_{L}]\) is used for testing purposes.

### Item Relevance

The new evaluation protocol requires higher ranking capabilities than the traditional evaluation protocol. In the original evaluation protocol, only the ability to rank a single item, treating all other items as not relevant to the user, is evaluated. Conversely, in the new evaluation protocol, it is important to define item relevance, so as to give importance to multiple future items and scale their importance according to their position in the sequence.

Definition 1.: _Given a ranking \([I_{1},I_{2},...,I_{K}]\) of length \(K\), we define the item relevance function \(r\colon\ \mathbb{N}\to[0,1]\)._

In order to compare the relevance of items on sequences of different lengths \(K\), we further require that the item relevance sums to one.

Definition 2.: _Given a ranking \([I_{1},I_{2},...,I_{K}]\) of length \(K\), \(r\) must satisfy \(\sum_{i=1}^{K}r(i)=1\)._

Item relevance should account for the fact that some interactions will occur further in the future and therefore assign lower importance to them: we establish that the relevance of an item at time \(t\) cannot be lower than the relevance of the item at time \(t+1\).
Definition 3.: _Given a ranking \([I_{1},I_{2},...,I_{K}]\) of length \(K\), \(r\) must satisfy \(r(i+1)\leq r(i)\quad\forall i\in\{1,2,...,K-1\}\)._

Our approach to item relevance is inspired by (Bacciu et al., 2017), who proposed a similarity function, which takes into account the item relevance, to evaluate query recommendation using collaborative filtering. They suggested four different functions for item relevance: \(r(i)=1\), \(r(i)=K-i\), \(r(i)=(K-i)^{2}\), and \(r(i)=e^{K-i}\) with \(i\in\{1,2,...,K\}\). We refer to these functions respectively as _Fixed_, _Linear_, _Power_, and _Exponential_. The first assigns equal relevance to all items, while the others assign higher relevance to the next items in the sequence at the expense of the more distant ones. We show a visualization of these functions in Figure 1. The functions can be easily normalized to comply with Definition 2. It should be noted that our evaluation protocol generalizes the traditional one, because we can revert to it by setting a maximum importance for the next item and zero importance for all other future items.

### Relevance-based Loss

Current neural SRSs use Binary Cross-Entropy as the loss function; we propose a modification of it called Relevance-based loss. The Relevance-based loss uses multiple positive items, similarly to what is proposed by (Silk and Schuster, 2017), except that it assigns weights to them following the functions defined in Section 3.4. (Silk and Schuster, 2017), though, propose a formulation that assigns equal relevance to all future items (equivalent to our Fixed formulation). However, by assigning equal importance to all items, the model ignores the natural order of interactions, making this strategy unsuitable for some tasks. For instance, in the case of movie recommendations, it is not realistic to suggest _Back To The Future 3_ before the user has watched the first and the second.

Figure 1. A visualization of how the various loss scaling strategies weigh the item relevance.

To address this limitation, we propose a more general formula that accounts for item relevance through time, which we call Relevance-based loss:

\[\ell(\vec{x}\mid pos,neg,r)=-\sum_{i=1}^{pos}\log\left(\left(\vec{x}_{pos}\right)_{i}\right)r(pos-i+1)-\sum_{i=1}^{neg}\log\left(1-\left(\vec{x}_{neg}\right)_{i}\right) \tag{1}\]

where \(pos\) and \(neg\) represent respectively the number of positive and negative items considered. \(\vec{x}\) is the score given by the model to each item, while \(\vec{x}_{pos}\) and \(\vec{x}_{neg}\) represent respectively the subsets of \(\vec{x}\) containing only the positive and negative items, and \(r\) is the item relevance function defined in Section 3.4. In Equation 1, we weigh the loss of each positive item by its relevance score. By doing so, the model is encouraged to focus more on the relevant items while learning.

## 4. Experimental Setup

We assess our techniques using four datasets derived from real-world use cases: MovieLens [10] and Foursquare [34]. MovieLens is a popular benchmark dataset for Recommendation Systems containing movie ratings. We utilize both MovieLens-1M and MovieLens-100K, which comprise 1 million and 100,000 user ratings, respectively. The Foursquare NYC and Foursquare TKY datasets, presented in [34], are collections of check-in records from New York and Tokyo. The NYC dataset contains 227,428 records, while the TKY dataset has 573,703 records.
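Before turning to the model and the results, the following minimal PyTorch sketch illustrates the relevance functions of Section 3.4 and the relevance-based loss of Equation (1); the tensor shapes and the convention that scores lie in \((0,1)\) (e.g. after a sigmoid) are our illustrative assumptions, not the authors' implementation.

```python
import torch

def relevance(K, kind="linear"):
    """Normalized item relevance r(1..K) from Section 3.4 (Definition 2: sums to one)."""
    i = torch.arange(1, K + 1, dtype=torch.float32)
    if kind == "fixed":
        r = torch.ones(K)
    elif kind == "linear":
        r = K - i
    elif kind == "power":
        r = (K - i) ** 2
    elif kind == "exponential":
        r = torch.exp(K - i)
    else:
        raise ValueError(kind)
    return r / r.sum()

def relevance_loss(scores_pos, scores_neg, r):
    """Relevance-based loss, Equation (1).

    scores_pos: (pos,) scores of the positive (future) items, i = 1..pos;
    scores_neg: (neg,) scores of the sampled negative items;
    r:          (pos,) relevance values r(1..pos).
    """
    # Equation (1) weights the i-th positive term by r(pos - i + 1),
    # i.e. the relevance vector is applied in reverse index order.
    w = torch.flip(r, dims=[0])
    loss_pos = -(torch.log(scores_pos) * w).sum()
    loss_neg = -torch.log(1.0 - scores_neg).sum()
    return loss_pos + loss_neg
```

As written in Equation (1), the \(i\)-th positive term is weighted by \(r(pos-i+1)\), so the ordering convention chosen for \(\vec{x}_{pos}\) determines which future item receives the largest weight.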
We select the Self-Attentive Sequential Recommendation (SASRec) model [16] for our experiments, as it has consistently demonstrated exceptional performance across multiple benchmarks and garnered significant recognition in the literature. To ensure a fair comparison, we keep the original hyper-parameters of SASRec's paper. We perform our experiments on a workstation equipped with an Intel Core i9-10940X (14-core CPU running at 3.3GHz), 256GB of RAM, and a single Nvidia RTX A6000 with 48GB of VRAM.

Footnote 1: [https://github.com/pmixer/SASRec.pytorch](https://github.com/pmixer/SASRec.pytorch)

## 5. Results

In this section, we present the results answering the research questions posed in Section 1. During our experiments, we vary (i) the number of evaluation positives; (ii) the number of training positives; (iii) the item relevance function. From here on, we will denote by 'our' or 'new' models the models trained with our proposed training loss.

_RQ1: Is there an alternative to the single relevant item evaluation protocol that is more suitable to determine the best-performing ranking model among several proposed options?_

Table 1 compares various models in the traditional evaluation protocol, including two baselines (Baseline and Fixed) and the models trained with our proposed item relevance-based loss (Linear, Power, Exponential). As expected, in the traditional evaluation protocol there is no clear winner. Instead, in Table 2, the performance discrepancies are more pronounced, and it is therefore possible to identify a better model. Hence, the new evaluation protocol is more robust to the noise introduced in Section 3.2 and can better identify performance differences between models, rewarding the model with the best ranking performance as evaluated by means of NDCG and HR. We can also assert that models that discount the future excessively, i.e. only (or almost only) take into account the next item (Baseline and Exponential), generally perform worse.

_RQ2: Can the item relevance be successfully incorporated into the training mechanism of an SRS to boost its performance?_

As can be seen from Table 1, in the traditional evaluation protocol, models that integrate the item relevance are on par with or better than the baselines for at least one of the metrics. In particular, Linear item relevance outperforms the other approaches in 6 out of 10 cases while getting the second-best result in 2 cases. When considering more positive items for evaluation (see Table 2), it is possible to notice that the Linear model outperforms the baselines and the other models, except in a benchmark where Power item relevance yields a better NDCG score and the baselines a better HR. Fixed, on the other hand, returns performance comparable to Baseline, suggesting that incorporating more positive items during training, even without weighting them in the loss, can lead to a slight increase in performance.

Figure 2 shows the performance using NDCG@10 for the MovieLens-1M dataset for both the traditional and the new evaluation protocol. From Figure 2, the improvement provided by the item-relevance loss is more evident. We can in fact see that the Baseline (the original SASRec), which does not consider multiple future items during training, has a consistently slower convergence rate in both evaluation protocols than the item relevance-aware models. Instead, Linear item relevance obtains the fastest convergence.
This shows that the item relevance loss allows the training of better models that yield higher performances in both evaluation protocols. To summarize, our relevance-aware models obtain an improvement of 1.2% of NDCG@10 and 0.88% in the traditional evaluation protocol, while in the new evaluation protocol, the improvement is 1.63% of NDCG@10 and 1.5% of HR.

_RQ3: What is the impact of the considered number of future items on the evaluation metrics as well as on the training performance of the models considered?_

Figure 3 reports the results using 1, 5, and 10 evaluation positive items. We can notice that the relative performance of the relevance-aware models remains consistent when varying the number of evaluation positive items, despite variations in their absolute performance. In Figure 4 we report the results of all models using 2, 3, 4, 5, and 10 training positive items. In the traditional evaluation protocol (Figures 4(a), 4(c)), increasing the number of training positive items seems to impact the performances negatively: it leads to a deterioration of performance. This is particularly evident for Fixed Item Relevance, where an increase in training positive items always leads to a decrease in performance. For Linear, this behaviour is more attenuated, although using 10 training positive items still generates the worst performance. In the traditional evaluation protocol, the behaviour is generally expected, as we are only evaluating the model with one positive item; the other positive items used for training can only confound the model.

Figure 2: NDCG@10 at varying training epochs for all 5 models on the MovieLens-1M dataset.

Figure 3: NDCG@10 at varying number of positive items used for evaluation on MovieLens-1M.

Instead, in the new evaluation protocol (Figures 4(b), 4(d)), where we are evaluating models with 10 positive items, we see something different. Fixed Item Relevance increases performance up to 5 training positives, but at 10 it performs worst. This indicates that this type of relevance function is not particularly suitable, even when the training is similar to the evaluation. Conversely, Linear shows interesting results: performance increases with the number of training positives. This suggests that our models still have room for further improvement. Although not shown, the results for Power Item Relevance are similar to those for Linear, while Exponential shows a similar trend to Fixed in general.

## 6. Conclusions

In this work, we challenged the typical assumption made in Sequential Recommendation Systems (SRSs), which only consider the immediate next item in a sequence as the one to predict. We have relaxed the evaluation protocol so as not to penalize the model in case of noisy sequences, and we have designed an item relevance loss in order to optimize the model to predict multiple future items. We demonstrated the importance of having more positive examples both in the training and evaluation of SRSs.

Figure 4: NDCG@10 at varying training epochs for the new item-relevance models for different numbers of training positives on MovieLens-1M.

Our experiments show that when trained in the multiple relevant item regime, our systems outperform the state-of-the-art models, with an improvement of 1.2% of NDCG@10 and 0.88% in the original evaluation protocol. In the new evaluation protocol, the improvement is 1.63% of NDCG@10 and 1.5% of HR. Among all the variants of grading multiple relevant items that we experimented with, it turns out that the Linear one outperforms the others and demonstrates its potential usefulness in practical applications.
## Acknowledgments This work was partially supported by projects FAIR (PE0000013) and SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU and by ERC Starting Grant No. 802554 (SPECGEO) and PRIN 2020 project n.2020TA3K9N "LEGO.AI".
2310.05721
Stability of $L^2-$invariants on stratified spaces
Let $\overline{M}$ be a compact smoothly stratified pseudo-manifold endowed with a wedge metric $g$. Let $\overline{M}_\Gamma$ be a Galois $\Gamma$-covering. Under additional assumptions on $\overline{M}$, satisfied for example by Witt pseudo-manifolds, we show that the $L^2$-Betti numbers and the Novikov-Shubin invariants are well defined. We then establish their invariance under a smoothly stratified, strongly stratum preserving homotopy equivalence, thus extending results of Dodziuk, Gromov and Shubin to these pseudo-manifolds.
Francesco Bei, Paolo Piazza, Boris Vertman
2023-10-09T13:46:37Z
http://arxiv.org/abs/2310.05721v3
# Stability of \(L^{2}\)-invariants on stratified spaces

###### Abstract.

Let \(\overline{M}\) be a compact smoothly stratified pseudo-manifold endowed with a wedge metric \(g\). Let \(\overline{M}_{\Gamma}\) be a Galois \(\Gamma\)-covering. Under additional assumptions on \(\overline{M}\), satisfied for example by Witt pseudo-manifolds, we show that the \(L^{2}\)-Betti numbers and the Novikov-Shubin invariants are well defined. We then establish their invariance under a smoothly stratified codimension-preserving homotopy equivalence, thus extending results of Dodziuk, Gromov and Shubin to these pseudo-manifolds.

Key words and phrases: Novikov-Shubin invariants, \(L^{2}\)-Betti numbers, stratified spaces

2010 Mathematics Subject Classification: Primary 58A12; Secondary 58G12; 58A14

###### Contents

* 1 Introduction and statement of the main results
* 2 Reformulating the problem using Mishchenko bundles
* 3 Existence of \(L^{2}\)-Betti numbers and Novikov-Shubin invariants
* 4 A Hilsum-Skandalis-type replacement of a pullback
* 5 Stability of \(L^{2}\)-Betti numbers and Novikov-Shubin invariants

## 1. Introduction and statement of the main results

### Historical overview

Let \((M,g)\) be a compact manifold with an infinite fundamental group. In his seminal paper [1] Atiyah introduced \(L^{2}\)-Betti numbers associated to the universal covering of \(M\), and conjectured their stability under homotopy equivalences. The conjecture was established shortly after the appearance of Atiyah's paper by Dodziuk in [1]. Later, Novikov and Shubin [21, 22] introduced new invariants associated to a Galois \(\Gamma\)-covering \(M_{\Gamma}\) of \(M\), with \(\Gamma\) a finitely generated discrete group; these invariants measure the density of the continuous spectrum of the differential form Laplacian on \(M_{\Gamma}\) and are nowadays referred to as the Novikov-Shubin invariants. Their stability under homotopy equivalence was proved by Gromov and Shubin in [11, 12]. _The goal of this article is to address the same stability problems but for singular spaces, namely compact smoothly stratified (Thom-Mather) pseudo-manifolds._

### Statement of the main results

Consider a compact smoothly stratified (Thom-Mather) pseudo-manifold \(\overline{M}\) with an iterated wedge metric \(g\) in its open interior \(M\subset\overline{M}\). Simply put, the singular neighborhoods of \(\overline{M}\) can locally be viewed as fibrations of cones, as in Figure 1, where their cross sections may be singular again. Let \(p:\overline{M}_{\Gamma}\to\overline{M}\) be a Galois \(\Gamma\)-covering with \(\Gamma\) being the group of deck transformations. It is again a stratified pseudo-manifold and the metric \(g\) lifts to an iterated wedge metric \(g_{\Gamma}\) in the interior \(M_{\Gamma}\) of \(\overline{M}_{\Gamma}\). The corresponding minimal and maximal Hilbert complexes \((\mathcal{D}_{\min}(M_{\Gamma}),d_{\Gamma})\) and \((\mathcal{D}_{\max}(M_{\Gamma}),d_{\Gamma})\) arise from the de Rham complex of compactly supported differential forms on \(M_{\Gamma}\) by choosing minimal and maximal closed extensions of the differential in \(L^{2}\), respectively.
Note that in general

\[(\mathcal{D}_{\min}(M_{\Gamma}),d_{\Gamma})\neq(\mathcal{D}_{\max}(M_{\Gamma}),d_{\Gamma}).\]

The \(L^{2}\)-Betti numbers are defined as von Neumann \(\Gamma\)-dimensions of the corresponding reduced \(L^{2}\)-cohomologies

\[\begin{split} b^{*}_{(2),\min}(\overline{M}_{\Gamma})&:=\dim_{\Gamma}\overline{H}^{*}_{(2)}(\mathcal{D}_{\min}(M_{\Gamma}),d_{\Gamma}),\\ b^{*}_{(2),\max}(\overline{M}_{\Gamma})&:=\dim_{\Gamma}\overline{H}^{*}_{(2)}(\mathcal{D}_{\max}(M_{\Gamma}),d_{\Gamma}).\end{split} \tag{1.1}\]

The Novikov-Shubin invariants \(\alpha_{*,\min}(\overline{M}_{\Gamma})\) and \(\alpha_{*,\max}(\overline{M}_{\Gamma})\) are in turn defined in terms of the Hodge Laplacians associated to the minimal and maximal Hilbert complexes \((\mathcal{D}_{\min}(M_{\Gamma}),d)\) and \((\mathcal{D}_{\max}(M_{\Gamma}),d)\), respectively. These invariants will be introduced explicitly in §3 below. We can now state our first main theorem.

**Theorem 1.1**.: _Let \(p:\overline{M}_{\Gamma}\to\overline{M}\) be a Galois \(\Gamma\)-covering of a compact smoothly stratified pseudo-manifold \(\overline{M}\) endowed with an iterated wedge metric. If Assumption 3.2 holds true, we have the following properties:_

1. _The_ \(L^{2}\)_-Betti numbers_ \(b^{*}_{(2),\min}(\overline{M}_{\Gamma})\) _and_ \(b^{*}_{(2),\max}(\overline{M}_{\Gamma})\) _are finite._
2. _The Novikov-Shubin invariants_ \(\alpha_{*,\min}(\overline{M}_{\Gamma})\) _and_ \(\alpha_{*,\max}(\overline{M}_{\Gamma})\) _are well-defined._

Assumption 3.2 is a technical assumption on the resolvent \((i+D_{abs/rel})^{-1}\) on \((M,g)\), with \(D_{abs/rel}\) being the absolute and relative self-adjoint extensions

\[D_{rel}:=d_{\min}+d^{*}_{\min},\quad\text{and}\quad D_{abs}:=d_{\max}+d^{*}_{\max}\,.\]

It is satisfied on stratified pseudo-manifolds with isolated singularities and on Witt spaces of arbitrary depth, as shown in Proposition 3.3 below.

Figure 1. A cone bundle over \(B\).

In order to state our second main result we pause for a moment and explain in detail the result of Gromov and Shubin [11]. Let \(X\) and \(Y\) be connected smooth compact manifolds without boundary and let \(f:X\to Y\) be a homotopy equivalence. We now consider two connected Galois \(\Gamma\)-coverings \(X_{\Gamma}\xrightarrow{p}X\) and \(Y_{\Gamma}\xrightarrow{q}Y\); once we fix base points \(x\in X\), \(\tilde{x}\in p^{-1}(x)\), \(f(x)\in Y\) and \(\tilde{y}\in q^{-1}(f(x))\), we have well defined surjective homomorphisms

\[\pi_{1}(X,x)\xrightarrow{j_{X}}\Gamma,\quad\pi_{1}(Y,f(x))\xrightarrow{j_{Y}}\Gamma.\]

We shall explain all this later in the paper. Gromov and Shubin require the diagram formed by \(f_{*}:\pi_{1}(X,x)\to\pi_{1}(Y,f(x))\) and the homomorphisms \(j_{X}\) and \(j_{Y}\) to commute, that is \(j_{X}=j_{Y}\circ f_{*}\). We shall weaken this assumption and require only that

\[f_{*}(\operatorname{Ker}(j_{X}))=\operatorname{Ker}(j_{Y})\,. \tag{1.2}\]

This is condition (4.1) appearing in the statement of our next result. In order to state it, we need to introduce one final ingredient: a smoothly stratified codimension-preserving homotopy equivalence between compact smoothly stratified pseudo-manifolds. This is a homotopy equivalence that maps singular strata onto singular strata and preserves codimension. We will be more precise in §5 below. Our second main result now reads as follows.
**Theorem 1.2**.: _Let \(\overline{M}\) and \(\overline{N}\) be two compact smoothly stratified pseudo-manifolds which satisfy Assumption 3.2; assume that there exist two Galois \(\Gamma\)-coverings \(\overline{M}_{\Gamma}\), \(\overline{N}_{\Gamma}\) and a smoothly stratified codimension-preserving homotopy equivalence \(f:\overline{M}\to\overline{N}\) satisfying condition (1.2) (see (4.1) for precise notation). Then_

\[\begin{split} b^{*}_{(2),\min/\max}(\overline{M}_{\Gamma})&=b^{*}_{(2),\min/\max}(\overline{N}_{\Gamma}),\\ \alpha_{*,\min/\max}(\overline{M}_{\Gamma})&=\alpha_{*,\min/\max}(\overline{N}_{\Gamma}).\end{split} \tag{1.3}\]

**Corollary 1.3**.: _Let \(\overline{M}\) and \(\overline{N}\) be two compact smoothly stratified pseudo-manifolds which satisfy Assumption 3.2; assume that there exists a smoothly stratified codimension-preserving homotopy equivalence \(f:\overline{M}\to\overline{N}\) which satisfies the condition (4.1). Then_

\[\begin{split} b^{*}_{(2),\min/\max}(\overline{M})&=b^{*}_{(2),\min/\max}(\overline{N}),\\ \alpha_{*,\min/\max}(\overline{M})&=\alpha_{*,\min/\max}(\overline{N})\end{split} \tag{1.4}\]

_where \(b^{*}_{(2),\min/\max}(\overline{M})\), \(b^{*}_{(2),\min/\max}(\overline{N})\), \(\alpha_{*,\min/\max}(\overline{M})\) and \(\alpha_{*,\min/\max}(\overline{N})\) denote the minimal/maximal \(L^{2}\)-Betti numbers and Novikov-Shubin invariants of \(\overline{M}\) and \(\overline{N}\) computed with respect to the corresponding universal coverings._

As anticipated in §1.1, this theorem extends the result of Dodziuk [13] for \(L^{2}\)-Betti numbers and of Gromov and Shubin [14] for Novikov-Shubin invariants to the setting of smoothly stratified Thom-Mather pseudo-manifolds.

### Background and notation

This paper builds strongly upon the classical notions of stratified spaces, their Galois coverings and the corresponding von Neumann theory. A careful introduction of these concepts is beyond the scope of the present note and is presented in full detail in other references. We list the main objects of interest for us, their notation and precise references where these objects are introduced.

1. **stratified space \(\overline{M}\), its open interior \(M\) and resolution \(\widetilde{M}\).** A compact smoothly stratified (Thom-Mather) pseudo-manifold \(\overline{M}\) with open interior \(M\subset\overline{M}\) is defined, for example, in [1, Definition 2.1]. The precise definition is rather involved, due to additional Thom-Mather conditions, see [16], which guarantee that such \(\overline{M}\) can be resolved into a compact manifold with corners and boundary fibration structure, see [1, Proposition 2.5] and [1, Definition 2]. Following [1] we denote this resolution by \(\widetilde{M}\). Further references are e.g. [1], [20], see also [1, 2].
2. **iterated incomplete wedge metric \(g\) and complete edge metric \(\rho_{M}^{-2}g\).** An (iterated incomplete) wedge metric \(g\) on the open interior \(M\) is defined in [1, Definition 5] (note that there such a metric is called 'iterated edge'). If \(\rho_{M}\) is a total boundary function on \(\widetilde{M}\), i.e. a smooth function that vanishes to first order at all boundary faces, then a complete edge metric is defined by \(\rho_{M}^{-2}g\) as in [1, (3.2)].
3.
**Galois covering \(\widetilde{M}_{\Gamma}\) and von Neumann theory.** A Galois covering \(\pi:\overline{M}_{\Gamma}\to\overline{M}\) with Galois group \(\Gamma\) is again a smoothly stratified (Thom-Mather) pseudo-manifold with the smooth open and dense stratum \(M_{\Gamma}\) being a Galois covering of \(M\). Any singular stratum \(Y_{\Gamma}\) of \(\overline{M}_{\Gamma}\) is a Galois covering of a singular stratum \(Y\) in \(\overline{M}\) of the same depth. The lift \(g_{\Gamma}\) of the wedge metric \(g\) defines a \(\Gamma\)-invariant wedge metric on \(M_{\Gamma}\). We refer to e.g. [1, §7.1] and [15, §2.4] for more details.
4. **edge and wedge tangent bundles \({}^{e}TM,{}^{e}TM_{\Gamma}\) and \({}^{w}TM,{}^{w}TM_{\Gamma}\).** The edge tangent bundle \({}^{e}TM\) is a vector bundle over \(\widetilde{M}\), defined in [1, §4.1] (note that there a different notation \({}^{ie}TM\) is used, with the additional superscript i standing for 'iterated'). An efficient definition of the wedge tangent bundle \({}^{w}TM\) is obtained by specifying the smooth sections (of its dual bundle) as
\[C^{\infty}(\widetilde{M},{}^{w}T^{*}M):=\rho_{M}\,C^{\infty}(\widetilde{M},{}^{e}T^{*}M).\]
One defines \({}^{e}TM_{\Gamma}\) and \({}^{w}TM_{\Gamma}\) in the same way. Consider the \(L^{2}\)-completions
\[L^{2}\Omega^{*}(M,g):=L^{2}(M,\Lambda^{*}({}^{w}T^{*}M),g),\]
\[L^{2}\Omega^{*}(M_{\Gamma},g_{\Gamma}):=L^{2}(M_{\Gamma},\Lambda^{*}({}^{w}T^{*}M_{\Gamma}),g_{\Gamma}).\]
We shall often simply abbreviate the spaces as \(L^{2}\Omega^{*}(M)\) and \(L^{2}\Omega^{*}(M_{\Gamma})\), respectively, without keeping track of the metrics.
5. **Minimal and maximal complexes \((\mathcal{D}_{\min}(M_{\Gamma}),d_{\Gamma})\) and \((\mathcal{D}_{\max}(M_{\Gamma}),d_{\Gamma})\).** The notion of Hilbert complexes has been carefully introduced in [1]. The minimal and maximal Hilbert complexes are defined e.g. in [1, Lemma 3.1]. They are defined by the minimal and maximal domains \(\mathcal{D}_{\min/\max}(M_{\Gamma}):=\mathcal{D}(d_{\Gamma;\min/\max})\) in \(L^{2}\Omega^{*}(M_{\Gamma},g_{\Gamma})\) of the de Rham differentials on \(M_{\Gamma}\).

_Acknowledgements._ The authors would like to acknowledge interesting discussions with Pierre Albin and Matthias Lesch. The third author thanks Sapienza University for hospitality. This work was supported by the "National Group for the Algebraic and Geometric Structures and their Applications" (GNSAGA-INDAM).

## 2. Reformulating the problem using Mishchenko bundles

Below, it will become necessary to use an equivalent description of the Hilbert complexes \((\mathcal{D}_{\min}(M_{\Gamma}),d_{\Gamma}),(\mathcal{D}_{\max}(M_{\Gamma}),d_{\Gamma})\), and the Hilbert space \(L^{2}\Omega^{*}(M_{\Gamma},g_{\Gamma})\), working only on the base \(\overline{M}\) and using the von Neumann algebra framework. We shall be brief here and refer to [11, §1 and §2] and [12] for details. Consider the complex group ring \(\mathbb{C}\Gamma\) of complex-valued maps \(f:\Gamma\to\mathbb{C}\) with compact support and the Hilbert space of square-summable sequences

\[\ell^{2}\Gamma:=\Big\{f:\Gamma\to\mathbb{C}\ \big|\ \sum_{\gamma\in\Gamma}\big|f(\gamma)\big|^{2}<\infty\Big\},\]

which can alternatively be viewed as the Hilbert space completion of \(\mathbb{C}\Gamma\).
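For orientation, we recall a standard illustration (our addition; it is textbook material and not taken from the works cited above): for \(\Gamma=\mathbb{Z}\), Fourier series provide a unitary isomorphism

\[\ell^{2}\mathbb{Z}\xrightarrow{\ \simeq\ }L^{2}(S^{1}),\qquad(a_{n})_{n\in\mathbb{Z}}\longmapsto\sum_{n\in\mathbb{Z}}a_{n}e^{in\theta},\]

under which \(\mathbb{C}\mathbb{Z}\) corresponds to the trigonometric polynomials and the generator of \(\mathbb{Z}\) acts by multiplication with \(e^{i\theta}\). Bounded \(\mathbb{Z}\)-equivariant operators on \(\ell^{2}\mathbb{Z}\) then correspond to multiplication operators by functions in \(L^{\infty}(S^{1})\), which previews the general definition recalled next.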
Recall that the von Neumann algebra \(\mathcal{N}\Gamma\) is defined as the space of bounded \(\Gamma\)-equivariant linear operators on the Hilbert space \(\ell^{2}\Gamma\). We recall the notion of \(\mathcal{N}\Gamma\)-Hilbert modules here; see [11, §1 and §2] and [1, §4] for more details.

**Definition 2.1**.: _An \(\mathcal{N}\Gamma\)-Hilbert module \(H\) is a Hilbert space together with a linear isometric \(\Gamma\)-action such that there exists a Hilbert space \(\mathcal{H}\) and an isometric linear \(\Gamma\)-embedding of \(H\) into the tensor product of Hilbert spaces \(\mathcal{H}\otimes\ell^{2}\Gamma\) equipped with the obvious \(\Gamma\)-action._

We now define the so-called Mishchenko bundles

\[\mathcal{E}:=\overline{M}_{\Gamma}\times_{\Gamma}\ell^{2}\Gamma\longrightarrow\overline{M},\qquad\mathcal{E}_{\mathrm{c}}:=\overline{M}_{\Gamma}\times_{\Gamma}\mathbb{C}\Gamma\longrightarrow\overline{M}. \tag{2.1}\]

Fibres of both bundles are modules, the first one for the natural \(\mathcal{N}\Gamma\)-action and the second one for the natural \(\mathbb{C}\Gamma\)-action. In fact, the bundle \(\mathcal{E}\) is a bundle of \(\mathcal{N}\Gamma\)-Hilbert modules in the sense of [12, Definition 2.10], and the same is true after twisting \(\mathcal{E}\) with the vector bundle \(\Lambda^{*}({}^{w}T^{*}M)\). The construction defines natural isomorphisms of vector spaces

\[\begin{split}\Phi_{\mathrm{c}}:&\ C_{\mathrm{c}}^{\infty}(M_{\Gamma},\Lambda^{*}({}^{w}T^{*}M_{\Gamma}))\to C_{\mathrm{c}}^{\infty}(M,\Lambda^{*}({}^{w}T^{*}M)\otimes\mathcal{E}_{\mathrm{c}}),\\ \Phi^{\prime}:&\ C_{(2)}^{\infty}(M_{\Gamma},\Lambda^{*}({}^{w}T^{*}M_{\Gamma}))\to C^{\infty}(M,\Lambda^{*}({}^{w}T^{*}M)\otimes\mathcal{E}),\end{split} \tag{2.2}\]

with \(C_{(2)}^{\infty}(M_{\Gamma},\Lambda^{*}({}^{w}T^{*}M_{\Gamma}))\) defined as

\[\Big\{s\in C^{\infty}(M_{\Gamma},\Lambda^{*}({}^{w}T^{*}M_{\Gamma})):\forall_{x\in M_{\Gamma}}\ \sum_{\gamma}|s(\gamma\cdot x)|^{2}<\infty\Big\},\]

and \(\Phi_{\mathrm{c}}\), \(\Phi^{\prime}\) written down explicitly in [5, §7.5, (1)]. The lower index \(\mathrm{c}\) indicates compact support. We shall simplify notation:

\[\Omega_{\mathrm{c}}^{*}(M,\mathcal{E}_{\mathrm{c}}):=C_{\mathrm{c}}^{\infty}(M,\Lambda^{*}({}^{w}T^{*}M)\otimes\mathcal{E}_{\mathrm{c}}),\qquad\Omega^{*}(M,\mathcal{E}):=C^{\infty}(M,\Lambda^{*}({}^{w}T^{*}M)\otimes\mathcal{E}),\]
\[\Omega_{\mathrm{c}}^{*}(M_{\Gamma}):=C_{\mathrm{c}}^{\infty}(M_{\Gamma},\Lambda^{*}({}^{w}T^{*}M_{\Gamma})).\]

We can define a metric \(g_{\mathcal{E}}\) on \(\Lambda^{*}({}^{w}T^{*}M)\otimes\mathcal{E}\), using the metric \(g\) together with the \(\ell^{2}\Gamma\) inner product along the fibers.
Noting in the first equality below that \(\ell^{2}\Gamma\) is the Hilbert space completion of \(\mathbb{C}\Gamma\), we have

\[\begin{split} L^{2}(M,\Lambda^{*}({}^{w}T^{*}M)\otimes\mathcal{E},g_{\mathcal{E}})&=\overline{\Omega_{\mathrm{c}}^{*}(M,\mathcal{E}_{\mathrm{c}})}^{\,g_{\mathcal{E}}},\\ L^{2}(M_{\Gamma},\Lambda^{*}({}^{w}T^{*}M_{\Gamma}),g_{\Gamma})&=\overline{\Omega_{\mathrm{c}}^{*}(M_{\Gamma})}^{\,g_{\Gamma}},\end{split} \tag{2.3}\]

where the closures are in terms of the corresponding \(L^{2}\) inner products defined by the metrics in the upper indices. We shall also use the following simplified notation

\[L^{2}\Omega^{*}(M,\mathcal{E}):=L^{2}(M,\Lambda^{*}({}^{w}T^{*}M)\otimes\mathcal{E},g_{\mathcal{E}}),\]

together with the one that we have already introduced

\[L^{2}\Omega^{*}(M_{\Gamma}):=L^{2}(M_{\Gamma},\Lambda^{*}({}^{w}T^{*}M_{\Gamma}),g_{\Gamma}).\]

In fact, [5, Lemma 7.10] proves that these two spaces are \(\mathcal{N}\Gamma\)-Hilbert modules in the sense of [5, Definition 2.1], and as explained in [5, §7.5, (2)], the isomorphism \(\Phi_{\mathrm{c}}\) extends to an isometry of \(\mathcal{N}\Gamma\)-Hilbert modules

\[\Phi:L^{2}\Omega^{*}(M_{\Gamma})\to L^{2}\Omega^{*}(M,\mathcal{E}). \tag{2.4}\]

We now study the Hilbert complexes \((\mathcal{D}_{\min}(M_{\Gamma}),d_{\Gamma})\) and \((\mathcal{D}_{\max}(M_{\Gamma}),d_{\Gamma})\) under the isometric identification \(\Phi\). There is a canonical connection on \(\mathcal{E}\) and its subbundle \(\mathcal{E}_{\mathrm{c}}\), over the interior \(M\), induced by the exterior differential on \(M_{\Gamma}\). We refer to [5, §3] for more details on connections in the von Neumann setting. Hence, there is a natural (twisted) de Rham differential \(d_{\mathcal{E}}\) on the twisted differential forms \(\Omega_{\mathrm{c}}^{*}(M,\mathcal{E}_{\mathrm{c}})\). In fact, \(d_{\mathcal{E}}\) and the de Rham differential \(d_{\Gamma}\) on \(M_{\Gamma}\) act on larger domains of smooth forms

\[\begin{split}\Omega_{\max}^{*}(M,\mathcal{E})&:=\Big\{\omega\in\Omega^{*}(M,\mathcal{E})\cap L^{2}\Omega^{*}(M,\mathcal{E})\mid d_{\mathcal{E}}\omega\in\Omega^{*}(M,\mathcal{E})\cap L^{2}\Omega^{*}(M,\mathcal{E})\Big\},\\ \Omega_{\max}^{*}(M_{\Gamma})&:=\Big\{\omega\in\Omega^{*}(M_{\Gamma})\cap L^{2}\Omega^{*}(M_{\Gamma})\mid d_{\Gamma}\omega\in\Omega^{*}(M_{\Gamma})\cap L^{2}\Omega^{*}(M_{\Gamma})\Big\}.\end{split} \tag{2.5}\]

Recalling the classical definitions of \((\mathcal{D}_{\min}(M_{\Gamma}),d_{\Gamma})\) and \((\mathcal{D}_{\max}(M_{\Gamma}),d_{\Gamma})\), we can define the corresponding minimal and maximal Hilbert complexes in \(L^{2}\Omega^{*}(M,\mathcal{E})\) in the same way.
Define the graph norms \[\begin{split}\|\omega\|_{\mathcal{E}}&:=\|\omega\|_{L^{2}\Omega^{*}(M,\mathcal{E})}+\|d_{\mathcal{E}}\omega\|_{L^{2}\Omega^{*}(M,\mathcal{E})},\qquad\omega\in\Omega^{*}_{\max}(M,\mathcal{E}),\\ \|\omega\|_{\Gamma}&:=\|\omega\|_{L^{2}\Omega^{*}(M_{\Gamma})}+\|d_{\Gamma}\omega\|_{L^{2}\Omega^{*}(M_{\Gamma})},\qquad\omega\in\Omega^{*}_{\max}(M_{\Gamma}).\end{split} \tag{2.6}\] **Definition 2.2**.: _Consider the graph norms \(\|\cdot\|_{\mathcal{E}}\) and \(\|\cdot\|_{\Gamma}\). Then the minimal and maximal Hilbert complexes in \(L^{2}\Omega^{*}(M,\mathcal{E})\) and \(L^{2}\Omega^{*}(M_{\Gamma})\) are defined as follows._ 1. \((\mathcal{D}_{\min}(M,\mathcal{E}),d_{\mathcal{E}})\) _and_ \((\mathcal{D}_{\max}(M,\mathcal{E}),d_{\mathcal{E}})\) _are defined by_ \[\mathcal{D}_{\min}(M,\mathcal{E}):=\overline{\Omega^{*}_{\mathrm{c}}(M,\mathcal{E}_{\mathrm{c}})}^{\|\cdot\|_{\mathcal{E}}},\qquad\mathcal{D}_{\max}(M,\mathcal{E}):=\overline{\Omega^{*}_{\max}(M,\mathcal{E})}^{\|\cdot\|_{\mathcal{E}}}.\] (2.7) 2. \((\mathcal{D}_{\min}(M_{\Gamma}),d_{\Gamma})\) _and_ \((\mathcal{D}_{\max}(M_{\Gamma}),d_{\Gamma})\) _are defined by_ \[\mathcal{D}_{\min}(M_{\Gamma}):=\overline{\Omega^{*}_{\mathrm{c}}(M_{\Gamma})}^{\|\cdot\|_{\Gamma}},\qquad\mathcal{D}_{\max}(M_{\Gamma}):=\overline{\Omega^{*}_{\max}(M_{\Gamma})}^{\|\cdot\|_{\Gamma}}.\] (2.8) The definition of the maximal domain in (2.8) is equivalent to the classical distributional definition, see [2, (2.31)]. Notice that while the von Neumann theory is carefully developed in e.g. [10] and [11], the definition of the minimal domain in (2.8) is only implicit in the literature. Recall now the notion of \(\mathcal{N}\Gamma\)-Hilbert complexes and isomorphisms between them: **Definition 2.3**.: _Let \(\Gamma\) be a finitely generated discrete group._ 1. _A Hilbert complex \((\mathcal{D}(d_{*}),d_{*})\), \(\mathcal{D}(d_{*})\subset H_{*}\), is an \(\mathcal{N}\Gamma\)-Hilbert complex, if each Hilbert space \(H_{*}\) is a \(\mathcal{N}\Gamma\)-Hilbert module and each differential \(d_{*}\) is a closed and densely defined operator commuting with the action of \(\Gamma\) and satisfying \(d_{*+1}\circ d_{*}\equiv 0\) on the domain of \(d_{*}\)._ 2. _A morphism between \(\mathcal{N}\Gamma\)-Hilbert complexes \(H_{\bullet}:=(\mathcal{D}(d),d)\) with \(\mathcal{D}(d_{*})\subset H_{*}\) and \(K_{\bullet}:=(\mathcal{D}(d^{\prime}),d^{\prime})\) with \(\mathcal{D}(d^{\prime}_{*})\subset K_{*}\), is defined as a sequence of bounded linear operators \(f_{k}:H_{k}\to K_{k}\) for each \(k\), commuting with the \(\Gamma\)-action and satisfying \(f_{k+1}\circ d_{k}=d^{\prime}_{k}\circ f_{k}\) on \(\operatorname{dom}(d_{k})\). We write \(f:H_{\bullet}\to K_{\bullet}\)._ 3. _We say that these \(\mathcal{N}\Gamma\)-Hilbert complexes are isomorphic, if the maps \(f_{k}:H_{k}\to K_{k}\) are isometries and hence \(f_{k}\mathcal{D}(d_{k})=\mathcal{D}(d^{\prime}_{k})\) for each \(k\)._ The minimal and maximal Hilbert complexes above are \(\mathcal{N}\Gamma\)-Hilbert complexes in the sense of Definition 2.3. We arrive at a central observation.
**Proposition 2.4**.: _The isometry \(\Phi:L^{2}\Omega^{*}(M_{\Gamma})\to L^{2}\Omega^{*}(M,\mathcal{E})\) of \(\mathcal{N}\Gamma\)-Hilbert modules in (2.4), defines isomorphisms of the \(\mathcal{N}\Gamma\)-Hilbert complexes_ \[\begin{split}\Phi:(\mathcal{D}_{\min}(M_{\Gamma}),d_{\Gamma})&\to(\mathcal{D}_{\min}(M,\mathcal{E}),d_{\mathcal{E}}),\\ \Phi:(\mathcal{D}_{\max}(M_{\Gamma}),d_{\Gamma})&\to(\mathcal{D}_{\max}(M,\mathcal{E}),d_{\mathcal{E}}).\end{split} \tag{2.9}\] Proof.: One can easily check by construction that on \(\Omega^{*}_{\mathrm{c}}(M,\mathcal{E}_{\mathrm{c}})\) (in fact also on \(\Omega^{*}_{\max}(M,\mathcal{E})\) by a partition of unity argument) \[d_{\mathcal{E}}=\Phi\circ d_{\Gamma}\circ\Phi^{-1}.\] Thus, using the isomorphisms (2.2) and the fact that the map \(\Phi:L^{2}\Omega^{*}(M_{\Gamma})\to L^{2}\Omega^{*}(M,\mathcal{E})\) in (2.4) is an isometry, we obtain \[\begin{split}&\mathcal{D}_{\min}(M,\mathcal{E})=\Phi(\mathcal{D}_{\min}(M_{\Gamma})),\quad\mathrm{d}_{\mathcal{E}}=\Phi\circ\mathrm{d}_{\Gamma}\circ\Phi^{-1}\text{ on }\mathcal{D}_{\min}(M,\mathcal{E}),\\ &\mathcal{D}_{\max}(M,\mathcal{E})=\Phi(\mathcal{D}_{\max}(M_{\Gamma})),\quad\mathrm{d}_{\mathcal{E}}=\Phi\circ\mathrm{d}_{\Gamma}\circ\Phi^{-1}\text{ on }\mathcal{D}_{\max}(M,\mathcal{E}).\end{split} \tag{2.10}\] The statement now follows. ## 3. Existence of \(L^{2}\)-Betti numbers and Novikov-Shubin invariants ### Some analytic properties Let \(D_{\mathrm{rel}}\) and \(D_{\mathrm{abs}}\) denote the self-adjoint operators on \(L^{2}\Omega^{*}(M,g)\) defined as the rolled-up operators of the Hilbert complexes \((\mathcal{D}_{\min}(M),\mathrm{d})\) and \((\mathcal{D}_{\max}(M),\mathrm{d})\), respectively. Let us also introduce the operators \(\Delta_{\mathrm{abs}}:=D_{\mathrm{abs}}^{2}\) and \(\Delta_{\mathrm{rel}}:=D_{\mathrm{rel}}^{2}\). Similarly, let \(D_{\Gamma,\mathrm{rel}}\) and \(D_{\Gamma,\mathrm{abs}}\) denote the self-adjoint operators on \(L^{2}\Omega^{*}(M_{\Gamma},g_{\Gamma})\) defined as the rolled-up operators of the Hilbert complexes \((\mathcal{D}_{\min}(M_{\Gamma}),\mathrm{d}_{\Gamma})\) and \((\mathcal{D}_{\max}(M_{\Gamma}),\mathrm{d}_{\Gamma})\), respectively. We write \[D_{\Gamma,\mathrm{rel}}:=\mathrm{d}_{\Gamma,\min}+\mathrm{d}_{\Gamma,\min}^{*},\quad\Delta_{\Gamma,\mathrm{rel}}:=D_{\Gamma,\mathrm{rel}}^{2},\] \[D_{\Gamma,\mathrm{abs}}:=\mathrm{d}_{\Gamma,\max}+\mathrm{d}_{\Gamma,\max}^{*},\quad\Delta_{\Gamma,\mathrm{abs}}:=D_{\Gamma,\mathrm{abs}}^{2}.\] We define the following sets of functions \[\mathcal{A}(\overline{M}):=\{f\in C(\overline{M}):f|_{M}\in C^{\infty}(M),\ \mathrm{d}f\in L^{\infty}\Omega^{1}(M,g)\},\] \[\mathcal{A}(\overline{M}_{\Gamma}):=\{f\in C(\overline{M}_{\Gamma}):f|_{M_{\Gamma}}\in C^{\infty}(M_{\Gamma}),\ \mathrm{d}f\in L^{\infty}_{\mathrm{loc}}\Omega^{1}(M_{\Gamma},g_{\Gamma})\}.\] Note that given an open cover \(\{U_{\alpha}\}_{\alpha\in I}\) of \(\overline{M}_{\Gamma}\), there exists a partition of unity \(\{\phi_{\alpha^{\prime}}\}_{\alpha^{\prime}\in I^{\prime}}\) with compact supports, subordinate to \(\{U_{\alpha}\}_{\alpha\in I}\), with \(\phi_{\alpha^{\prime}}\in\mathcal{A}(\overline{M}_{\Gamma})\) for each \(\alpha^{\prime}\in I^{\prime}\), see [23, Proposition 3.2.2]. Obviously the analogous result holds true on \(\overline{M}\) with respect to \(\mathcal{A}(\overline{M})\).
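For orientation, here is a simple illustration of the class \(\mathcal{A}(\overline{M})\) in the model case of an exact cone; it is not needed in what follows. On \(C(L)=(0,1)_{x}\times L\) with wedge metric \(g=dx^{2}+x^{2}g_{L}\), any \(f\in C^{\infty}([0,1))\), viewed as a function of \(x\), belongs to \(\mathcal{A}(\overline{M})\): it is continuous up to the cone point, smooth in the interior, and \[|\mathrm{d}f|_{g}=|f^{\prime}(x)|\,|dx|_{g}=|f^{\prime}(x)|\] is bounded. By contrast, \(f(x)=\log x\) is smooth in the interior but \(|\mathrm{d}f|_{g}=x^{-1}\) is unbounded (and \(f\) does not even extend continuously to \(x=0\)), so \(f\notin\mathcal{A}(\overline{M})\).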
**Lemma 3.1**.: _Let \(U\subset\overline{M}_{\Gamma}\) be an open subset such that \(p|_{U}:U\to V\) is an isomorphism, with \(V=p(U)\) and \(p:\overline{M}_{\Gamma}\to\overline{M}\) the covering map. Let \(n\in\mathbb{N}\) and consider any collection \(\{\phi_{1},\ldots,\phi_{n}\}\subset\mathcal{A}(\overline{M}_{\Gamma})\) with \(\mathrm{supp}(\phi_{k})\subset U\). Then_ \[\prod_{k=1}^{n}\big(\phi_{k}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\big) \tag{3.1}\] _is a Hilbert-Schmidt operator for \(n\gg 0\) sufficiently large._ Proof.: Let us denote by \(\Psi:L^{2}\Omega^{*}(\mathrm{reg}(U),g_{\Gamma}|_{\mathrm{reg}(U)})\to L^{2}\Omega^{*}(\mathrm{reg}(V),g|_{\mathrm{reg}(V)})\) the isometry induced by \((p|_{U})^{-1}:V\to U\). Here, we write \(\mathrm{reg}(U)\) and \(\mathrm{reg}(V)\) for the open interior of \(U\) and \(V\), respectively. Let \(\phi\in\mathcal{A}(\overline{M}_{\Gamma})\) with \(\mathrm{supp}(\phi)\subset U\) and \(\omega\in\mathcal{D}(D_{\Gamma,\mathrm{abs}/\mathrm{rel}})\) be arbitrarily fixed. It is not difficult to show that \[\phi\omega\in\mathcal{D}(D_{\Gamma,\mathrm{abs}/\mathrm{rel}}),\quad\Psi(\phi\omega)\in\mathcal{D}(D_{\mathrm{abs}/\mathrm{rel}}),\] \[D_{\mathrm{abs}/\mathrm{rel}}(\Psi(\phi\omega))=\Psi(D_{\Gamma,\mathrm{abs}/\mathrm{rel}}(\phi\omega)).\] Hence we get \[\begin{split}\phi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}&=\Psi^{-1}(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1}\circ(i+D_{\mathrm{abs}/\mathrm{rel}})\Psi\circ\phi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\\ &=\Psi^{-1}(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1}\circ\Psi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})\circ\phi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}.\end{split}\] Note that \((i+D_{\Gamma,\mathrm{abs/rel}})\phi(i+D_{\Gamma,\mathrm{abs/rel}})^{-1}\) is a bounded operator and \((i+D_{\mathrm{abs/rel}})^{-1}\) lies in the \(n\)-Schatten class for \(n\) sufficiently big as a consequence of [1, Theorem 1.1]. Therefore \(\phi(i+D_{\Gamma,\mathrm{abs/rel}})^{-1}\) is \(n\)-Schatten and so we can conclude that (3.1) is Hilbert-Schmidt for \(n\) sufficiently big. Below, we shall impose an analytic assumption. **Assumption 3.2**.: _For any arbitrarily fixed cutoff functions \(\phi\), \(\chi\in\mathcal{A}(\overline{M})\) such that \(\mathrm{supp}(\phi)\cap\mathrm{supp}(\chi)=\varnothing\), the following composition is Hilbert-Schmidt_ \[\phi(i+D_{\mathrm{abs/rel}})^{-1}\chi. \tag{3.2}\] We expect Assumption 3.2 to hold on any compact smoothly stratified (Thom-Mather) pseudo-manifold with an iterated wedge metric. Currently, we can show that the assumption holds in the following two cases. **Proposition 3.3**.: _The above Assumption 3.2 is satisfied in the following two cases:_ 1. \((M,g)\) _has an isolated conical singularity._ 2. \(M\) _is a Witt pseudo-manifold with (non-isolated) singularities of arbitrary depth and_ \(g\) _is an adapted iterated wedge metric as in_ _[_1_, Proposition 5.4]___(notice that it is always possible to endow a Witt pseudo-manifold with such a metric, see again_ _[_1_, Proposition 5.4]__)._ Proof.: Let us abbreviate \(D:=D_{\mathrm{abs/rel}}\). Notice that in the Witt case the operator is essentially self-adjoint, [1, Section 5.8], so \(D_{\mathrm{abs}}=D_{\mathrm{rel}}\) in this case. The proof uses the microlocal analysis of the heat kernel \(e^{-tD^{2}}\) by Mooers [10, Theorem 4.1] for the first case, and the microlocal description of the resolvent \((D-\lambda)^{-1}\) under the Witt assumption by Albin and Gell-Redman [1, Theorem 4.3] for the second case.
In both cases, the microlocal analysis employs blowup techniques, see for example Melrose [19] and Mazzeo [20]. Since the notion of blowups appears only within the limits of this proof, we do not attempt to provide a detailed presentation and instead suggest thinking of blowups in terms of polar coordinates around the corresponding blown-up submanifolds. _Proof of the first case:_ Since Assumption 3.2 is concerned with the resolvent, when using Mooers [10, Theorem 4.1] in the first case we need to pass from the heat operator to the resolvent. We do this using the following relation \[(i-D)\int_{0}^{\infty}e^{-t(\mathrm{Id}+D^{2})}dt=(i-D)(\mathrm{Id}+D^{2})^{-1}=-(i+D)^{-1}.\] In particular, we have a relation between the resolvent and the heat operator \[\phi(i+D)^{-1}\chi=-\int_{0}^{\infty}e^{-t}\phi(i-D)e^{-tD^{2}}\chi\,dt. \tag{3.3}\] In what follows, we are going to explain why this is an equality of polyhomogeneous (in particular smooth in the interior) conormal Schwartz kernels. The asymptotics of the Schwartz kernel of the operator on the right hand side in (3.3) is conveniently described by the heat-space blowup of \(\widetilde{M}\times\widetilde{M}\times[0,\infty)_{\sqrt{t}}\), where by a small abuse of notation we denote the boundary of \(\widetilde{M}\) by \(\partial M\), see [10]. In the isolated case, which we consider here, \(\partial M\) is simply the link of the cone. One first blows up the highest codimension corner \(\partial M\times\partial M\times\{0\}\), which introduces a new boundary face \(\mathrm{ff}\). Then one blows up the temporal diagonal \(\mathrm{diag}(\widetilde{M}\times\widetilde{M})\times\{0\}\), which introduces the boundary face \(\mathrm{td}\). The resulting blowup is referred to as the heat space blowup \(\mathcal{M}^{2}_{h}\) and is illustrated in Figure 2. There, \(x\) and \(\widetilde{x}\) are boundary defining functions of the two copies of \(\widetilde{M}\). The boundary faces \(\mathrm{rf}\), \(\mathrm{lf}\) and \(\mathrm{tf}\) correspond to the boundary faces \(\{x=0\},\{\widetilde{x}=0\}\) and \(\{t=0\}\) before the blowup, respectively. There is also a canonical blowdown map \[\beta:\mathcal{M}^{2}_{h}\to\widetilde{M}\times\widetilde{M}\times[0,\infty).\] Since \(\mathrm{supp}(\phi)\cap\mathrm{supp}(\chi)=\varnothing\), the lift \(\beta^{*}\phi e^{-tD^{2}}\chi\) is in fact supported away from \(\mathrm{ff}\) and \(\mathrm{td}\). Figure 2 illustrates the support of \(\beta^{*}\phi e^{-tD^{2}}\chi\), where we consider the generic case that \(\mathrm{supp}(\phi)\cap\partial M\neq\varnothing\). In that case \(\mathrm{supp}(\chi)\) must be fully contained in the interior of \(M\) and hence the support of \(\beta^{*}\phi e^{-tD^{2}}\chi\) in \(\mathcal{M}^{2}_{h}\) is of the form as indicated by the blue shaded region. It is a central result of Mooers [11] that \(\beta^{*}e^{-tD^{2}}\) is a conormal polyhomogeneous section of \(\beta^{*}E\) on \(\mathcal{M}^{2}_{h}\), smooth in the open interior, where we have abbreviated \[E:=\Lambda^{*}({}^{w}T^{*}M)\boxtimes\Lambda^{*}({}^{w}T^{*}M).\] The kernel \(\beta^{*}\phi e^{-tD^{2}}\chi\) vanishes identically near \(\mathrm{ff}\) and \(\mathrm{td}\) and hence the precise asymptotics of \(\beta^{*}e^{-tD^{2}}\) at \(\mathrm{ff}\) and \(\mathrm{td}\) is irrelevant here. The lift \(\beta^{*}e^{-tD^{2}}\) is vanishing to infinite order at \(\mathrm{tf}\) and is of order \((\alpha,p)\) in its asymptotics at \(\mathrm{rf}\), i.e. \[\beta^{*}e^{-tD^{2}}\sim\rho^{\alpha}_{\mathrm{rf}}\log^{p}(\rho_{\mathrm{rf}}),\quad\rho_{\mathrm{rf}}\to 0. \tag{3.4}\]
The precise value of \((\alpha,p)\) is governed by the spectrum of some operators on the link. Instead of making it explicit here, note that (3.4) translates for any \(u\in C^{\infty}_{c}(M,\Lambda^{*}({}^{w}T^{*}M))\) and any fixed \(t>0\) into an asymptotics \[(e^{-tD^{2}}u)(x)=x^{\alpha}\log^{p}(x)\,G(x), \tag{3.5}\] where \(G(x)\) is a bounded section of \(\Lambda^{*}({}^{w}T^{*}M)\) (with the fibrewise inner product defined by \(g\)) as \(x\to 0\). By definition, \(e^{-tD^{2}}u\in\mathcal{D}(D^{2})\) and hence in particular \(e^{-tD^{2}}u\in L^{2}(M,\Lambda^{*}({}^{w}T^{*}M),g)\). Noting that the volume form of \(g\) is, up to a bounded function on \(\widetilde{M}\), of the form \(x^{\dim\partial M}dx\) times a volume form on \(\partial M\), the membership \(e^{-tD^{2}}u\in L^{2}\) together with (3.5) implies \[2\alpha>-\dim\partial M-1. \tag{3.6}\] In the generic case, illustrated in Figure 2, \(\beta^{*}\phi e^{-tD^{2}}\chi\) is vanishing identically near ff and td, and hence it is a lift of a polyhomogeneous function on \([0,\infty)\times\widetilde{M}\times\widetilde{M}\). We conclude from (3.6) \[\phi e^{-tD^{2}}\chi\in L^{2}(M\times M,E;g),\qquad\phi(i-D)e^{-tD^{2}}\chi\in L^{2}(M\times M,E;g),\] uniformly as \(t\to\infty\) and \(t\to 0\). Note that for the second statement we used exactly the same argument as above: polyhomogeneity of the lift \(\beta^{*}(i-D)e^{-tD^{2}}\), and \((i-D)e^{-tD^{2}}u\in\mathcal{D}(D)\subset L^{2}(M,\Lambda^{*}({}^{w}T^{*}M),g)\) to get a bound of the form (3.6) for its asymptotics at rf. Consequently the Schwartz kernel \(e^{-t}\phi(i-D)e^{-tD^{2}}\chi\) is integrable in \(t\in(0,\infty)\) and hence \[\int_{0}^{\infty}e^{-t}\phi(i-D)e^{-tD^{2}}\chi\,dt\in L^{2}(M\times M,E;g).\] We remark that it is essential for the cutoff functions \(\phi\) and \(\chi\) to have disjoint support, since otherwise the integrand behaves as \(t^{-\dim M}\) on the diagonal and is not integrable. Thus, (3.3) holds as an equality of integral kernels and \(\phi(i+D)^{-1}\chi\) is Hilbert-Schmidt, proving the claim in the first case. _Proof of the second case:_ For the second case let us first consider the case where \(\overline{M}\) is stratified of depth one. Then its resolution \(\widetilde{M}\) is a compact manifold with fibered boundary, denoted by \(\partial M\) with a small abuse of notation. \(\partial M\) is the total space of a fibration \(\varphi:\partial M\to B\) over a compact base manifold \(B\) with fibres \(F\) being compact manifolds as well. The wedge metric is conical on the fibres of the boundary collar \(\mathcal{U}\cong(0,1)\times\partial M\). In this setting the resolvent \((i+D)^{-1}\) is conveniently described as a polyhomogeneous (with bounds) distribution on the edge space blowup of \(\widetilde{M}\times\widetilde{M}\), obtained by blowing up the fibre diagonal \[\operatorname{diag}_{\varphi}(\partial M\times\partial M)=\{(p,p^{\prime})\mid\varphi(p)=\varphi(p^{\prime})\}.\] The blowup introduces a new boundary face ff and is illustrated in Figure 3. There, \(x\) and \(\widetilde{x}\) are boundary defining functions of the two copies of \(\widetilde{M}\). Moreover, \((y,\widetilde{y})\) are local coordinates on the two copies of the base of the fibration \(\varphi:\partial M\to B\), so that locally \(\operatorname{diag}_{\varphi}(\partial M\times\partial M)=\{y=\widetilde{y}\}\). The boundary faces rf and lf correspond to the boundary faces \(\{x=0\}\) and \(\{\widetilde{x}=0\}\) before the blowup, respectively.
There is also a canonical blowdown map \[\beta:\mathcal{M}_{e}^{2}\to\widetilde{M}\times\widetilde{M}.\] Since \(\operatorname{supp}(\phi)\cap\operatorname{supp}(\chi)=\varnothing\), the lift \(\beta^{*}\phi(i+D)^{-1}\chi\) is supported away from ff. Figure 3 also illustrates the support of \(\beta^{*}\phi(i+D)^{-1}\chi\), where we consider the generic case that \(\operatorname{supp}(\phi)\cap\partial M\neq\varnothing\). In that case, the support of \(\beta^{*}\phi(i+D)^{-1}\chi\) in \(\mathcal{M}_{e}^{2}\) is of the form as indicated by the blue shaded region. The shaded region may intersect the boundary face lf, but most importantly it is supported away from the front face ff. Remark now that since we are considering an _adapted_ iterated wedge metric, the operator \(D\) satisfies what Albin and Gell-Redman call the geometric Witt condition; this means that we can indeed apply their results. The microlocal description of the resolvent in Albin and Gell-Redman [1, Theorem 4.3] asserts that the resolvent \((i+D)^{-1}\) lifts to a polyhomogeneous (with bounds) distribution on \(\mathcal{M}_{e}^{2}\) with a conormal singularity along the lifted diagonal. In the generic case illustrated in Figure 3, the lift \(\beta^{*}\phi(i+D)^{-1}\chi\) vanishes identically near ff and the lifted diagonal. Hence \(\beta^{*}\phi(i+D)^{-1}\chi\) is smooth in the interior of \(\mathcal{M}_{e}^{2}\), vanishing to infinite order at ff. Let us now discuss the asymptotics of \(\beta^{*}(i+D)^{-1}\) and \(\beta^{*}\phi(i+D)^{-1}\chi\) at the left face \(\mathrm{lf}\) and the right face \(\mathrm{rf}\). By symmetry, it suffices to study the right face \(\mathrm{rf}\) only. The asymptotics of \(\beta^{*}(i+D)^{-1}\) (and hence also of \(\beta^{*}\phi(i+D)^{-1}\chi\)) at \(\mathrm{rf}\) has a lower bound \(\alpha^{\prime}\), i.e. \[\rho_{\mathrm{rf}}^{-\alpha^{\prime}}\cdot\beta^{*}(i+D)^{-1} \tag{3.7}\] is a section of \(\beta^{*}E\) (with the fibrewise inner product defined by \(g\)), bounded near \(\rho_{\mathrm{rf}}=0\). The bound \(\alpha^{\prime}\) is determined by the spectrum of certain operators on the fibres \(F\). Instead of inferring the explicit value of \(\alpha^{\prime}\) from [1, Theorem 4.3], we argue as in the isolated case. Let us note that (3.7) translates for any \(u\in C_{c}^{\infty}(M,\Lambda^{*}({}^{w}T^{*}M))\) into \[x^{-\alpha^{\prime}}\Big((i+D)^{-1}u\Big)(x) \tag{3.8}\] being bounded near \(x=0\) as a section of \(\Lambda^{*}({}^{w}T^{*}M)\) with the fibrewise inner product defined by \(g\). By definition, \((i+D)^{-1}u\in\mathcal{D}(D)\) and hence in particular \((i+D)^{-1}u\in L^{2}(M,\Lambda^{*}({}^{w}T^{*}M),g)\). Exactly as above in (3.6), this implies \[2\alpha^{\prime}>-\dim F-1. \tag{3.9}\] Since \(\beta^{*}\phi(i+D)^{-1}\chi\) vanishes identically near ff and is smooth in the interior, it is the lift of a polyhomogeneous function on \(\widetilde{M}\times\widetilde{M}\) with bound \(\alpha^{\prime}\) in its asymptotics as \(x,\widetilde{x}\to 0\). Hence by (3.9) \[\phi(i+D)^{-1}\chi\in L^{2}(M\times M,E;g).\] This proves the claim in the second case for stratification depth one. For general stratification depth, the front face ff in \(\mathcal{M}_{e}^{2}\) in Figure 3 requires additional blowups, see [1, Definition 3.1]. However, \(\beta^{*}\phi(i+D)^{-1}\chi\) is supported away from ff and thus these blowups do not affect the argument. This completes the proof.
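As a consistency check on the thresholds (3.6) and (3.9), we record a one-line computation (not part of the argument above): for a density of the form \(x^{\dim\partial M}dx\) near \(x=0\) we have \[\int_{0}^{1}\big|x^{\alpha}\log^{p}(x)\big|^{2}\,x^{\dim\partial M}\,dx=\int_{0}^{1}x^{2\alpha+\dim\partial M}\log^{2p}(x)\,dx<\infty\iff 2\alpha+\dim\partial M>-1,\] since logarithmic factors do not affect integrability at \(x=0\) as long as the power is strictly larger than \(-1\). This is exactly the inequality \(2\alpha>-\dim\partial M-1\) of (3.6); replacing \(\partial M\) by the fibre \(F\) gives (3.9).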
Before we continue to the next result, we shall point out that in the Witt case and for an adapted metric as in the second case of Proposition 3.3, the Gauss-Bonnet operator is essentially self-adjoint on \((M_{\Gamma},g_{\Gamma})\), see [1, Proposition 6.3]. Hence in that case there is no point in distinguishing \(D_{\Gamma,\mathrm{abs}}\) and \(D_{\Gamma,\mathrm{rel}}\). **Proposition 3.4**.: _Let \(U\subset\overline{M}_{\Gamma}\) be an open subset, such that \(p|_{U}:U\to V\) is an isomorphism, with \(V=p(U)\) and \(p:\overline{M}_{\Gamma}\to\overline{M}\) the covering map. Let \(\overline{\phi}\), \(\overline{\chi}\in\mathcal{A}(\overline{M}_{\Gamma})\) with \(\operatorname{supp}(\overline{\phi})\cap\operatorname{supp}(\overline{\chi})=\varnothing\) and \(\operatorname{supp}(\overline{\phi})\subset U\). If Assumption 3.2 holds true, then_ \[\overline{\phi}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}\] _is a Hilbert-Schmidt operator._ Proof.: Let \(\Psi\) be the isometry defined in the proof of Lemma 3.1. Let \(\overline{\theta}\in\mathcal{A}(\overline{M}_{\Gamma})\) with \(\operatorname{supp}(\overline{\theta})\subset U\), \(\operatorname{supp}(\overline{\theta})\cap\operatorname{supp}(\overline{\chi})=\varnothing\) and \(\overline{\theta}\equiv 1\) on an open neighbourhood of \(\operatorname{supp}(\overline{\phi})\). Since \(\overline{\theta}\in\mathcal{A}(\overline{M}_{\Gamma})\), the operator \([D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}](i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}\) is well defined and we have \[\begin{split}&[D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}](i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}\\ =&\,D_{\Gamma,\mathrm{abs}/\mathrm{rel}}\overline{\theta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}-\overline{\theta}(-i+i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}\\ =&\,(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})\overline{\theta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}-\overline{\theta}\,\overline{\chi}\\ =&\,(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})\overline{\theta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}.\end{split} \tag{3.10}\] Let us now introduce a second auxiliary function \(\overline{\zeta}\in\mathcal{A}(\overline{M}_{\Gamma})\) with \(\operatorname{supp}(\overline{\zeta})\subset U\) and \(\overline{\zeta}\equiv 1\) on \(\operatorname{supp}([D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}])\). Then \([D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}]\overline{\zeta}\equiv[D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}]\). Applying \(\Psi\) on both sides of (3.10) implies \[\Psi[D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}]\overline{\zeta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}=\Psi[D_{\Gamma,\mathrm{abs}/\mathrm{rel}},\overline{\theta}](i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}=\Psi(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})\overline{\theta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}.\] Let us denote \(\theta:=\overline{\theta}\circ(p|_{U})^{-1}\).
The presence of \(\overline{\zeta}\) allows us to commute \(\Psi\) with the commutator and yields \[[D_{\mathrm{abs}/\mathrm{rel}},\theta]\Psi\overline{\zeta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}=(i+D_{\mathrm{abs}/\mathrm{rel}})\Psi\overline{\theta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}.\] Applying the resolvent on both sides of this equality implies \[(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1}[D_{\mathrm{abs}/\mathrm{rel}},\theta]\Psi\overline{\zeta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}=\Psi\overline{\theta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}.\] Therefore, by multiplying both sides with \(\phi:=\overline{\phi}\circ(p|_{U})^{-1}\), we get \[\phi(i+D_{\mathrm{abs}/\mathrm{rel}})^{-1}[D_{\mathrm{abs}/\mathrm{rel}},\theta]\Psi\overline{\zeta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}=\phi\Psi\overline{\theta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}=\Psi\overline{\phi}\,\overline{\theta}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi}=\Psi\overline{\phi}(i+D_{\Gamma,\mathrm{abs}/\mathrm{rel}})^{-1}\overline{\chi},\] and so we arrive at \[\Psi^{-1}\phi(i+D_{\mathrm{abs/rel}})^{-1}[D_{\mathrm{abs/rel}},\theta]\Psi\overline{\zeta}(i+D_{\Gamma,\mathrm{abs/rel}})^{-1}\overline{\chi}=\overline{\phi}(i+D_{\Gamma,\mathrm{abs/rel}})^{-1}\overline{\chi}.\] Note now that \(\mathrm{supp}(\phi)\subset V\), \(\phi\in\mathcal{A}(\overline{M})\) and \(\mathrm{supp}(\phi)\cap\mathrm{supp}([D_{\mathrm{abs/rel}},\theta])=\varnothing\). Hence there exists \(\gamma\in\mathcal{A}(\overline{M})\) such that \(\gamma[D_{\mathrm{abs/rel}},\theta]=[D_{\mathrm{abs/rel}},\theta]\) and \(\mathrm{supp}(\phi)\cap\mathrm{supp}(\gamma)=\varnothing\). We obtain \[\Psi^{-1}\phi(i+D_{\mathrm{abs/rel}})^{-1}\gamma[D_{\mathrm{abs/rel}},\theta]\Psi\overline{\zeta}(i+D_{\Gamma,\mathrm{abs/rel}})^{-1}\overline{\chi}=\overline{\phi}(i+D_{\Gamma,\mathrm{abs/rel}})^{-1}\overline{\chi}.\] Finally, since both \([D_{\mathrm{abs/rel}},\theta]\) and \((i+D_{\Gamma,\mathrm{abs/rel}})^{-1}\) are bounded operators and \(\phi(i+D_{\mathrm{abs/rel}})^{-1}\gamma\) is a Hilbert-Schmidt operator by Assumption 3.2, we can conclude that \(\overline{\phi}(i+D_{\Gamma,\mathrm{abs/rel}})^{-1}\overline{\chi}\) is also Hilbert-Schmidt, as required. **Proposition 3.5**.: _In the setting of Proposition 3.4 the operator_ \[(\mathrm{Id}+\Delta_{\Gamma,\mathrm{abs/rel}})^{-n}\] _is \(\Gamma\)-trace class for \(n\gg 0\) sufficiently large._ Proof.: Since the argument is the same for both \(\Delta_{\Gamma,\mathrm{abs}}\) and \(\Delta_{\Gamma,\mathrm{rel}}\), we drop the subscript \(\mathrm{abs/rel}\) in the rest of the proof. Now we note that \((\mathrm{Id}+\Delta_{\Gamma})^{-n}=(i+D_{\Gamma})^{-n}\circ(-i+D_{\Gamma})^{-n}\). Thus it is enough to prove that both \((i+D_{\Gamma})^{-n}\) and \((-i+D_{\Gamma})^{-n}\) are \(\Gamma\)-Hilbert-Schmidt. We show now that \((i+D_{\Gamma})^{-n}\) is \(\Gamma\)-Hilbert-Schmidt. The proof of the other case is identical.
According to [1, Definition (4.3)'] we need to show that \(\theta(i+D_{\Gamma})^{-n}\) is Hilbert-Schmidt, with \(\theta\) an arbitrarily fixed bounded measurable function with compact support on \(\overline{M}_{\Gamma}\). Let \(\{U_{1},\ldots,U_{\ell}\}\) be a finite open cover of \(\mathrm{supp}(\theta)\) such that \(p|_{U_{k}}:U_{k}\to V_{k}=p(U_{k})\) is an isomorphism. Thanks to [13, Proposition 3.2.2] we can find \(\psi_{1},\ldots,\psi_{\ell}\in\mathcal{A}(\overline{M}_{\Gamma})\) such that \(\mathrm{supp}(\psi_{k})\subset U_{k}\) and \(\sum_{k=1}^{\ell}\psi_{k}(x)=1\) for each \(x\in\mathrm{supp}(\theta)\). Thus we have \[\theta(i+D_{\Gamma})^{-n}=\theta\left(\sum_{k=1}^{\ell}\psi_{k}\right)(i+D_{\Gamma})^{-n}=\sum_{k=1}^{\ell}\theta\psi_{k}(i+D_{\Gamma})^{-n}.\] Since \(\theta\) is bounded, it is enough to show that \(\psi_{k}(i+D_{\Gamma})^{-n}\) is Hilbert-Schmidt for any \(k\). Let \(\phi_{1}\in\mathcal{A}(\overline{M}_{\Gamma})\) with \(\mathrm{supp}(\phi_{1})\subset U_{k}\) and \(\mathrm{supp}(\psi_{k})\cap\mathrm{supp}(1-\phi_{1})=\varnothing\). We have \[\begin{split}\psi_{k}(i+D_{\Gamma})^{-n}&=\psi_{k}(i+D_{\Gamma})^{-1}(i+D_{\Gamma})^{1-n}\\ &=\psi_{k}(i+D_{\Gamma})^{-1}(\phi_{1}+1-\phi_{1})(i+D_{\Gamma})^{1-n}\\ &=\psi_{k}(i+D_{\Gamma})^{-1}\phi_{1}(i+D_{\Gamma})^{1-n}+\psi_{k}(i+D_{\Gamma})^{-1}(1-\phi_{1})(i+D_{\Gamma})^{1-n}.\end{split}\] Note that \(\psi_{k}(i+D_{\Gamma})^{-1}(1-\phi_{1})(i+D_{\Gamma})^{1-n}\) is Hilbert-Schmidt, since \((i+D_{\Gamma})^{1-n}\) is bounded and \(\psi_{k}(i+D_{\Gamma})^{-1}(1-\phi_{1})\) is Hilbert-Schmidt thanks to Proposition 3.4. Thus \(\psi_{k}(i+D_{\Gamma})^{-n}\) is Hilbert-Schmidt if and only if \(\psi_{k}(i+D_{\Gamma})^{-1}\phi_{1}(i+D_{\Gamma})^{1-n}\) is Hilbert-Schmidt. By picking, in addition, \(\phi_{2},\ldots,\phi_{n-1}\in\mathcal{A}(\overline{M}_{\Gamma})\) with \(\mathrm{supp}(\phi_{j})\subset U_{k}\) and \(\mathrm{supp}(\phi_{j})\cap\mathrm{supp}(1-\phi_{j+1})=\varnothing\) for each \(j=1,\ldots,n-2\), and iterating the above procedure, we get that \(\psi_{k}(i+D_{\Gamma})^{-n}\) is Hilbert-Schmidt if and only if the composition \[\psi_{k}(i+D_{\Gamma})^{-1}\phi_{1}(i+D_{\Gamma})^{-1}\cdots\phi_{n-1}(i+D_{\Gamma})^{-1} \tag{3.11}\] is Hilbert-Schmidt. Thanks to Lemma 3.1 we know that (3.11) is Hilbert-Schmidt for \(n\) sufficiently large. We can thus finally conclude that \(\psi_{k}(i+D_{\Gamma})^{-n}\) is also Hilbert-Schmidt. **Remark 3.6**.: _Note that this statement in the Witt case, where the operator is in fact essentially self-adjoint, is already claimed in [10, Proposition 7.4]. However, the argument given there is very short and does not properly explain the complexity of the proof._ **Theorem 3.7**.: _Let \(f:\mathbb{R}\to\mathbb{R}\) be any rapidly decreasing function. Then_ \[f(\Delta_{\Gamma,\mathrm{abs/rel}})\] _is \(\Gamma\)-trace class.
In particular, the corresponding heat operators as well as spectral projections to finite intervals are \(\Gamma\)-trace class._ Proof.: Let us consider the function \(g(z):=(1+z)^{n}f(z)\) with \(n>0\) sufficiently large. Note that \(g\) is a bounded function and thus \(g(\Delta_{\Gamma,\mathrm{abs/rel}})\) is a bounded operator. Writing \(f(z)=(1+z)^{-n}g(z)\) we conclude that \[f(\Delta_{\Gamma,\mathrm{abs/rel}})=(\mathrm{Id}+\Delta_{\Gamma,\mathrm{abs/rel}})^{-n}\circ\left((\mathrm{Id}+\Delta_{\Gamma,\mathrm{abs/rel}})^{n}f(\Delta_{\Gamma,\mathrm{abs/rel}})\right)=(\mathrm{Id}+\Delta_{\Gamma,\mathrm{abs/rel}})^{-n}g(\Delta_{\Gamma,\mathrm{abs/rel}})\] is the composition of a bounded operator \(g(\Delta_{\Gamma,\mathrm{abs/rel}})\) with a \(\Gamma\)-trace class operator \((\mathrm{Id}+\Delta_{\Gamma,\mathrm{abs/rel}})^{-n}\). Thus \(f(\Delta_{\Gamma,\mathrm{abs/rel}})\) is \(\Gamma\)-trace class, as required. We have now everything in place to define \(L^{2}\)-Betti numbers and Novikov-Shubin invariants on \(\overline{M}_{\Gamma}\), thereby proving our first main Theorem 1.1. ### Definition of \(L^{2}\)-Betti numbers and Novikov-Shubin invariants We begin by recalling the notion of \(\Gamma\)-dimension of a Hilbert \(\mathcal{N}\Gamma\)-module. As before, we refer the reader to [11, §1 and §2] and [12] for more details. Continuing in the notation of Definition 2.1, we have the following. **Definition 3.8**.: _Recall that a \(\mathcal{N}\Gamma\)-Hilbert module \(H\) is a Hilbert space with an isometric linear \(\Gamma\)-embedding of \(H\) into the tensor product \(\mathcal{H}\otimes\ell^{2}\Gamma\). The \(\Gamma\)-dimension of a Hilbert \(\mathcal{N}\Gamma\)-module \(H\) is defined as the von Neumann trace of the orthogonal projection \(\mathcal{P}:\mathcal{H}\otimes\ell^{2}\Gamma\to H\). Explicitly, let \(\{b_{i}\}\) be an arbitrarily fixed Hilbert basis of \(\mathcal{H}\) and \(e\in\Gamma\) the unit element, see [11, Definition 1.8]. Then we have_ \[\dim_{\Gamma}(H):=\operatorname{Tr}_{\Gamma}(\mathcal{P})=\sum_{i}\left\langle\mathcal{P}(b_{i}\otimes\delta_{e}),b_{i}\otimes\delta_{e}\right\rangle_{\mathcal{H}\otimes\ell^{2}\Gamma}\in[0,\infty]. \tag{3.12}\] As explained in [11, p. 17] the above definition is well-posed and independent of the choices made. Now let us consider such an \(\mathcal{N}\Gamma\)-Hilbert complex \(H_{\bullet}:=(\mathcal{D}(d),d)\) and define in terms of the adjoint \(d^{*}\) the associated Laplace operator \[\Delta_{k}:=d^{*}_{k}\circ d_{k}+d_{k-1}\circ d^{*}_{k-1}. \tag{3.13}\] It is clear that \(\Delta_{k}\) commutes with the \(\Gamma\)-action and thus \(\ker(\Delta_{k})\) has the structure of a Hilbert \(\mathcal{N}\Gamma\)-module. We can therefore define, in view of (3.12), the following. **Definition 3.9**.: _The \(k\)-th \(L^{2}\)-Betti number of \(H_{\bullet}:=(\mathcal{D}(d),d)\) is defined as_ \[b^{k}_{(2),\Gamma}(H_{\bullet}):=\dim_{\Gamma}(\ker(\Delta_{k}))\in[0,\infty]. \tag{3.14}\]
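Two elementary illustrations of these notions, recorded only for orientation and not used below: for the trivial group \(\Gamma=\{e\}\) one has \(\mathcal{N}\Gamma=\mathbb{C}\) and \(\dim_{\Gamma}\) is the ordinary complex dimension, so (3.14) recovers the usual Betti numbers of a Hilbert complex. For \(H=\ell^{2}\Gamma\) (with \(\mathcal{H}=\mathbb{C}\)) the projection \(\mathcal{P}\) is the identity and \[\dim_{\Gamma}(\ell^{2}\Gamma)=\langle\delta_{e},\delta_{e}\rangle_{\ell^{2}\Gamma}=1,\] although \(\ell^{2}\Gamma\) is infinite dimensional over \(\mathbb{C}\) whenever \(\Gamma\) is infinite. In the toy case \(\overline{M}_{\Gamma}=\mathbb{R}\) over \(\overline{M}=S^{1}\) considered earlier, an \(L^{2}\) harmonic function on \(\mathbb{R}\) is affine and hence vanishes, so \(b^{0}_{(2),\Gamma}=0\), in contrast with \(b^{0}(S^{1})=1\).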
In order to define Novikov-Shubin invariants, consider for each \(k\) and \(\lambda\geq 0\) \[\begin{split}&\mathrm{d}_{k}^{\perp}:=\mathrm{d}_{k}\upharpoonright\mathcal{D}(\mathrm{d}_{k})\cap\overline{\mathrm{im}(\mathrm{d}_{k-1})}^{\perp},\\ &\mathcal{L}(\mathrm{d}_{k}^{\perp},\lambda):=\Big\{L\subset H_{k}\text{ Hilbert }\mathcal{N}\Gamma\text{-submodule}\ \Big|\ L\subset\mathcal{D}(\mathrm{d}_{k}^{\perp}),\ \|\mathrm{d}_{k}^{\perp}u\|\leq\lambda\|u\|\text{ for all }u\in L\Big\}.\end{split}\] The \(k\)-th spectral density function and the \(k\)-th Novikov-Shubin invariant of \(H_{\bullet}\) are then defined as \[F_{k}(\lambda):=\sup\{\dim_{\Gamma}(L)\mid L\in\mathcal{L}(\mathrm{d}_{k}^{\perp},\lambda)\},\qquad\alpha_{k}(H_{\bullet}):=\liminf_{\lambda\to 0^{+}}\frac{\log\big(F_{k}(\lambda)-F_{k}(0)\big)}{\log\lambda}\in[0,\infty],\] the latter provided \(F_{k}(\lambda)>F_{k}(0)\) for all \(\lambda>0\); otherwise one sets \(\alpha_{k}(H_{\bullet}):=\infty^{+}\). Applying these definitions to the \(\mathcal{N}\Gamma\)-Hilbert complexes \((\mathcal{D}_{\min/\max}(M_{\Gamma}),\mathrm{d}_{\Gamma})\) yields the \(L^{2}\)-Betti numbers \(b^{k}_{(2),\min/\max}(\overline{M}_{\Gamma})\) and the Novikov-Shubin invariants \(\alpha_{k,\min/\max}(\overline{M}_{\Gamma})\). Let now \(P:=\Delta_{\Gamma,\mathrm{abs/rel}}\) and let \(E_{\lambda}(P)\) denote its spectral projection onto \([0,\lambda]\). By Theorem 3.7, \(E_{\lambda}(P)\) is \(\Gamma\)-trace class for every \(\lambda\geq 0\); in particular \(\dim_{\Gamma}\ker(P)=\operatorname{Tr}_{\Gamma}(E_{0}(P))<\infty\). Thus the \(L^{2}\)-Betti numbers \(b^{k}_{(2),\min/\max}(\overline{M}_{\Gamma})\) are finite. Concerning the second part, since \(E_{\lambda}(P)\) is a \(\Gamma\)-trace class projection for each \(\lambda\), the \(\mathcal{N}\Gamma\)-Hilbert complexes \((\mathcal{D}_{\min/\max}(M_{\Gamma}),d_{\Gamma})\) are Fredholm, see [10, Proposition 2.3] and the argument given in the proof of [10, Lemma 2.6]. We can thus conclude that the Novikov-Shubin invariants \(\alpha_{k,\min/\max}(\overline{M}_{\Gamma})\) are well-defined, as claimed. ## 4. A Hilsum-Skandalis-type replacement of a pullback ### Topological preliminaries This section contains some topological properties that are well known to people familiar with coverings. However, since we could not pin down a specific reference, we prefer to collect and prove the necessary results in this preliminary subsection. In what follows \(X\) and \(Y\) stand for path-connected, locally path-connected and semi-locally simply-connected topological spaces. Consider a possibly disconnected Galois covering \(p:N\to Y\) with \(\Gamma\) the corresponding group of deck transformations. Let \(y\in Y\) and \(n\in p^{-1}(y)\). Then we denote by \[\Phi_{N,y,n}:\pi_{1}(Y,y)\to\Gamma\] the homomorphism of groups that assigns to each \([\alpha]\in\pi_{1}(Y,y)\) the unique element \(\gamma\in\Gamma\) such that \(\gamma(n)=n[\alpha]\), where \[p^{-1}(y)\times\pi_{1}(Y,y)\to p^{-1}(y),\quad(n,[\alpha])\mapsto n[\alpha]\] denotes the monodromy action, see [14, Ch. 13]. **Lemma 4.1**.: _Let \(p:N\to Y\) be a possibly disconnected Galois covering. Then \(N\) is connected if and only if \(\Phi_{N,y,n}\) is surjective for any choice of \(y\in Y\) and \(n\in p^{-1}(y)\)._ Proof.: The surjectivity of \(\Phi_{N,y,n}\) provided \(N\) is connected is well known, see [14, §13.3]. The converse is an easy exercise that we leave to the reader. **Lemma 4.2**.: _Let \(f:X\to Y\) be a continuous map. Let \(x\in X\) be such that \(f(x)=y\) and let \(f_{*}:\pi_{1}(X,x)\to\pi_{1}(Y,y)\) be the homomorphism induced by \(f\).
Consider the Galois \(\Gamma\)-covering \(f^{*}N\to X\). Then for any \((x,n)\in f^{*}N\)_ \[\Phi_{f^{*}N,x,(x,n)}=\Phi_{N,y,n}\circ f_{*}.\] Proof.: Let \(r:f^{*}N\to X\) be the covering map of \(f^{*}N\) and let \(\widehat{f}:f^{*}N\to N\) be the usual map induced by the right projection. The maps combine into a commutative diagram. First of all we note that \(p\circ\widehat{f}=f\circ r\) and that \(\widehat{f}:f^{*}N\to N\) is \(\Gamma\)-equivariant. Let now \([\alpha]\in\pi_{1}(X,x)\) and let \(g=\Phi_{f^{*}N,x,(x,n)}([\alpha])\). Then we have \(g((x,n))=(x,n)[\alpha]\). Consider now \(f_{*}([\alpha])\) and let \(g^{\prime}=\Phi_{N,y,n}(f_{*}([\alpha]))\). Then \(g^{\prime}(n)=n[f\circ\alpha]\). By [14, §13.1] we know that \(n[f\circ\alpha]=\widehat{f}((x,n)[\alpha])\). Hence we have \[g^{\prime}(n)=n[f\circ\alpha]=\widehat{f}((x,n)[\alpha])=\widehat{f}(g((x,n)))=g(\widehat{f}((x,n)))=g(n).\] Therefore \(g^{\prime}(n)=g(n)\) and thus \(g^{\prime}=g\), as desired. **Lemma 4.3**.: _In the setting of Lemma 4.2 assume that \(f\) is a homotopy equivalence. Then \(f^{*}N\) is connected if and only if \(N\) is so._ Proof.: If \(N\) is connected then \(\Phi_{N,y,n}\) is surjective and consequently \(\Phi_{f^{*}N,x,(x,n)}\) is also surjective, as \(\Phi_{f^{*}N,x,(x,n)}=\Phi_{N,y,n}\circ f_{*}\). Thus, thanks to Lemma 4.1, we can conclude that \(f^{*}N\) is connected. Conversely let us assume that \(f^{*}N\) is connected. Then \(\Phi_{f^{*}N,x,(x,n)}\) is surjective and therefore \(\Phi_{N,y,n}\) is also surjective as \(\Phi_{f^{*}N,x,(x,n)}=\Phi_{N,y,n}\circ f_{*}\). We can thus conclude that \(N\) is connected. **Corollary 4.4**.: _Let \(f:X\to Y\) be a homotopy equivalence and let \(p:N\to Y\) be a universal covering of \(Y\). Then \(r:f^{*}N\to X\) is a universal covering of \(X\)._ Proof.: Thanks to Lemma 4.3 we know that \(f^{*}N\) is connected. Let now \(x\in X\) and \(n\in p^{-1}(f(x))\). Let \(\widehat{f}:f^{*}N\to N\) be the map induced by the right projection. We have \(p\circ\widehat{f}=f\circ r\) and thus \((p\circ\widehat{f})_{*}=(f\circ r)_{*}\) as group homomorphisms acting between \(\pi_{1}(f^{*}N,(x,n))\) and \(\pi_{1}(Y,f(x))\). Since \((f\circ r)_{*}\) is injective and \(\pi_{1}(N,n)\) is trivial we deduce that \(\pi_{1}(f^{*}N,(x,n))\) is trivial, as well. We can thus conclude that \(r:f^{*}N\to X\) is a universal covering of \(X\) since it is a connected and simply connected covering of \(X\). More generally we have the following result. **Lemma 4.5**.: _Let \(f:X\to Y\) be a homotopy equivalence and let \(q:M\to X\) and \(p:N\to Y\) be two path-connected Galois \(\Gamma\)-coverings. Assume that there exists \(x\in X\), \(m\in q^{-1}(x)\) and \(n\in p^{-1}(f(x))\) such that_ \[f_{*}(\ker(\Phi_{M,x,m}))=\ker(\Phi_{N,f(x),n}). \tag{4.1}\] _Then \(M\) and \(f^{*}N\) are isomorphic Galois \(\Gamma\)-coverings._ Proof.: First we note that both \(M\) and \(f^{*}N\) are path-connected. Write \(r:f^{*}N\to X\) for the covering map of \(f^{*}N\). Thanks to [13, Proposition 1.37] we know that if \(q_{*}(\pi_{1}(M,m))=r_{*}(\pi_{1}(f^{*}N,(x,n)))\) then \(M\) and \(f^{*}N\) are isomorphic. Thanks to [13, Proposition 1.39] we know that \(q_{*}(\pi_{1}(M,m))=\ker(\Phi_{M,x,m})\) and \(r_{*}(\pi_{1}(f^{*}N,(x,n)))=\ker(\Phi_{f^{*}N,x,(x,n)})\).
Note that from Lemma 4.2 we have \[\ker(\Phi_{f^{*}N,x,(x,n)})=(f_{*})^{-1}(\ker(\Phi_{N,f(x),n}))=(f_{*})^{-1}(f_{*}(\ker(\Phi_{M,x,m})))=\ker(\Phi_{M,x,m}).\] We can thus conclude that \(M\) and \(f^{*}N\) are isomorphic, as required. **Remark 4.6**.: _In [13, p. 382] it is required that \(\Phi_{M,x,m}=\Phi_{N,f(x),n}\circ f_{*}\). Obviously their assumption implies (4.1)._ ### Construction Consider two compact smoothly stratified spaces \(\overline{M}\) and \(\overline{N}\) with the respective iterated incomplete wedge metrics \(g_{M}\) and \(g_{N}\) in the open interior. We recall the definition of a smoothly stratified stratum preserving map between \(\overline{M}\) and \(\overline{N}\). **Definition 4.7**.: ([1, Definition 4 and §9.2], [1, Definitions 2.6, 2.7]) _Let \(\overline{M}\) and \(\overline{N}\) be two smoothly stratified spaces._ 1. _A stratified map between \(\overline{M}\) and \(\overline{N}\) is a continuous map \(f:\overline{M}\to\overline{N}\) which sends a stratum of \(\overline{M}\) into a stratum of \(\overline{N}\); equivalently, the preimage of any stratum in \(\overline{N}\) is a union of strata in \(\overline{M}\). We shall also say that \(f\) is stratum preserving._ 2. _We shall say that such a map is codimension preserving if, in addition, for any stratum \(S\subset\overline{N}\)_ \[\operatorname{codim}f^{-1}(S)=\operatorname{codim}S.\] 3. _A stratified map \(f:\overline{M}\to\overline{N}\) is smooth if it lifts to a smooth \(b\)-map \(\widetilde{f}:\widetilde{M}\to\widetilde{N}\) between the respective resolutions. We shall also say that \(f\) is smoothly stratified. We call a smoothly stratified, codimension preserving map a "smooth strongly stratum preserving map" or, equivalently, a "smooth strongly stratified map". We will always work in this category._ Our setting in this section is as follows. Consider a smooth strongly stratum preserving map \(f:\overline{M}\to\overline{N}\). Consider the disc subbundle \(\mathbb{B}^{N}\subset{}^{e}TN\), with fibres given by open discs of radius \(\delta>0\) with respect to the complete edge metric \(\rho_{N}^{-2}g_{N}\). Here, we obviously assume \(\delta>0\) to be a lower bound for the injectivity radius, which exists since \((N,\rho_{N}^{-2}g_{N})\) is of bounded geometry. Let \(\pi:\widetilde{f}^{*}(\mathbb{B}^{N})\to\widetilde{M}\) be the pullback bundle with the usual associated bundle map \(\mathbb{B}\widetilde{f}:\widetilde{f}^{*}(\mathbb{B}^{N})\to\mathbb{B}^{N}\), induced by the right projection. Consider the exponential \(\exp:\mathbb{B}^{N}|_{N}\to N\) with respect to \(\rho_{N}^{-2}g_{N}\), evaluating tangent vectors at the end points of the corresponding geodesics at time \(\delta\). The map extends to \(\exp:\mathbb{B}^{N}\to\widetilde{N}\), mapping \(\mathbb{B}^{N}|_{\partial\widetilde{N}}\to\partial\widetilde{N}\). These maps combine into a diagram as in Figure 4 (Figure 4: the pullback bundle \(\widetilde{f}^{*}(\mathbb{B}^{N})\)). **Lemma 4.8**.: _The differential \(df\) in the open interior \(M\) extends to a fiberwise linear map between edge as well as wedge tangent bundles_ \[{}^{e}df:{}^{e}TM\to{}^{e}TN,\qquad{}^{w}df:{}^{w}TM\to{}^{w}TN.\]
_The corresponding pullbacks, defined by \({}^{e}df\) and \({}^{w}df\), consequently act as follows_ \[\begin{split}{}^{e}f^{*}:&\ C^{\infty}(\widetilde{N},\Lambda^{*}({}^{e}T^{*}N)\otimes\mathcal{E})\to C^{\infty}(\widetilde{M},\Lambda^{*}({}^{e}T^{*}M)\otimes\widetilde{f}^{*}\mathcal{E}),\\ {}^{w}f^{*}:&\ C^{\infty}(\widetilde{N},\Lambda^{*}({}^{w}T^{*}N)\otimes\mathcal{E})\to C^{\infty}(\widetilde{M},\Lambda^{*}({}^{w}T^{*}M)\otimes\widetilde{f}^{*}\mathcal{E}),\end{split} \tag{4.2}\] _where \(\mathcal{E}\) is the Mishchenko bundle over \(\overline{N}\), as in (2.1), pulled back to \(\widetilde{N}\) by the blowdown map \(\beta:\widetilde{N}\to\overline{N}\) that is associated with the resolution \(\widetilde{N}\)._ Proof.: We define the \(e\)-differential \({}^{e}df\) as follows. By assumption in Definition 4.7, \(f\) preserves the iterated boundary fibration structures and hence sends curves that are tangent to the fibres at the boundary of \(M\) to curves that are tangent to the fibres at the boundary of \(N\). Hence we obtain \({}^{e}df:{}^{e}TM\to{}^{e}TN\). For the action of the differential between the wedge tangent bundles, recall that the total boundary defining functions \(\rho_{M}\in C^{\infty}(\widetilde{M})\) and \(\rho_{N}\in C^{\infty}(\widetilde{N})\) are nowhere vanishing in the open interior, and vanish to first order at each boundary face of \(\widetilde{M}\) and \(\widetilde{N}\), respectively. We write for any \(V\in T_{p}M\) \[df(V)=\frac{\rho_{N}(f(p))}{\rho_{M}(p)}\cdot\frac{\rho_{M}(p)}{\rho_{N}(f(p))}\cdot df(V)=\frac{\rho_{N}(f(p))}{\rho_{M}(p)}\cdot\rho_{N}^{-1}(f(p))\cdot df(\rho_{M}(p)V). \tag{4.3}\] If \(V\) is now a section of \({}^{w}TM\), \(\rho_{M}V\) extends to a section of \({}^{e}TM\), and \(df(\rho_{M}V)\) extends to a section \({}^{e}df(\rho_{M}V)\) of \({}^{e}TN\). Consequently, \(\rho_{N}^{-1}\circ f\cdot df(V)\) extends to a section \(\widetilde{f}^{*}\rho_{N}^{-1}\cdot{}^{e}df(\rho_{M}V)\) of \({}^{w}TN\). Now, since \(f\) is smoothly stratified, we have that \(\widetilde{f}\) sends boundary hypersurfaces of \(\widetilde{M}\) onto boundary hypersurfaces of \(\widetilde{N}\). Thus \(\rho_{M}^{-1}\cdot\widetilde{f}^{*}\rho_{N}\) is smooth and bounded (note that this is not the case for an arbitrary \(b\)-map), and we can define in view of (4.3) for any \(V\in{}^{w}TM\) \[{}^{w}df(V):=\frac{\widetilde{f}^{*}\rho_{N}}{\rho_{M}}\cdot\widetilde{f}^{*}\rho_{N}^{-1}\cdot{}^{e}df(\rho_{M}V)\in{}^{w}TN. \tag{4.4}\]
The action of the pullbacks \({}^{e}f^{*}\) and \({}^{w}f^{*}\), defined by \({}^{e}df\) and \({}^{w}df\), respectively, is then obvious. Now, as it is well known even in the smooth closed case, the pullbacks in (4.2) do not necessarily induce bounded maps in \(L^{2}\) and hence do not define morphisms of \(\mathcal{N}\Gamma\)-Hilbert complexes. This is precisely where the Hilsum-Skandalis type replacement of the pullback is needed. The next proposition is a central element of the construction. **Proposition 4.9**.: _The differential \(d(\exp\circ\mathbb{B}\widetilde{f})\) in the open interior of \(\widetilde{f}^{*}(\mathbb{B}^{N})\) extends to a well-defined surjection between edge and wedge tangent bundles_ \[{}^{e}d(\exp\circ\mathbb{B}\widetilde{f}):{}^{e}T\widetilde{f}^{*}(\mathbb{B}^{N})\to{}^{e}T\widetilde{N},\qquad{}^{w}d(\exp\circ\mathbb{B}\widetilde{f}):{}^{w}T\widetilde{f}^{*}(\mathbb{B}^{N})\to{}^{w}T\widetilde{N}. \tag{4.5}\] _Here, \({}^{e}T\widetilde{f}^{*}(\mathbb{B}^{N})\) and \({}^{w}T\widetilde{f}^{*}(\mathbb{B}^{N})\) are defined as follows: the disc subbundle \(\mathbb{B}^{N}\subset{}^{e}TN\), as well as its pullback \(\widetilde{f}^{*}(\mathbb{B}^{N})\), are manifolds with corners and we consider the corresponding edge and wedge tangent bundles. The pullbacks, defined by \({}^{e}d(\exp\circ\mathbb{B}\widetilde{f})\) and \({}^{w}d(\exp\circ\mathbb{B}\widetilde{f})\), act as follows_ \[\begin{split}{}^{e}(\exp\circ\mathbb{B}\widetilde{f})^{*}:&\ C^{\infty}(\widetilde{N},\Lambda^{*}({}^{e}T^{*}N)\otimes\mathcal{E})\to C^{\infty}(\widetilde{f}^{*}(\mathbb{B}^{N}),\Lambda^{*}({}^{e}T^{*}\widetilde{f}^{*}(\mathbb{B}^{N}))\otimes\mathcal{E}^{\prime}),\\ {}^{w}(\exp\circ\mathbb{B}\widetilde{f})^{*}:&\ C^{\infty}(\widetilde{N},\Lambda^{*}({}^{w}T^{*}N)\otimes\mathcal{E})\to C^{\infty}(\widetilde{f}^{*}(\mathbb{B}^{N}),\Lambda^{*}({}^{w}T^{*}\widetilde{f}^{*}(\mathbb{B}^{N}))\otimes\mathcal{E}^{\prime}),\end{split} \tag{4.6}\] _where \(\mathcal{E}\) is the Mishchenko bundle as in Lemma 4.8 and \(\mathcal{E}^{\prime}:=(\exp\circ\mathbb{B}\widetilde{f})^{*}\mathcal{E}\)._ Proof.: By [1, §4.2], the disc subbundle \(\mathbb{B}^{N}\) is a manifold with iterated boundary fibration structure, where each boundary hypersurface in \(\mathbb{B}^{N}\) is the restriction of \({}^{e}TN\) to a boundary hypersurface of \(\widetilde{N}\). Note that the disc fibres of \(\mathbb{B}^{N}\) are open by definition and hence the boundary of the discs is by definition not part of the boundary fibration structure. Since \(f:\overline{M}\to\overline{N}\) is a smoothly stratified map, the induced bundle map \(\mathbb{B}\widetilde{f}\) is again smoothly stratified. By [1, Lemma 4.3] we know that \(\exp\) is also smoothly stratified and \({}^{e}d\exp\) is surjective (with respect to edge bundles). Surjectivity is not affected by the rescaling as in (4.4). Hence the statement follows by exactly the same argument as in Lemma 4.8.
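A one-dimensional toy illustration of the role played by the boundedness of \(\rho_{M}^{-1}\cdot\widetilde{f}^{*}\rho_{N}\) above (this example is ours and is not used later): let \(\widetilde{M}=\widetilde{N}=[0,1)\) with boundary defining function \(x\). Then \[\widetilde{f}(x)=x^{2}:\quad\frac{\widetilde{f}^{*}\rho_{N}}{\rho_{M}}=\frac{x^{2}}{x}=x\quad\text{is bounded},\qquad\widetilde{f}(x)\equiv\tfrac{1}{2}:\quad\frac{\widetilde{f}^{*}\rho_{N}}{\rho_{M}}=\frac{1}{2x}\quad\text{is unbounded}.\] The constant map is a perfectly good \(b\)-map, but it sends the boundary into the interior; it is the stratum preserving hypothesis, forcing boundary hypersurfaces to be mapped onto boundary hypersurfaces, that rules such maps out.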
Below, it will become necessary to replace \(\mathcal{E}^{\prime}:=(\exp\circ\mathbb{B}\widetilde{f})^{*}\mathcal{E}\) by an isometric pullback bundle, where the isometry commutes with the pullback connections. The notion of connections on \(\mathcal{N}\Gamma\)-Hilbert module bundles is presented e.g. in [1, §3]. In fact, [1, Definition 3.2] introduces connections on \(\mathcal{N}\Gamma\)-Hilbert module bundles, while [1, Definition 3.4] introduces pullback bundles and pullback connections. **Lemma 4.10**.: _Consider the Mishchenko bundle \(\mathcal{E}=\widetilde{N}_{\Gamma}\times_{\Gamma}\ell^{2}\Gamma\) with the canonical flat connection, induced by the exterior derivative on \(\widetilde{N}_{\Gamma}\). Consider the pullback bundles \((\exp\circ\mathbb{B}\widetilde{f})^{*}\mathcal{E}\) and \((\widetilde{f}\circ\pi)^{*}\mathcal{E}\) with the pullback connections, see Figure 4 for the various maps. Then there exists an isometry of \(\mathcal{N}\Gamma\)-Hilbert module bundles, commuting with the flat pullback connections_ \[\mathcal{J}:(\exp\circ\mathbb{B}\widetilde{f})^{*}\mathcal{E}\to(\widetilde{f}\circ\pi)^{*}\mathcal{E}.\] Proof.: Consider the continuous family \[r_{t}:\mathbb{B}^{N}\to\mathbb{B}^{N},\quad\mathbb{B}^{N}_{p}\ni v\mapsto tv\in\mathbb{B}^{N}_{p},\] for \(t\in[0,1]\). Pullback by homotopic maps gives isomorphic bundles and hence there exists an isomorphism \(\mathcal{J}^{\prime}:r^{*}_{0}\exp^{*}\mathcal{E}\to r^{*}_{1}\exp^{*}\mathcal{E}\). Consider the flat connection \(\nabla\) on \(\exp^{*}\mathcal{E}\), obtained as the pullback of the canonical flat connection on the Mishchenko bundle \(\mathcal{E}\). We claim that this isomorphism \(\mathcal{J}^{\prime}\) commutes with the pullback connections \(r^{*}_{0}\nabla\) and \(r^{*}_{1}\nabla\equiv\nabla\). Indeed, consider \[r:[0,1]\times\mathbb{B}^{N}\to\mathbb{B}^{N},\quad(t,v)\mapsto r_{t}(v).\] Consider the pullback bundle \(r^{*}\exp^{*}\mathcal{E}\) and the flat pullback connection \(r^{*}\nabla\) on that bundle. Clearly, we have the following restrictions \[r^{*}\exp^{*}\mathcal{E}\Big{|}_{\{0\}\times\mathbb{B}^{N}}=r_{0}^{*}\exp^{*}\mathcal{E},\qquad r^{*}\nabla\Big{|}_{\{0\}\times\mathbb{B}^{N}}=r_{0}^{*}\nabla,\] \[r^{*}\exp^{*}\mathcal{E}\Big{|}_{\{1\}\times\mathbb{B}^{N}}=r_{1}^{*}\exp^{*}\mathcal{E}\equiv\exp^{*}\mathcal{E},\qquad r^{*}\nabla\Big{|}_{\{1\}\times\mathbb{B}^{N}}=r_{1}^{*}\nabla\equiv\nabla.\] The isomorphism \(\mathcal{J}^{\prime}\) may be constructed via parallel transport \(P\) by \(r^{*}\nabla\): given any \((0,p)\in[0,1]\times\mathbb{B}^{N}\), consider the path \(\gamma(t):=(t,p)\), \(t\in[0,1]\), and define \[\mathcal{J}^{\prime}:=P_{\gamma}:r_{0}^{*}\exp^{*}\mathcal{E}\to r_{1}^{*}\exp^{*}\mathcal{E}.\] It is an isometry, since the metric in each fibre is given by the \(\ell^{2}\Gamma\) inner product and parallel transport preserves the inner product.
The isomorphism commutes with the pullback connections \(r_{0}^{*}\nabla\) and \(r_{1}^{*}\nabla\), since it maps parallel sections to parallel sections: indeed, for any path \(\gamma^{\prime}\) between \(p,q\in\mathbb{B}^{N}\) we have the relation indicated in Figure 5, since parallel transport for flat connections is trivial along closed paths. Thus, parallel sections along \(\gamma^{\prime}\) with respect to \(r_{0}^{*}\nabla\) are mapped by \(\mathcal{J}^{\prime}\) to parallel sections along \(\gamma^{\prime}\) with respect to \(r_{1}^{*}\nabla\). This proves our claim and we can now finish the proof. We have the following sequence of bundle isomorphisms (note that \(r_{1}=\operatorname{id}\)) \[\exp^{*}\mathcal{E}=r_{1}^{*}\exp^{*}\mathcal{E}\cong r_{0}^{*}\exp^{*}\mathcal{E}=(\exp\circ r_{0})^{*}\mathcal{E}=(\pi_{N}\circ r_{0})^{*}\mathcal{E}=\pi_{N}^{*}\mathcal{E}.\] From here we conclude, using \(\pi_{N}\circ\mathbb{B}\widetilde{f}=\widetilde{f}\circ\pi\), \[(\exp\circ\mathbb{B}\widetilde{f})^{*}\mathcal{E}=\mathbb{B}\widetilde{f}^{*}(\exp^{*}\mathcal{E})\cong\mathbb{B}\widetilde{f}^{*}(\pi_{N}^{*}\mathcal{E})=(\pi_{N}\circ\mathbb{B}\widetilde{f})^{*}\mathcal{E}=(\widetilde{f}\circ\pi)^{*}\mathcal{E}.\] This constructs the isomorphism \(\mathcal{J}\). Since the only non-trivial identification in \(\mathcal{J}\) is the isomorphism \(\mathcal{J}^{\prime}\), it is an isometry and commutes with the pullback connections. We can now define the Hilsum-Skandalis replacement for \(f^{*}\). **Theorem 4.11**.: _Let \(f:\overline{M}\to\overline{N}\) be smooth, strongly stratum preserving. Consider the Thom form \(\tau\big[\widetilde{f}^{*}(\mathbb{B}^{N})\big]\) of the disc bundle \(\widetilde{f}^{*}(\mathbb{B}^{N})\). Then the Hilsum-Skandalis maps_ \[{}^{e}HS(f):C^{\infty}(\widetilde{N},\Lambda^{*}({}^{e}T^{*}N)\otimes\mathcal{E})\to C^{\infty}(\widetilde{M},\Lambda^{*}({}^{e}T^{*}M)\otimes\widetilde{f}^{*}\mathcal{E}),\] \[{}^{e}HS(f)\omega:=\pi_{*}\Big(\tau\big[\widetilde{f}^{*}(\mathbb{B}^{N})\big]\wedge\mathcal{J}\circ{}^{e}(\exp\circ\mathbb{B}\widetilde{f})^{*}\omega\Big),\] \[{}^{w}HS(f):C^{\infty}(\widetilde{N},\Lambda^{*}({}^{w}T^{*}N)\otimes\mathcal{E})\to C^{\infty}(\widetilde{M},\Lambda^{*}({}^{w}T^{*}M)\otimes\widetilde{f}^{*}\mathcal{E}),\] \[{}^{w}HS(f)\omega:=\pi_{*}\Big(\tau\big[\widetilde{f}^{*}(\mathbb{B}^{N})\big]\wedge\mathcal{J}\circ{}^{w}(\exp\circ\mathbb{B}\widetilde{f})^{*}\omega\Big),\] _are well-defined and extend to bounded maps on the corresponding \(L^{2}\) completions_
Hence, boundedness of \({}^{e}HS(f)\) is discussed in detail in [11, Proposition 4.5], which extends [13] to non-compact Riemannian manifolds of bounded geometry. Note here that the construction of \({}^{e}HS(f)\) differs from \(T_{f}\) in [11, (112), (147)] only by the boundary isomorphism \(\beta\). We now establish boundedness of \({}^{w}HS(f)\): note that \((\exp\circ\mathbb{B}\widetilde{f})\) and \(\pi\) are both smooth, strongly stratum preserving. Consequently, writing \(\rho_{N}\) and \(\rho_{\widetilde{f}^{*}(\mathbb{B}^{N})}\) for the total boundary defining functions on \(N\) and \(\widetilde{f}^{*}(\mathbb{B}^{N})\), respectively, we conclude that the quotients \[\frac{(\exp\circ\mathbb{B}\widetilde{f})^{*}\rho_{N}}{\rho_{\widetilde{f}^{*} (\mathbb{B}^{N})}},\quad\frac{\rho_{\widetilde{f}^{*}(\mathbb{B}^{N})}}{\pi^{ *}\rho_{M}},\] are uniformly bounded. Given \(\omega\in C_{c}^{\infty}(N,\Lambda^{k}(T^{*}N)\otimes\mathcal{E})\) of any fixed degree \(k\), we compute \[{}^{w}HS(f)\rho_{N}^{k}\omega \equiv{}^{e}HS(f)\rho_{N}^{k}\omega\] \[=\pi_{*}\Big{(}(\exp\circ\mathbb{B}\widetilde{f})^{*}\rho_{N}^{k }\cdot\tau\big{[}\widetilde{f}^{*}(\mathbb{B}^{N})\big{]}\wedge\mathcal{J}\circ( \exp\circ\mathbb{B}\widetilde{f})^{*}\omega\Big{)}\] \[=\rho_{M}^{k}\cdot\pi_{*}\Big{(}\frac{(\exp\circ\mathbb{B} \widetilde{f})^{*}\rho_{N}^{k}}{\pi^{*}\rho_{M}^{k}}\cdot\tau\big{[}\widetilde {f}^{*}(\mathbb{B}^{N})\big{]}\wedge\mathcal{J}\circ(\exp\circ\mathbb{B}\widetilde{ f})^{*}\omega\Big{)}.\] The presence of the bounded factor \(\frac{(\exp\circ\mathbb{B}\widetilde{f})^{*}\rho_{N}^{k}}{\pi^{*}\rho_{M}^{k}}\) does not affect the argument in [11, Proposition 4.5] and hence \(\rho_{M}^{-k}\cdot{}^{w}HS(f)\rho_{N}^{k}\) is again bounded on the \(L^{2}\) completions of \(k\)-th degree twisted differential forms with respect to \(\rho_{N}^{-2}g_{\mathcal{E}}\) and \(\rho_{M}^{-2}g_{\mathcal{E}}\). This is equivalent to boundedness of \({}^{w}HS(f)\) in (4.8) as claimed. The last claim that \({}^{e/w}HS(f)\) commutes with the de Rham differentials is proved in [1, Proposition 9.3 (i)], where Lemma 4.10 is implicitly used. ### Hilsum-Skandalis maps for homotopic maps Consider a continuous family \(f_{s}:\overline{M}\to\overline{N}\) of smooth strongly stratum preserving maps. Consider the pullback bundles \(\widetilde{f}_{s}^{*}(\mathbb{B}^{N})\). Homotopic maps induce vector bundle isomorphisms, and hence there exists a continuous family \(A_{s}:\widetilde{f}_{0}^{*}(\mathbb{B}^{N})\to\widetilde{f}_{s}^{*}(\mathbb{B} ^{N})\) of vector bundle isomorphisms over \(\widetilde{M}\). Similarly, there exists a continuous family \(\beta_{s}:\widetilde{f}_{s}^{*}\mathcal{E}\to\widetilde{f}_{0}^{*}\mathcal{E}\) of bundle isomorphisms, which by the same argument as in Lemma 4.10 are isometries of \(\mathcal{N}\Gamma\)-Hilbert module bundles, commuting with the pullback connections. 
Motivated by Theorem 4.11 we set (we change the construction slightly) \[{}^{w}\mathrm{HS}(f_{\mathrm{s}})^{\prime}:\mathrm{C}^{\infty}( \widetilde{\mathrm{N}},\Lambda^{\ell}({}^{w}\mathrm{T}^{*}\mathrm{N})\otimes \mathcal{E})\to\mathrm{C}^{\infty}(\widetilde{\mathrm{M}},\Lambda^{\ell}({}^{ w}\mathrm{T}^{*}\mathrm{M})\otimes\widetilde{\mathrm{f}}_{0}^{*}\mathcal{E}), \tag{4.9}\] \[{}^{w}\mathrm{HS}(f_{\mathrm{s}})^{\prime}\omega:=(-1)^{\ell}{}^{ w}\pi_{*}\Big{(}\tau\big{[}\widetilde{\mathrm{f}}_{0}^{*}(\mathbb{B}^{\mathrm{N}}) \big{]}\wedge\mathcal{J}_{\mathrm{s}}\circ\mathcal{J}\circ{}^{w}(\exp\circ \mathbb{B}\widetilde{\mathrm{f}}_{\mathrm{s}}\circ\mathrm{A}_{\mathrm{s}})^{*}\omega \Big{)},\] where \(\mathcal{J}_{\mathrm{s}}\coloneqq\beta_{\mathrm{s}}\) is the isometry introduced above and \(\mathcal{J}\) is constructed in Lemma 4.10. As before, \({}^{w}\mathrm{HS}(f_{\mathrm{s}})^{\prime}\) extends to a bounded map between the \(\mathrm{L}^{2}\)-completions. The following proposition explains how these maps define a chain homotopy. **Proposition 4.12**.: _Write \(\mathfrak{p}_{\mathrm{s}}:=\exp\circ\mathbb{B}\widetilde{\mathrm{f}}_{\mathrm{s}} \circ\mathrm{A}_{\mathrm{s}}:\widetilde{\mathrm{f}}_{0}^{*}(\mathbb{B}^{\mathrm{N}}) \to\widetilde{\mathrm{N}}\) and set_ \[\mathfrak{p}:\widetilde{\mathrm{f}}_{0}^{*}(\mathbb{B}^{\mathrm{N}})\times[0,1]\to \widetilde{\mathrm{N}},\qquad(\mathrm{v},\mathrm{s})\mapsto\mathfrak{p}_{ \mathrm{s}}(\mathrm{v}).\] _Then \({}^{w}\mathrm{HS}(f_{1})^{\prime}-{}^{w}\mathrm{HS}(f_{0})^{\prime}=\mathrm{d }\mathrm{K}+\mathrm{K}\mathrm{d}\) with \(\mathrm{K}\) bounded in \(\mathrm{L}^{2}\) and defined by_ \[\begin{split}&\mathrm{K}:\mathrm{C}^{\infty}(\widetilde{\mathrm{N}},\Lambda^{*}({}^{w}\mathrm{T}^{*}\mathrm{N})\otimes\mathcal{E})\to\mathrm{C}^{ \infty}(\widetilde{\mathrm{M}},\Lambda^{*-1}({}^{w}\mathrm{T}^{*}\mathrm{M}) \otimes\widetilde{\mathrm{f}}_{0}^{*}\mathcal{E}),\\ &\mathrm{K}\omega:=(-1)^{\ell}{}^{w}\pi_{*}\Big{(}\tau\big{[} \widetilde{\mathrm{f}}_{0}^{*}(\mathbb{B}^{\mathrm{N}})\big{]}\wedge\mathcal{J}_{ \mathrm{s}}\circ\mathcal{J}\circ\int_{0}^{1}\iota_{\partial_{\mathrm{s}}} \big{(}{}^{w}\mathfrak{p}^{*}\omega\big{)}\Big{)}.\end{split} \tag{4.10}\] Proof.: Consider \(\omega\in\mathrm{C}^{\infty}(\widetilde{\mathrm{N}},\Lambda^{\ell}({}^{w} \mathrm{T}^{*}\mathrm{N})\otimes\mathcal{E})\). The statement follows by plugging the next computation into (4.9). We change the notation of our differentials \(\mathrm{d}_{\mathcal{E}}\) by changing the lower index to specify the underlying space and obtain \[\begin{split}&\mathrm{d}_{\widetilde{\mathrm{f}}_{0}^{*}(\mathbb{B}^{ \mathrm{N}})}\int_{0}^{1}\iota_{\partial_{\mathrm{s}}}\big{(}{}^{w} \mathfrak{p}^{*}\omega\big{)}=\int_{0}^{1}\iota_{\partial_{\mathrm{s}}} \mathrm{d}_{\widetilde{\mathrm{f}}_{0}^{*}(\mathbb{B}^{\mathrm{N}})}\big{(}{}^{w} \mathfrak{p}^{*}\omega\big{)}\\ &=\int_{0}^{1}\iota_{\partial_{\mathrm{s}}}\Big{(}\mathrm{d}_{ \widetilde{\mathrm{f}}_{0}^{*}(\mathbb{B}^{\mathrm{N}})\times[0,1]}\big{(}{}^{w} \mathfrak{p}^{*}\omega\big{)}+(-1)^{\ell+1}\frac{\partial\big{(}{}^{w} \mathfrak{p}^{*}\omega\big{)}}{\partial\mathrm{s}}\wedge\mathrm{d}s\Big{)}\\ &=\int_{0}^{1}\iota_{\partial_{\mathrm{s}}}\Big{(}{}^{w} \mathfrak{p}^{*}\big{(}\mathrm{d}_{\widetilde{\mathrm{N}}}\omega\big{)}\Big{)} +{}^{w}\mathfrak{p}_{1}^{*}\omega-{}^{w}\mathfrak{p}_{0}^{*}\omega.\end{split}\] ## 5. Stability of \(\mathrm{L}^{2}\)-Betti numbers and Novikov-Shubin invariants Let us first recall the notion of homotopy equivalences between \(\mathcal{N}\Gamma\)-Hilbert complexes. We refer to [10, §1 and §2] and [11, §4] for more details. 
We continue in the notation fixed in Definition 2.3. **Definition 5.1**.: _Let \(\Gamma\) be a finitely generated discrete group and recall Definition 2.3._ 1. _A homotopy between two morphisms of_ \(\mathcal{N}\Gamma\)_-Hilbert complexes_ \(f,h:H_{\bullet}\to K_{\bullet}\) _is a sequence of bounded linear operators_ \(T_{k}:H_{k}\to K_{k-1}\) _commuting with the_ \(\Gamma\)_-action such that_ \[f_{k}-h_{k}=T_{k+1}\circ d_{k}+d_{k-1}\circ T_{k}\text{ on }\mathcal{D}(d_{k}) \subset H_{k}.\] 2. _Two_ \(\mathcal{N}\Gamma\)_-Hilbert complexes_ \(H_{\bullet}\) _and_ \(K_{\bullet}\) _are said to be homotopy equivalent if there are morphisms_ \(f:H_{\bullet}\to K_{\bullet}\) _and_ \(h:K_{\bullet}\to H_{\bullet}\) _such that_ \(f\circ h\) _and_ \(h\circ f\) _are homotopic to the identity morphisms of_ \(K_{\bullet}\) _and_ \(H_{\bullet}\) _respectively._ We now introduce homotopy equivalences in the smoothly stratified setting. **Definition 5.2**.: _Consider two compact smoothly stratified spaces \(\overline{M}\) and \(\overline{N}\). We say that \(\overline{M}\) and \(\overline{N}\) are "smoothly stratified codimension-preserving homotopy equivalent" if there exist two smooth strongly stratum preserving maps \(f:\overline{M}\to\overline{N}\) and \(h:\overline{N}\to\overline{M}\), such that \(f\circ h\sim id_{\overline{N}}\) and \(h\circ f\sim id_{\overline{M}}\) with homotopies given by continuous families of smooth strongly stratum preserving maps._ For more on homotopy equivalences in the smoothly stratified setting we refer the reader to [1], where, in particular, a result on the approximation of smoothly stratified codimension-preserving homotopy equivalences by smooth ones is established. We show that an equivalence as in Definition 5.2 induces a chain homotopy equivalence of minimal and maximal Hilbert complexes for \(\overline{M}_{\Gamma}\) and \(\overline{N}_{\Gamma}\). **Corollary 5.3**.: _Let \(\overline{M}\) and \(\overline{N}\) be compact smoothly stratified spaces that are smoothly stratified codimension-preserving homotopy equivalent in the sense of Definition 5.2. Let \(\overline{N}_{\Gamma}\) be a Galois \(\Gamma\)-covering over \(\overline{N}\). Let \(\mathcal{E}_{\overline{N}}\) be the associated Mishchenko bundle over \(\overline{N}\). Then the \(\mathcal{N}\Gamma\)-Hilbert complexes_ \[(\mathcal{D}_{\min/\max}(M,f^{*}\mathcal{E}_{\overline{N}}),d_{f^{*}\mathcal{ E}_{\overline{N}}})\text{ and }(\mathcal{D}_{\min/\max}(N,\mathcal{E}_{\overline{N}}),d_{\mathcal{E}_{ \overline{N}}})\] _are chain homotopy equivalent._ Proof.: Consider the two smooth strongly stratum preserving maps \(f:\overline{M}\to\overline{N}\) and \(h:\overline{N}\to\overline{M}\) that define the smoothly stratified codimension-preserving homotopy equivalence between \(\overline{M}\) and \(\overline{N}\). In particular, \(f\circ h\sim id_{\overline{N}}\) with the homotopy given by a continuous family \(u_{s}:\overline{N}\to\overline{N},s\in[0,1]\), of smooth strongly stratum preserving maps. Similarly, \(h\circ f\sim id_{\overline{M}}\) with the homotopy given by a continuous family \(v_{s}:\overline{M}\to\overline{M},s\in[0,1]\), of smooth strongly stratum preserving maps. We have \[u_{1}=f\circ h,u_{0}=id_{\overline{N}},\qquad v_{1}=h\circ f,v_{0}=id_{ \overline{M}}.\] Consider the Hilsum-Skandalis replacements \({}^{w}HS(u_{s})^{\prime}\) and \({}^{w}HS(v_{s})^{\prime}\) of the homotopies. 
By Theorem 4.11 these maps are bounded on the \(L^{2}\) completions, commute with the differential by [1, Proposition 9.3 (i)] and hence define bounded maps between the minimal and maximal Hilbert complexes \[{}^{w}HS(u_{s})^{\prime} :(\mathcal{D}_{\min/\max}(N,\mathcal{E}),d_{\mathcal{E}})\to( \mathcal{D}_{\min/\max}(N,\mathcal{E}),d_{\mathcal{E}}),\] \[{}^{w}HS(v_{s})^{\prime} :(\mathcal{D}_{\min/\max}(M,\mathcal{E}),d_{\mathcal{E}})\to( \mathcal{D}_{\min/\max}(M,\mathcal{E}),d_{\mathcal{E}}).\] Moreover, by Proposition 4.12, we find for appropriate \(K_{u},K_{v}\) \[\begin{split}&{}^{w}HS(f\circ h)^{\prime}-{}^{w}HS(id_{\overline{N}})^{ \prime}\equiv{}^{w}HS(u_{1})^{\prime}-{}^{w}HS(u_{0})^{\prime}=d_{N}K_{u}+K_{u}d _{N},\\ &{}^{w}HS(h\circ f)^{\prime}-{}^{w}HS(id_{\overline{M}})^{\prime }\equiv{}^{w}HS(v_{1})^{\prime}-{}^{w}HS(v_{0})^{\prime}=d_{M}K_{v}+K_{v}d_{M}. \end{split} \tag{5.1}\] Note that \(K_{u}\) and \(K_{v}\) do not commute with the differential and hence their boundedness in the \(L^{2}\) spaces, as asserted in Proposition 4.12, does not yet imply that they preserve the minimal and maximal domains. However, their relation (see (5.1)) with \({}^{w}HS(u_{s})^{\prime}\) and \({}^{w}HS(v_{s})^{\prime}\), which do preserve the domains, implies \[\begin{split} K_{u}:\mathcal{D}^{*}_{\min/\max}(N,\mathcal{E}) \to\mathcal{D}^{*-1}_{\min/\max}(N,\mathcal{E}),\\ K_{v}:\mathcal{D}^{*}_{\min/\max}(M,\mathcal{E})\to\mathcal{D}^{*-1 }_{\min/\max}(M,\mathcal{E}).\end{split} \tag{5.2}\] If we had \({}^{w}HS(id_{\overline{N}})^{\prime}=Id\) and \({}^{w}HS(id_{\overline{M}})^{\prime}=Id\), this would already mean that the \(\mathcal{N}\Gamma\)-Hilbert complexes \((\mathcal{D}_{\min/\max}(M,\mathcal{E}),d_{\mathcal{E}})\) and \((\mathcal{D}_{\min/\max}(N,\mathcal{E}),d_{\mathcal{E}})\) are chain homotopy equivalent as in Definition 5.1. This would finish the proof. However, this is not the case. In order to compute \(({}^{w}HS(id_{\overline{N}})^{\prime}-Id)\) as well as \(({}^{w}HS(id_{\overline{M}})^{\prime}-Id)\), we consider the exponential maps as above (recall that the disc fibres in the bundles \(\mathbb{B}^{N}\) and \(\mathbb{B}^{M}\) are of radius \(\delta\), some lower bound for the injectivity radius) \[\begin{split}&\exp^{N}:\mathbb{B}^{N}|_{N}\to N,\qquad V\mapsto \gamma_{V}(\delta),\\ &\exp^{M}:\mathbb{B}^{M}|_{M}\to M,\qquad V\mapsto\gamma_{V}( \delta),\end{split}\] where \(\gamma_{V}\) denotes the geodesic with respect to the complete edge metrics on either \(N\) or \(M\). Instead of evaluating the geodesics at \(s=\delta\), we evaluate them at any \(s\in[0,\delta]\) and define \[\begin{split}&\exp_{s}^{N}:\mathbb{B}^{N}|_{N}\to N,\qquad V \mapsto\gamma_{V}(s),\\ &\exp_{s}^{M}:\mathbb{B}^{M}|_{M}\to M,\qquad V\mapsto\gamma_{V}( s).\end{split}\] Applying Proposition 4.12 to \(p_{s}\), which is defined in terms of the extensions \(\exp_{s}^{N}:\mathbb{B}^{N}\to\widetilde{N}\) and \(\exp_{s}^{M}:\mathbb{B}^{M}\to\widetilde{M}\), gives \[\begin{split}&{}^{w}HS(id_{\overline{N}})^{\prime}-Id=d_{ \mathcal{E}}K_{u}^{\prime}+K_{u}^{\prime}d_{\mathcal{E}},\qquad K_{u}^{\prime }:\mathcal{D}^{*}_{\min/\max}(N,\mathcal{E})\to\mathcal{D}^{*-1}_{\min/\max}( N,\mathcal{E}),\\ &{}^{w}HS(id_{\overline{M}})^{\prime}-Id=d_{\mathcal{E}}K_{v}^{ \prime}+K_{v}^{\prime}d_{\mathcal{E}},\qquad K_{v}^{\prime}:\mathcal{D}^{*}_{ \min/\max}(M,\mathcal{E})\to\mathcal{D}^{*-1}_{\min/\max}(M,\mathcal{E}).\end{split} \tag{5.3}\] Combining (5.1), (5.2) and (5.3) proves the statement. 
### Proof of stability Our proof is now a consequence of the abstract stability result in [11, Proposition 4.1], see also [16, Theorem 2.19], which we now state for convenience. **Theorem 5.4**.: _Consider two \(\mathcal{N}\Gamma\) Hilbert complexes \((\mathcal{D},d)\) and \((\mathcal{D}^{\prime},d^{\prime})\). If they are chain homotopy equivalent as \(\mathcal{N}\Gamma\) Hilbert complexes, then their spectral density functions \(F_{*}(\lambda,(\mathcal{D},d))\) and \(F_{*}(\lambda,(\mathcal{D}^{\prime},d^{\prime}))\) are dilatationally equivalent, cf. [16, Definition 2.7], and in particular we have the following._ 1. _The Betti numbers_ \(b^{*}_{(2),\Gamma}(\mathcal{D},d)\) _and_ \(b^{*}_{(2),\Gamma}(\mathcal{D}^{\prime},d^{\prime})\) _are equal._ 2. \((\mathcal{D},d)\) _is Fredholm iff_ \((\mathcal{D}^{\prime},d^{\prime})\) _is so and the Novikov-Shubin invariants_ \(\alpha_{*}(\mathcal{D},d)\) _and_ \(\alpha_{*}(\mathcal{D}^{\prime},d^{\prime})\) _are equal._ We can now prove our second main result, Theorem 1.2. Proof of Theorem 1.2.: **Step 1)** Using (in both lines below) Proposition 2.4 for the first equality and Corollary 5.3 for the second equality (this is the only place where the Hilsum-Skandalis replacement is used), we know that \[\mathsf{b}^{*}_{(2),\max/\min}(\overline{\mathsf{N}}_{\Gamma})= \mathsf{b}^{*}_{(2),\max/\min}(\overline{\mathsf{N}},\mathcal{E}_{\overline{ \mathsf{N}}_{\Gamma}})=\mathsf{b}^{*}_{(2),\max/\min}(\overline{\mathsf{M}}, \mathsf{f}^{*}\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}),\] \[\alpha_{*,\max/\min}(\overline{\mathsf{N}}_{\Gamma})=\alpha_{*, \max/\min}(\overline{\mathsf{N}},\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}) =\alpha_{*,\max/\min}(\overline{\mathsf{M}},\mathsf{f}^{*}\mathcal{E}_{ \overline{\mathsf{N}}_{\Gamma}}).\] **Step 2)** We claim that \(\mathsf{f}^{*}\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}\cong\mathcal{E}_{ \mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}}\) and that the isomorphism commutes with the natural flat pullback connections. The proof amounts to understanding the isomorphism explicitly. Consider an atlas \(\{\mathsf{U}_{\alpha}\}_{\alpha}\) on \(\overline{\mathsf{N}}\), consisting of trivializing neighborhoods for the Galois \(\Gamma\)-covering \(\mathsf{p}:\overline{\mathsf{N}}_{\Gamma}\to\overline{\mathsf{N}}\). The covering comes with local trivializations \[\phi_{\alpha}:\overline{\mathsf{N}}_{\Gamma}\Big{|}_{\mathsf{U}_{\alpha}}\to \mathsf{U}_{\alpha}\times\Gamma,\quad\mathsf{x}\mapsto(\mathsf{p}(\mathsf{x}),\phi_{\alpha}(\mathsf{x})),\] and transition functions \(\phi_{\alpha,\beta}:\mathsf{U}_{\alpha}\cap\mathsf{U}_{\beta}\to\Gamma\). Let us recall the construction of the associated flat bundle \(\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}=\overline{\mathsf{N}}_{\Gamma} \times_{\Gamma}\ell^{2}\Gamma\) in order to fix notation. The Mishchenko bundle is defined as a set of equivalence classes \[\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}:=\{[(\mathsf{x},\mathsf{v})] \subset\overline{\mathsf{N}}_{\Gamma}\times\ell^{2}\Gamma\mid\forall_{\gamma \in\Gamma}:(\mathsf{x},\mathsf{v})\sim(\gamma\cdot\mathsf{x},\gamma^{-1}\cdot \mathsf{v})\},\] where \(\Gamma\) acts naturally on \(\overline{\mathsf{N}}_{\Gamma}\) and \(\ell^{2}\Gamma\). 
Its bundle structure comes from the local trivializations (the transition functions are given by the action of \(\phi_{\alpha,\beta}\) on \(\ell^{2}\Gamma\)) \[\Phi_{\alpha}:\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}\Big{|}_{\mathsf{U }_{\alpha}}\to\mathsf{U}_{\alpha}\times\ell^{2}\Gamma,\quad[(\mathsf{x},\mathsf{ v})]\mapsto(\mathsf{p}(\mathsf{x}),\phi_{\alpha}(\mathsf{x})\cdot\mathsf{v}).\] We now consider the pullbacks of these objects by \(\mathsf{f}\): the pullback \(\mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}\) is a Galois \(\Gamma\)-covering over \(\overline{\mathsf{M}}\), and the pullback \(\mathsf{f}^{*}\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}\) is a bundle of Hilbert modules over \(\overline{\mathsf{M}}\). Both are defined as sets as follows \[\mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}:=\left\{(\mathsf{q},\mathsf{x}) \in\overline{\mathsf{M}}\times\overline{\mathsf{N}}_{\Gamma}\mid\mathsf{f}( \mathsf{q})=\mathsf{p}(\mathsf{x})\right\},\] \[\mathsf{f}^{*}\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}:=\left\{(\mathsf{q},e) \in\overline{\mathsf{M}}\times\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}\mid\mathsf{f}( \mathsf{q})=\pi_{\mathcal{E}}(e)\right\},\] where \(\pi_{\mathcal{E}}:\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}\to\overline{\mathsf{N}}\) denotes the bundle projection. Their covering, respectively bundle, structures are given in terms of the atlas \(\{\mathsf{f}^{-1}(\mathsf{U}_{\alpha})\}_{\alpha}\) on \(\overline{\mathsf{M}}\) of trivializing neighborhoods and local trivializations \[\mathsf{f}^{*}\phi_{\alpha}:\mathsf{f}^{*}\overline{\mathsf{N}}_{ \Gamma}\Big{|}_{\mathsf{f}^{-1}(\mathsf{U}_{\alpha})}\to\mathsf{f}^{-1}(\mathsf{ U}_{\alpha})\times\Gamma,\quad(\mathsf{q},\mathsf{x})\mapsto(\mathsf{q},\phi_{\alpha}( \mathsf{x})),\] \[\mathsf{f}^{*}\Phi_{\alpha}:\mathsf{f}^{*}\mathcal{E}_{\overline{ \mathsf{N}}_{\Gamma}}\Big{|}_{\mathsf{f}^{-1}(\mathsf{U}_{\alpha})}\to\mathsf{f }^{-1}(\mathsf{U}_{\alpha})\times\ell^{2}\Gamma,\quad(\mathsf{q};[(\mathsf{x}, \mathsf{v})])\mapsto(\mathsf{q},\phi_{\alpha}(\mathsf{x})\cdot\mathsf{v}),\] and transition functions given in both cases by \(\mathsf{f}^{*}\phi_{\alpha,\beta}:\mathsf{f}^{-1}(\mathsf{U}_{\alpha})\cap \mathsf{f}^{-1}(\mathsf{U}_{\beta})\to\Gamma\). Our task is to compare \(\mathsf{f}^{*}\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}\) to the bundle \(\mathcal{E}_{\mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}}:=\mathsf{f}^{*} \overline{\mathsf{N}}_{\Gamma}\times_{\Gamma}\ell^{2}\Gamma\). The latter bundle is defined as a set of equivalence classes \[\mathcal{E}_{\mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}}:=\Big{\{}\Big{[} \big{(}(\mathsf{q},\mathsf{x});\mathsf{v}\big{)}\Big{]}\subset\mathsf{f}^{*} \overline{\mathsf{N}}_{\Gamma}\times\ell^{2}\Gamma\mid\forall_{\gamma\in\Gamma}: \big{(}(\mathsf{q},\mathsf{x});\mathsf{v}\big{)}\sim\big{(}(\mathsf{q},\gamma \cdot\mathsf{x}),\gamma^{-1}\cdot\mathsf{v}\big{)}\Big{\}}.\] Its bundle structure is given by the local trivializations \[\Psi_{\alpha}:\mathcal{E}_{\mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}}\Big{|}_ {\mathsf{f}^{-1}(\mathsf{U}_{\alpha})}\to\mathsf{f}^{-1}(\mathsf{U}_{\alpha}) \times\ell^{2}\Gamma,\quad\Big{[}\big{(}(\mathsf{q},\mathsf{x});\mathsf{v} \big{)}\Big{]}\mapsto(\mathsf{q},\phi_{\alpha}(\mathsf{x})\cdot\mathsf{v}),\] with transition functions given as before for \(\mathsf{f}^{*}\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}\) by \(\mathsf{f}^{*}\phi_{\alpha,\beta}\). 
We can now define an isomorphism \(\mathsf{f}^{*}\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}\cong\mathcal{E}_{ \mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}}\), namely \[\mathrm{I}:\mathcal{E}_{\mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}}\to \mathsf{f}^{*}\mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}},\quad\Big{[}\big{(} (\mathsf{q},\mathsf{x});\mathsf{v}\big{)}\Big{]}\mapsto(\mathsf{q};[(\mathsf{x },\mathsf{v})]).\] One can easily check that this is well-defined, since \[\mathrm{I}\Big{[}\big{(}(\mathsf{q},\gamma\cdot\mathsf{x});\gamma^{-1} \cdot\mathsf{v}\big{)}\Big{]}=\big{(}\mathsf{q};[(\gamma\cdot\mathsf{x}, \gamma^{-1}\cdot\mathsf{v})]\big{)}=\big{(}\mathsf{q};[(\mathsf{x}, \mathsf{v})]\big{)}.\] By construction we have a commutative diagram as in Figure 7. Since \(\mathrm{I}\) acts as the identity in local trivializations, it commutes with the natural flat pullback connections and hence we have a functoriality result \[\mathsf{b}^{*}_{(2),\max/\min}(\overline{\mathsf{M}},\mathsf{f}^{*} \mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}) =\mathsf{b}^{*}_{(2),\max/\min}(\overline{\mathsf{M}},\mathcal{E} _{\mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}}),\] \[\alpha_{*,\max/\min}(\overline{\mathsf{M}},\mathsf{f}^{*} \mathcal{E}_{\overline{\mathsf{N}}_{\Gamma}}) =\alpha_{*,\max/\min}(\overline{\mathsf{M}},\mathcal{E}_{ \mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}}).\] **Step 3)** Now we have \[\mathsf{b}^{*}_{(2),\max/\min}(\overline{\mathsf{M}},\mathcal{E} _{\mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}})=\mathsf{b}^{*}_{(2),\max/\min}( \overline{\mathsf{M}},\mathcal{E}_{\overline{\mathsf{M}}_{\Gamma}})=\mathsf{b }^{*}_{(2),\max/\min}(\overline{\mathsf{M}}_{\Gamma}),\] \[\alpha_{*,\max/\min}(\overline{\mathsf{M}},\mathcal{E}_{\mathsf{ f}^{*}\overline{\mathsf{N}}_{\Gamma}})=\alpha_{*,\max/\min}(\overline{ \mathsf{M}},\mathcal{E}_{\overline{\mathsf{M}}_{\Gamma}})=\alpha_{*,\max/\min} (\overline{\mathsf{M}}_{\Gamma}),\] since \(\overline{\mathsf{M}}_{\Gamma}\) and \(\mathsf{f}^{*}\overline{\mathsf{N}}_{\Gamma}\) are isomorphic as Galois \(\Gamma\)-coverings. Proof of Corollary 1.3.: This follows immediately from the above proof and Corollary 4.4.
2306.09825
Classical gravitational anomalies of Liouville theory
We show that for classical Liouville field theory, diffeomorphism invariance, Weyl invariance and locality cannot hold together. This is due to a genuine Virasoro center, present in the theory, that leads to an energy\hyp{}momentum tensor with non-tensorial conformal transformations, in flat space, and with a non-vanishing trace, in curved space. Our focus is on a field-independent term, proportional to the square of the Weyl gauge field, $W_\mu W^\mu$, that makes the action Weyl-invariant and was disregarded in previous investigations of Weyl and conformal symmetry. We show this term to be related to the classical center of the Virasoro algebra. The mechanism uncovered here is a classical version of the quantum anomalous phenomenon: the generalization to curved space only allows to keep one of the two symmetries enjoyed by the flat space theory, either Lorentz (diffeomorphism) or conformal invariance.
Pavel Haman, Alfredo Iorio
2023-06-16T13:09:22Z
http://arxiv.org/abs/2306.09825v2
# Classical gravitational anomalies of Liouville theory ###### Abstract We show that for classical Liouville field theory, diffeomorphism invariance, Weyl invariance and locality cannot hold together. This is due to a genuine Virasoro center, present in the theory, that leads to an energy-momentum tensor with non-tensorial conformal transformations, in flat space, and with a non-vanishing trace, in curved space. Our focus is on a field-independent term, proportional to the square of the Weyl gauge field, \(W_{\mu}W^{\mu}\), that makes the action Weyl-invariant and was disregarded in previous investigations of Weyl and conformal symmetry. We show this term to be related to the classical center of the Virasoro algebra. Gravitational anomalies; diffeomorphic invariance; conformal symmetry; classical Virasoro algebra; Liouville field theory. Liouville field theory, with flat space action1 Footnote 1: Here we use \(\eta_{\mu\nu}=\mathrm{diag}(+1,-1)\). \[A_{ L}[\Phi]=\int\mathrm{d}^{2}x\left(\frac{1}{2}\partial_{\mu}\Phi\partial^{ \mu}\Phi-\frac{m^{2}}{\beta^{2}}e^{\beta\Phi}\right), \tag{1}\] is an exactly solvable two-dimensional model that enjoys a prominent role in many fields of theoretical and mathematical investigation. Among these are the geometry of surfaces [1], two-dimensional (quantum) gravity, see, e.g., [2] and [3], string theory, see, e.g., [4], conformal field theories, such as the Wess-Zumino-Witten and the Toda models, see, e.g., [5], and therefore the AdS/CFT correspondence [6]. It is then of great importance to know its symmetries in full detail, already at the classical level. In particular, Liouville theory is known to enjoy both scale and full (global) conformal symmetries in flat space, hence it belongs to the cases studied in [7]. There it is assumed that Weyl and diffeomorphism invariances hold together. Even though full (global) conformal symmetry is known to be in place, in a more recent work [8] it is conjectured that Liouville theory might not be made both diffeomorphic and Weyl invariant, evoking a generic "classical anomaly" as the reason for that. In this letter we prove that conjecture and provide explicit formulae for such classical gravitational anomalies. We leave to a longer forthcoming paper [9] the more detailed discussion on how such classical anomalies arise, in general and for Liouville. Let us start by considering the Liouville action on a curved background \[\mathcal{A}_{ L}[\Phi]=\int\mathrm{d}^{2}x\,\sqrt{-g}\!\left(\frac{1}{2}g^{\mu\nu} \nabla_{\mu}\Phi\nabla_{\nu}\Phi-\frac{m^{2}}{\beta^{2}}e^{\beta\Phi}+\frac{1 }{\beta}R\Phi\right), \tag{2}\] routinely employed to obtain the energy-momentum tensor (EMT) \[T_{ L}^{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta\mathcal{A}_{ L}}{\delta g_{\mu\nu}}=\nabla^{\mu}\Phi\nabla^{\nu}\Phi-g^{\mu\nu}\!\left(\frac{1}{2}g^ {\alpha\beta}\nabla_{\alpha}\Phi\nabla_{\beta}\Phi-\frac{m^{2}}{\beta^{2}}e^{ \beta\Phi}\right)+\frac{2}{\beta}(g^{\mu\nu}\nabla_{\rho}\nabla^{\rho}-\nabla^ {\mu}\nabla^{\nu})\Phi\,, \tag{3}\] that _on-shell_ gives \[T_{ L}{}^{\mu}{}_{\mu}=\frac{2}{\beta^{2}}R\,, \tag{4}\] ensuring a zero trace in the flat limit, but _not Weyl invariance_, for the curved background. 
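For the reader's convenience, here is the short computation behind (4), using only (2) and (3) with the conventions above: varying (2) with respect to \(\Phi\) gives the curved Liouville equation and, since \(g_{\mu\nu}g^{\mu\nu}=2\) in two dimensions, the kinetic terms cancel in the trace of (3), leaving \[\nabla_{\rho}\nabla^{\rho}\Phi+\frac{m^{2}}{\beta}e^{\beta\Phi}-\frac{1}{\beta}R=0\,,\qquad T_{ L}{}^{\mu}{}_{\mu}=\frac{2m^{2}}{\beta^{2}}e^{\beta\Phi}+\frac{2}{\beta}\nabla_{\rho}\nabla^{\rho}\Phi=\frac{2}{\beta^{2}}R\,.\]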
To remedy it, this _ad hoc_ procedure could be traded for the one of [7], based on the Weyl-gauging of the curvilinear expression for the action (1) \[\mathcal{A}_{W}[\Phi,W_{\mu}]=\int\mathrm{d}^{2}x\,\sqrt{-g}\Bigg{(}\frac{1}{2} g^{\mu\nu}\nabla_{\mu}\Phi\nabla_{\nu}\Phi-\frac{m^{2}}{\beta^{2}}e^{\beta\Phi}+ \frac{2}{\beta}\Phi\nabla_{\mu}W^{\mu}+\frac{2}{\beta^{2}}g^{\mu\nu}W_{\mu}W_ {\nu}\Bigg{)}\,. \tag{5}\] Since under Weyl transformations, \(g_{\mu\nu}\to e^{2\omega}g_{\mu\nu}\) and \(\Phi\to\Phi-\frac{2}{\beta}\omega\), one has \(2\nabla_{\mu}W^{\mu}\xrightarrow{\mathrm{Weyl}}e^{-2\omega}(2\nabla_{\mu}W^{ \mu}-2\nabla_{\mu}\nabla^{\mu}\omega)\), this should be compared to \(R[g_{\mu\nu}]\xrightarrow{\mathrm{Weyl}}e^{-2\omega}(R[g_{\mu\nu}]-2\nabla_{ \mu}\nabla^{\mu}\omega)\) and the (Ricci gauged, in the language of [7]) action \[\mathcal{A}_{R}[\Phi]=\int\mathrm{d}^{2}x\,\sqrt{-g}\Bigg{(}\frac{1}{2}g^{\mu \nu}\nabla_{\mu}\Phi\nabla_{\nu}\Phi-\frac{m^{2}}{\beta^{2}}e^{\beta\Phi}+ \frac{1}{\beta}\Phi R+\frac{2}{\beta^{2}}g^{\mu\nu}W_{\mu}W_{\nu}\Bigg{)}\,, \tag{6}\] enjoys Weyl invariance, \(T_{R}{}^{\mu}{}_{\mu}=0\), provided \[2\nabla_{\mu}W^{\mu}=R\,, \tag{7}\] holds. Notice that, contrary to [7], we keep here the last, \(\Phi\)-independent term, which is precisely the one that ensures Weyl invariance. A solution of (7) can be found [8] using the Green's function \(K(x,y)\), such that \(\nabla_{x}^{2}K(x,y)=\frac{1}{\sqrt{-g(x)}}\delta^{(2)}(x-y)\). Assuming that \(W_{\mu}=\partial_{\mu}w\), with \(w\) transforming as \(w\xrightarrow{\mathrm{Weyl}}w-\omega\), the solution is \(w(x)=\frac{1}{2}\int\mathrm{d}^{2}y\,\sqrt{-g(y)}\,K(x,y)\,R(y)\). It follows that the extra term in the action proportional to \(W^{\mu}W_{\mu}\) is \[\int\mathrm{d}^{2}x\,W^{\mu}(x)W_{\mu}(x)=\frac{1}{4}\int\mathrm{d}^{2}x\, \mathrm{d}^{2}y\,\sqrt{-g(x)}R(x)K(x,y)\sqrt{-g(y)}R(y)\,, \tag{8}\] which is the well-known _Polyakov string effective action_ [4]. The EMT associated with the action (6), with the term (8), is traceless and covariantly conserved [4]. The price we pay is the evident nonlocality. A _local solution_ to (7) was found by Deser and Jackiw in [10] \[W^{\mu}_{DJ}=\frac{\varepsilon^{\mu\nu}}{2\sqrt{-g}}\Bigg{[}\frac{ \varepsilon^{\alpha\beta}}{\sqrt{-g}}\Gamma_{\beta\alpha\nu}+(\cosh\sigma-1) \partial_{\nu}\gamma+\partial_{\nu}r\Bigg{]}\,, \tag{9}\] where \(\varepsilon^{01}=+1\) is the Levi-Civita symbol and a "conformal" parametrization of the metric gives \(g_{++}/\sqrt{-g}=e^{\gamma}\sinh\sigma\), \(g_{+-}/\sqrt{-g}=\cosh\sigma\), \(g_{--}/\sqrt{-g}=e^{-\gamma}\sinh\sigma\), and, from there, \(\gamma=\ln\sqrt{g_{++}/g_{--}}\) (see Supplemental Material). The expression (9) includes the derivative of a generic Weyl scalar, \(r\), to take into account the invariance of (7) under \(W^{\mu}_{DJ}\to W^{\mu}_{DJ}+\frac{\varepsilon^{\mu\nu}}{2\sqrt{-g}} \partial_{\nu}r\). \(W^{\mu}_{DJ}\) enjoys proper Weyl transformations, \(g_{\mu\nu}W^{\nu}_{DJ}\xrightarrow{\mathrm{Weyl}}g_{\mu\nu}W^{\nu}_{DJ}- \partial_{\mu}\omega\,\), but it does not transform as a general (contravariant) vector under infinitesimal diffeomorphisms, \(x^{\mu}\to x^{\prime\mu}=x^{\mu}-f^{\mu}(x)\), \[W^{\prime\mu}_{DJ}(x^{\prime})=\,\frac{\partial x^{\prime\mu}}{\partial x^{ \nu}}W^{\nu}_{DJ}(x)+\frac{\varepsilon^{\mu\nu}}{2\sqrt{-g}}\partial_{\nu} \bigg{[}\Big{(}\partial_{-}-e^{-\gamma}\tanh\frac{\sigma}{2}\,\partial_{+} \Big{)}f^{-}-\Big{(}\partial_{+}-e^{\gamma}\tanh\frac{\sigma}{2}\,\partial_{- }\Big{)}f^{+}\bigg{]}\,. \tag{10}\]
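Since (7) underlies the entire construction, a quick symbolic sanity check may help the reader: for a conformally flat metric \(g_{\mu\nu}=e^{2\rho}\eta_{\mu\nu}\), the pure-gauge choice \(w=-\rho\) (consistent with \(w\xrightarrow{\mathrm{Weyl}}w-\omega\)) solves \(2\nabla_{\mu}W^{\mu}=R\). The following sympy sketch, restricted to this conformally flat case and with illustrative coordinate names, computes both sides of (7) from the metric:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
X = [t, x]
rho = sp.Function('rho')(t, x)
g = sp.exp(2*rho) * sp.diag(1, -1)      # conformally flat 2D metric, signature (+,-)
ginv = g.inv()
sq = sp.sqrt(-g.det())                   # sqrt(-g) = exp(2*rho)

def Gamma(a, b, c):                      # Christoffel symbols Gamma^a_{bc}
    return sum(ginv[a, d]*(sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                           - sp.diff(g[b, c], X[d])) for d in range(2))/2

def Ricci(b, c):                         # R_{bc} = d_a Gamma^a_{bc} - d_b Gamma^a_{ac} + ...
    out = 0
    for a in range(2):
        out += sp.diff(Gamma(a, b, c), X[a]) - sp.diff(Gamma(a, a, c), X[b])
        for d in range(2):
            out += Gamma(a, a, d)*Gamma(d, b, c) - Gamma(a, b, d)*Gamma(d, a, c)
    return out

R = sum(ginv[b, c]*Ricci(b, c) for b in range(2) for c in range(2))

w = -rho                                 # Weyl scalar: transforms as w -> w - omega
W = [sum(ginv[m, n]*sp.diff(w, X[n]) for n in range(2)) for m in range(2)]
divW = sum(sp.diff(sq*W[m], X[m]) for m in range(2))/sq

print(sp.simplify(2*divW - R))           # -> 0, i.e. 2 nabla_mu W^mu = R
```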
From (10) it follows that the term \(g_{\mu\nu}W^{\mu}_{DJ}W^{\nu}_{DJ}\) in (6), although it keeps Weyl invariance and locality of \(\mathcal{A}_{R}[\Phi]\), cannot be a world scalar, hence it breaks diffeomorphism invariance. To investigate and quantify such breaking, let us start with the contribution to the EMT coming from the extra term \[T^{\mu\nu}_{\text{extra}}\equiv-\frac{2}{\sqrt{-g}}\frac{\delta}{\delta g_{\mu\nu }}\int\text{d}^{2}x\,\frac{2}{\beta^{2}}\sqrt{-g}W^{\mu}W_{\mu}\,. \tag{11}\] In the Supplemental Material it is shown that \[\frac{\beta^{2}}{2}T^{\mu\nu}_{\text{extra}}=g^{\mu\nu}W_{\rho}W^{\rho}-2W^{\mu }W^{\nu}-Rg^{\mu\nu}+\nabla^{\mu}W^{\nu}+\nabla^{\nu}W^{\mu}+2\frac{\varepsilon ^{\alpha\beta}}{\sqrt{-g}}\partial_{\beta}W_{\alpha}[(\cosh\sigma-1)\Gamma^{ \mu\nu}+r^{\mu\nu}]\,, \tag{12}\] where \(W_{\rho}\equiv g_{\rho\lambda}W^{\lambda}\), \(2\Gamma^{\mu\nu}=\left(g_{--}g_{++}\right)^{-1/2}\begin{pmatrix}-\sinh\gamma& \cosh\gamma\\ \cosh\gamma&-\sinh\gamma\end{pmatrix}\) and \(r^{\mu\nu}\equiv\delta r/\delta g_{\mu\nu}\). One would like to compute \(\nabla_{\mu}T^{\mu\nu}_{\text{extra}}\) and compare it with known expressions of the _quantum_ gravitational anomalies, such as [11] \(\nabla_{\mu}T^{\mu}{}_{\nu}=\frac{1}{48\pi}\frac{\varepsilon^{\sigma\rho}}{2\sqrt{-g}}\partial_{\rho}\partial_{\lambda}\Gamma^{\lambda}{}_{\nu\sigma}\) or [12] \(\nabla_{\mu}T^{\mu}{}_{\nu}=\frac{1}{48\pi}\partial_{\nu}R\). Rather than attempting a direct computation, we take a simpler road. First, we move to isothermal light-cone coordinates, which always exist in two dimensions. There one has \(\hat{g}_{\pm\pm}(x)=e^{2\rho(x)}\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\), and we indicate with a hat all quantities evaluated there2. If we set \(r=0\) for a moment, we have Footnote 2: For this choice, \(\varepsilon^{\alpha\beta}\partial_{\beta}\hat{W}_{\alpha}=0\). \[\frac{\beta^{2}}{4}\hat{T}^{\mu\nu}_{\text{extra}}=(\hat{g}^{\mu\alpha} \partial_{\alpha}\rho)(\hat{g}^{\nu\beta}\partial_{\beta}\rho)-\frac{1}{2} \hat{g}^{\mu\nu}(\hat{g}^{\alpha\beta}\partial_{\alpha}\rho\partial_{\beta} \rho)+\left[\hat{g}^{\mu\nu}(\hat{g}^{\alpha\beta}\partial_{\alpha}\partial_{ \beta})-\hat{g}^{\mu\alpha}\hat{g}^{\nu\beta}\partial_{\alpha}\partial_{\beta} \right]\!\rho\,, \tag{13}\] and the computation becomes trivial, \(\hat{\nabla}_{\mu}\hat{T}^{\mu\nu}_{\text{extra}}=0\). Of course, this would not guarantee general covariance, until we have a frame-independent result (see later). On the other hand, including \(r\) in the computation gives \[\beta^{2}\hat{\nabla}_{\mu}\hat{T}^{\mu\nu}_{\text{extra}}=\frac{\varepsilon^ {\nu\mu}}{\sqrt{-\hat{g}}}\partial_{\mu}\Big{(}\hat{g}^{\alpha\beta}\partial_ {\alpha}\partial_{\beta}\hat{r}\Big{)}-2\hat{g}^{\alpha\beta}\partial_{\mu}( \hat{r}^{\mu\nu}\partial_{\alpha}\partial_{\beta}\hat{r})+2\partial_{\mu}\hat{ g}^{\alpha\beta}\hat{r}^{\mu\nu}\partial_{\alpha}\partial_{\beta}\hat{r}\,, \tag{14}\] and this expression, although it differs from the recalled anomalous quantum expressions [11; 12], is clearly nonzero in general. In this coordinate frame, \(\hat{T}^{\mu\nu}_{\text{extra}}\) not only guarantees Weyl invariance, through a traceless EMT, but also, for harmonic \(r\), \(\hat{\square}\hat{r}=0\), satisfies \[\hat{\nabla}_{\mu}\hat{T}^{\mu\nu}_{\text{extra}}\bigg{|}_{\hat{\square}\hat{r} =0}=0\,. \tag{15}\] As in the previous \(r=0\) case, this is not enough to guarantee general covariance. 
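The statement \(\hat{\nabla}_{\mu}\hat{T}^{\mu\nu}_{\text{extra}}=0\) for (13) can also be verified symbolically. The sketch below (a check of (13) only; the light-cone coordinates are named xp, xm for illustration) builds the isothermal metric \(\hat{g}_{+-}=e^{2\rho}\), assembles \((\beta^{2}/4)\hat{T}^{\mu\nu}_{\text{extra}}\), and confirms that its covariant divergence vanishes identically in \(\rho\):

```python
import sympy as sp

xp, xm = sp.symbols('xp xm', real=True)
X = [xp, xm]
rho = sp.Function('rho')(xp, xm)
g = sp.Matrix([[0, sp.exp(2*rho)], [sp.exp(2*rho), 0]])   # isothermal light-cone metric
ginv, sq = g.inv(), sp.sqrt(-g.det())                      # sqrt(-g) = exp(2*rho)

def Gam(a, b, c):   # Christoffel symbols Gamma^a_{bc}
    return sum(ginv[a, d]*(sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                           - sp.diff(g[b, c], X[d])) for d in range(2))/2

dr = [sp.diff(rho, v) for v in X]
box = sum(ginv[a, b]*sp.diff(rho, X[a], X[b]) for a in range(2) for b in range(2))
grad2 = sum(ginv[a, b]*dr[a]*dr[b] for a in range(2) for b in range(2))

# (beta^2/4) T^{mu nu}_extra of Eq. (13)
T = sp.zeros(2, 2)
for m in range(2):
    for n in range(2):
        gradm = sum(ginv[m, a]*dr[a] for a in range(2))
        gradn = sum(ginv[n, b]*dr[b] for b in range(2))
        hess = sum(ginv[m, a]*ginv[n, b]*sp.diff(rho, X[a], X[b])
                   for a in range(2) for b in range(2))
        T[m, n] = gradm*gradn - grad2*ginv[m, n]/2 + ginv[m, n]*box - hess

# covariant divergence nabla_mu T^{mu nu}
for n in range(2):
    div = sum(sp.diff(sq*T[m, n], X[m]) for m in range(2))/sq \
          + sum(Gam(n, m, l)*T[m, l] for m in range(2) for l in range(2))
    print(sp.simplify(div))   # -> 0 for both components
```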
We have no guarantee that (15) holds in all frames. We have to look for how much such divergence differs from a tensor, when we move away from the isothermal frame, \(\Delta\hat{\nabla}_{\mu}\hat{T}^{\mu\nu}_{\text{extra}}(x)\equiv\nabla^{\prime }_{\mu}T^{\prime\mu\nu}_{\text{extra}}(x^{\prime})-(\partial x^{\prime\nu}/ \partial x^{\rho})\hat{\nabla}_{\sigma}\hat{T}^{\sigma\rho}_{\text{extra}}(x)\). This has to be, at least partially, expressible in terms of \(\Delta\hat{W}^{\mu}(x)\equiv W^{\prime\mu}(x^{\prime})-(\partial x^{\prime\mu}/ \partial x^{\nu})\hat{W}^{\nu}(x)\). For infinitesimal diffeomorphisms, \(W^{\mu}\) transforms as (10) and, defining \(\Delta r(x)\equiv r^{\prime}(x^{\prime})-r(x)\), \[\Delta\hat{W}^{\mu}=\frac{\varepsilon^{\mu\nu}}{2\sqrt{-\hat{g}}}\partial_{\nu} \bigg{[}\varepsilon^{\alpha\beta}\partial_{\beta}\bigg{(}\frac{f_{\alpha}}{ \sqrt{-\hat{g}}}\bigg{)}+\Delta\hat{r}\bigg{]}\equiv\frac{\varepsilon^{\mu\nu} }{2\sqrt{-\hat{g}}}\partial_{\nu}\xi(r,f)\,. \tag{16}\] With these \[\beta^{2}\Delta\hat{\nabla}_{\mu}\hat{T}^{\mu\nu}_{\text{extra}}=\frac{ \varepsilon^{\nu\mu}}{\sqrt{-\hat{g}}}\partial_{\mu}\Big{[}\hat{g}^{\alpha\beta} \partial_{\alpha}\partial_{\beta}\xi(r,f)\Big{]}-2\hat{g}^{\alpha\beta}\partial_ {\mu}(\hat{r}^{\mu\nu}\partial_{\alpha}\partial_{\beta}\xi(r,f))+2\hat{r}^{\mu \nu}\partial_{\mu}\hat{g}^{\alpha\beta}\partial_{\alpha}\partial_{\beta}\xi(r,f)\,, \tag{17}\] that, for the choice (9), and for \(r=0\), eventually gives a compact expression \[\Delta\hat{\nabla}_{\mu}\hat{T}^{\mu\nu}_{\text{extra}}=\frac{1}{\beta^{2}}\, \frac{\varepsilon^{\nu\mu}}{\sqrt{-\hat{g}}}\partial_{\mu}\Big{[}\hat{g}^{ \alpha\beta}\partial_{\alpha}\partial_{\beta}\Big{(}\partial_{-}f^{-}- \partial_{+}f^{+}\Big{)}\Big{]}\,. \tag{18}\] This expression does not vanish for a general \(f^{\mu}\). _This proves the loss of diffeomorphism invariance in the Weyl invariant formulation of Liouville theory (6), with local solution (9)_. Quadratic transformations, \(f^{\mu}=a^{\mu}{}_{\alpha\beta}x^{\alpha}x^{\beta}+b^{\mu}{}_{\alpha}x^{ \alpha}+c^{\mu}\), which include Poincaré transformations, \(f^{\mu}=\omega^{\mu}{}_{\nu}x^{\nu}+c^{\mu}\), preserve the tensorial nature of \(\hat{\nabla}_{\mu}\hat{T}^{\mu\nu}_{\text{extra}}\) and so do conformal transformations, obeying \(\Box f^{\mu}=0\): indeed, in the flat limit \(\hat{g}^{\alpha\beta}\partial_{\alpha}\partial_{\beta}=2\partial_{+}\partial_{-}\), so the bracket in (18) contains only third derivatives of \(f^{\pm}\), which vanish in both cases. Therefore, for infinitesimal conformal and Poincaré transformations in flat space, the extra term, \(T^{\mu\nu}_{\text{extra}}\), is covariantly conserved, regardless of the choice of \(r\). In other words, \(T^{\mu\nu}_{\text{extra}}\) does not violate the symmetries of \(T^{\mu\nu}\) in the flat limit, which is the same conclusion as in [7]. To complete the proof that this lack of diffeomorphic invariance is indeed the classical version of the quantum anomaly, we need to relate it to a nontrivial center of the Virasoro algebra. To do so, let us first consider the flat limit of the EMT (3) \[\Theta_{\mu\nu}=\partial_{\mu}\Phi\partial_{\nu}\Phi-\eta_{\mu\nu}\!\left( \frac{1}{2}\partial_{\alpha}\Phi\partial^{\alpha}\Phi-\frac{m^{2}}{\beta^{2}} e^{\beta\Phi}\right)+\frac{2}{\beta}(\eta_{\mu\nu}\Box-\partial_{\mu}\partial_{ \nu})\Phi\,, \tag{19}\] that is traceless on-shell. The associated Noether charges, written in the light-cone frame3 Footnote 3: Light-cone coordinates are defined in Supplemental Material. 
Further details on the expression of these charges in this frame can be found in [9] \[Q^{\pm}[f]=\int\mathrm{d}x^{\pm}\,\Theta_{\pm\pm}f^{\pm}=\int\mathrm{d}x^{\pm }\left((\partial_{\pm}\Phi)^{2}-\frac{2}{\beta}\partial_{\pm}^{2}\Phi\right) f^{\pm}\,, \tag{20}\] through the Poisson brackets \(\left\{\Phi(x),\Phi(y)\right\}_{|_{x^{+}=y^{+}}}=-\frac{1}{4}\operatorname{ sgn}(x^{-}-y^{-})\) and \(\left\{\Phi(x),\Phi(y)\right\}_{|_{x^{-}=y^{-}}}=-\frac{1}{4}\operatorname{ sgn}(x^{+}-y^{+})\), generate the correct transformations \[\delta_{f}\Phi\equiv\left\{\Phi(x^{+},x^{-}),Q^{\pm}[f]\right\}=f^{\pm}(x^{ \pm})\partial_{\pm}\Phi(x^{+},x^{-})+\frac{1}{\beta}\partial_{\pm}f^{\pm}(x^ {\pm})\,. \tag{21}\] They are made of two terms, both necessary for the invariance of the flat action (1): the usual Lie derivative of the scalar field, \(f^{\alpha}\partial_{\alpha}\Phi\), and an affine term. It is easy to verify that these charges obey an algebra with a genuine central extension \[\left\{Q^{\pm}[f],Q^{\pm}[g]\right\}=Q^{\pm}[k]+\frac{1}{\beta^{2}}\Delta^{\pm }[f,g]\,, \tag{22}\] where \(k^{\mu}=f^{\nu}\partial_{\nu}g^{\mu}-g^{\nu}\partial_{\nu}f^{\mu}\) and \(\Delta^{\pm}[f,g]=\int\mathrm{d}x^{\pm}\,(g^{\pm}\partial_{\pm}^{3}f^{\pm}-f^ {\pm}\partial_{\pm}^{3}g^{\pm})\). By restricting to a periodic manifold with periodicity \(P\), \(x^{\pm}\sim x^{\pm}+P\), the generators can be decomposed as \[Q^{\pm}_{n}\equiv\frac{P}{2\pi}\int\mathrm{d}x^{\pm}\,\Theta_{\pm\pm}e^{i\frac {2\pi}{P}nx^{\pm}}=\frac{P}{2\pi}Q^{\pm}[e^{i\frac{2\pi}{P}nx^{\pm}}]\,, \tag{23}\] and the algebra (22) can be recast into the following form \[i\!\left\{Q^{\pm}_{n},Q^{\pm}_{m}\right\}=(n-m)Q^{\pm}_{n+m}+\frac{4\pi}{\beta ^{2}}n^{3}\delta_{n+m,0} \tag{24}\] which is just the Virasoro algebra with genuine central charge \[c=\frac{48\pi}{\beta^{2}}\,. \tag{25}\] It is the algebra of the flat Liouville EMT components that inevitably includes the genuine center (25) \[\left\{\Theta_{\pm\pm}(x),\Theta_{\pm\pm}(y)\right\}\bigg{|}_{x^{\mp}=y^{\mp}}= \Theta^{\prime}_{\pm\pm}(x)\delta(x^{\pm}-y^{\pm})+2\Theta_{\pm\pm}(x)\delta^{ \prime}(x^{\pm}-y^{\pm})-\frac{c}{24\pi}\delta^{\prime\prime\prime}(x^{\pm}-y ^{\pm})\,, \tag{26}\] and so do its transformations \[\delta_{f}\Theta_{\pm\pm}=f^{\pm}\partial_{\pm}\Theta_{\pm\pm}+2\Theta_{\pm \pm}\partial_{\pm}f^{\pm}-\frac{c}{24\pi}\partial_{\pm}^{3}f^{\pm}\,. \tag{27}\] This center is not there in the trace of the flat EMT (19), but it is proportional to the trace (4) of the curved space EMT (3). A deeper study of this, including the general framework for classically anomalous transformations, is carried out in [9]. Here we want to show that the above is indeed related to the extra term \(T^{\mu\nu}_{\rm extra}\) that, in curved space, preserves Weyl invariance but breaks diffeomorphic invariance. To do so, let us first rewrite (27) as the difference between the full transformation and its tensorial part \[\Delta\Theta_{\pm\pm}\equiv\delta_{f}\Theta_{\pm\pm}-f^{\pm}\partial_{\pm} \Theta_{\pm\pm}-2\Theta_{\pm\pm}\partial_{\pm}f^{\pm}=-\frac{c}{24\pi} \partial_{\pm}^{3}f^{\pm}\,, \tag{28}\] as we did earlier in the curved context. 
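Before proceeding, it may help to see the central term of (24) emerge explicitly from (22)-(23) (a direct substitution; nothing beyond the formulas above is used). With \(f^{\pm}=e^{i\frac{2\pi}{P}nx^{\pm}}\) and \(g^{\pm}=e^{i\frac{2\pi}{P}mx^{\pm}}\), \[\Delta^{\pm}[f,g]=\left(i\frac{2\pi}{P}\right)^{3}(n^{3}-m^{3})\int_{0}^{P}\mathrm{d}x^{\pm}\,e^{i\frac{2\pi}{P}(n+m)x^{\pm}}=-\frac{16i\pi^{3}}{P^{2}}\,n^{3}\,\delta_{n+m,0}\,,\] so that \(i\{Q^{\pm}_{n},Q^{\pm}_{m}\}\) acquires the central term \(i\left(\frac{P}{2\pi}\right)^{2}\frac{1}{\beta^{2}}\Delta^{\pm}[f,g]=\frac{4\pi}{\beta^{2}}n^{3}\delta_{n+m,0}\), as in (24); comparing with the \(\frac{c}{12}n^{3}\delta_{n+m,0}\) normalization used here gives precisely (25). 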
We then simply notice that, for the infinitesimal diffeomorphism \(x^{\mu}\to x^{\mu}-f^{\mu}(x)\), the non-tensorial transformation of \(T^{\mu\nu}_{\rm extra}\) is \[\beta^{2}\Delta\hat{T}^{\mu\nu}_{\rm extra}(x)=\partial_{\alpha}\partial_{ \beta}\xi(r,f)\Bigg{(}\hat{g}^{\mu\alpha}\frac{\varepsilon^{\nu\beta}}{\sqrt{- \hat{g}}}+\hat{g}^{\nu\alpha}\frac{\varepsilon^{\mu\beta}}{\sqrt{-\hat{g}}} \Bigg{)}+2\hat{g}^{\mu\nu}\frac{\varepsilon^{\alpha\beta}}{\sqrt{-\hat{g}}} \partial_{\alpha}\rho\partial_{\beta}\xi(r,f)\,, \tag{29}\] where the same notation as in (16) and (17) has been used. Assuming conformal diffeomorphisms and taking the flat limit we have \[\Delta\hat{T}^{\pm\pm}_{\rm extra}(x)\bigg{|}_{\rho\to 0}=-\frac{2}{\beta^{2}} \,\partial_{\mp}^{3}f^{\mp}=\Delta\Theta_{\mp\mp}\,, \tag{30}\] which is exactly4 (28) with \(c\) given by (25). Footnote 4: Due to the signature of the light-cone metric, \(\Theta_{\mp\mp}=\Theta^{\pm\pm}\). See Supplemental Material. This center was removed from the trace (4) but re-emerged in (30). We have then proved that, also for _classical_ Liouville theory, lack of Weyl invariance or of diffeomorphism invariance is related to the Virasoro center, as in the quantum case. This gives a precise mathematical meaning to what we are now entitled to call "classical gravitational anomalies". Whether this is possible for more general classical systems is an important open question. Another direction for further research that we are considering is the connection of such anomalous transformations of the EMT with "classical Unruh and Hawking effects". _Acknowledgments._ We gladly acknowledge support from Charles University Research Center (UNCE/SCI/013).
2310.15532
Role of sea quarks in the nucleon transverse spin
We present a phenomenological extraction of transversity distribution functions and Collins fragmentation functions by simultaneously fitting to semi-inclusive deep inelastic scattering and electron-positron annihilation data. The analysis is performed within the transverse momentum dependent factorization formalism, and sea quark transversity distributions are taken into account for the first time. We find the $\bar u$ quark favors a negative transversity distribution while that of the $\bar d$ quark is consistent with zero according to the current accuracy. In addition, based on a combined analysis of world data and simulated data, we quantitatively demonstrate the impact of the proposed Electron-ion Collider in China on precise determinations of the transversity distributions, especially for sea quarks, and the Collins fragmentation functions.
Chunhua Zeng, Hongxin Dong, Tianbo Liu, Peng Sun, Yuxiang Zhao
2023-10-24T05:36:53Z
http://arxiv.org/abs/2310.15532v1
# Role of sea quarks in the nucleon transverse spin ###### Abstract We present a phenomenological extraction of transversity distribution functions and Collins fragmentation functions by simultaneously fitting to semi-inclusive deep inelastic scattering and electron-positron annihilation data. The analysis is performed within the transverse momentum dependent factorization formalism, and sea quark transversity distributions are taken into account for the first time. We find the \(\bar{u}\) quark favors a negative transversity distribution while that of the \(\bar{d}\) quark is consistent with zero according to the current accuracy. In addition, based on a combined analysis of world data and simulated data, we quantitatively demonstrate the impact of the proposed Electron-ion Collider in China on precise determinations of the transversity distributions, especially for sea quarks, and the Collins fragmentation functions. ## I Introduction How the nucleon is built up with quarks and gluons, the fundamental degrees of freedom of quantum chromodynamics (QCD), is one of the most important questions in modern hadronic physics. Although color confinement and the nonperturbative feature of the strong interaction at hadronic scales make it a challenging problem, QCD factorization is established to connect the quarks and gluons that participate in high-energy scatterings at sub-femtometer scales and the hadrons observed by advanced detectors in experiments. In this framework, the cross section is approximated as a convolution of perturbatively calculable short-distance scattering off partons and universal long-distance functions [1; 2]. Therefore, it provides an approach to extract the partonic structures of the nucleon through various experimental measurements. The spin, as a fundamental quantity of the nucleon, plays an important role in unraveling its internal structures and then in understanding the properties of the strong interaction. For instance, the so-called _proton spin crisis_ arose from the measurement of longitudinally polarized deep inelastic scattering (DIS) [3; 4] and is still an active frontier after more than three decades. As an analog to the helicity distribution, which can be interpreted as the density of longitudinally polarized quarks in a longitudinally polarized nucleon, the transversity distribution describes the net density of transversely polarized quarks in a transversely polarized proton. The integral of the transversity distribution equals the tensor charge, which characterizes the coupling to a tensor current. As the matrix element of a local tensor current operator, it has been calculated in lattice QCD with high accuracy [5; 6; 7; 8; 9; 10; 11] and is often referred to as a benchmark. In addition, a precise determination of the nucleon tensor charge will also shed light on the search for new physics beyond the standard model [12; 13]. The transversity distribution has both collinear and transverse momentum dependent (TMD) definitions. Since it is a chiral-odd quantity [14], its contribution to inclusive DIS is highly suppressed by powers of \(m/Q\), where \(m\) represents the quark mass and \(Q\) is the virtuality of the photon exchanged between the scattered lepton and the nucleon. A practical way to access the transversity distribution is by coupling with another chiral-odd quantity, either a fragmentation function (FF) in the semi-inclusive DIS (SIDIS) process [15; 16] or a distribution function in hadron-hadron collisions [17; 18; 19]. 
In the last two decades, many efforts have been made by HERMES [20], COMPASS [21; 22], and Jefferson Lab (JLab) [23; 24] via measurements of the SIDIS process on transversely polarized targets. At low transverse momentum of the produced hadron, a target transverse single spin asymmetry (SSA), known as the Collins asymmetry, can be expressed as the convolution of the transversity distribution and the Collins FF within the TMD factorization. The Collins FF, which describes a transversely polarized quark fragmenting to an unpolarized hadron, also leads to an azimuthal asymmetry in the semi-inclusive \(e^{+}e^{-}\) annihilation (SIA) process, and such an asymmetry has been measured by the BELLE [25], BABAR [26; 27], and BESIII [28] collaborations. Therefore, the transversity distribution as well as the tensor charge can be determined through a simultaneous analysis of the Collins asymmetries in SIDIS and SIA processes. We note that one can alternatively work in the collinear factorization to extract the transversity distribution via dihadron productions [29; 30; 31; 32; 33]. Restricted to the TMD framework, many global analyses were performed in recent years to extract the transversity distribution with or without the TMD evolution effect [34; 35; 36; 37; 38]. Since quark transversity distributions do not mix with gluons in the evolution, the sea quark transversity distributions were usually assumed to be zero in these analyses. This assumption might be reasonable in the exploration era, but it should eventually be tested by experiments, especially when high-precision data become available at future facilities. After the COMPASS data taking with a transversely polarized deuteron target in the 2022-2023 run, the next generation of high-precision measurements will be the multi-hall SIDIS programs at the 12-GeV upgraded JLab and future electron-ion colliders. The JLab experiments will mainly cover the large-\(x\) region with relatively low \(Q^{2}\). The electron-ion collider (EIC) to be built at the Brookhaven National Laboratory (BNL) [39; 40] will provide moderate and large \(x\) coverage with high \(Q^{2}\). Meanwhile, it can also reach small \(x\) values down to about \(10^{-4}\). The electron-ion collider in China (EicC) [41] is proposed to deliver a 3.5 GeV polarized electron beam colliding with a 20 GeV polarized proton beam or a 40 GeV polarized \({}^{3}\)He beam, as well as a series of unpolarized ion beams, with a designed instantaneous luminosity of about \(2\times 10^{33}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}\). Its kinematic coverage will be complementary to the experiments at JLab and the EIC at BNL. In this paper, we perform a global analysis of the Collins asymmetries in SIDIS and SIA measurements within the TMD factorization to extract the transversity distribution functions and the Collins fragmentation functions. As will be shown, there is a hint of a negative \(\bar{u}\) transversity distribution, about two standard deviations away from zero, while the \(\bar{d}\) transversity distribution is consistent with zero according to the current accuracy from existing world data. Furthermore, we quantitatively study the potential improvement from the EicC, which is expected to have a significant impact on the measurement of sea quark distributions. The remainder of this paper is organized as follows. In Sec. II, we briefly summarize the theoretical framework for the extraction of transversity distribution functions and Collins FFs from SIDIS and SIA data, leaving some detailed formulas in the Appendix. In Sec. 
III, we present the global analysis of world data, followed by an impact study of the EicC projected pseudodata in Sec. IV. A summary is provided in Sec. V. ## II Theoretical formalism In this section, the asymmetries originating from transversity TMDs and Collins FFs in SIDIS and SIA processes will be briefly reviewed, including the TMD evolution formalism to be adopted in the analysis. ### Collins asymmetry in SIDIS The SIDIS process is \[e(l)+N(P)\to e(l^{\prime})+h(P_{h})+X, \tag{1}\] where \(e\) denotes the incoming and outgoing lepton, \(N\) is the nucleon, and \(h\) is the detected final-state hadron. The four-momenta are given in the parentheses. Some commonly used kinematic variables are defined as \[x=\frac{Q^{2}}{2P\cdot q},\quad y=\frac{P\cdot q}{P\cdot l},\quad z =\frac{P\cdot P_{h}}{P\cdot q},\quad\gamma=\frac{2xM}{Q}, \tag{2}\] where \(Q^{2}=-q^{2}=-(l-l^{\prime})^{2}\) is the squared four-momentum transfer and \(M\) is the nucleon mass. Taking the one-photon exchange approximation, we adopt the virtual photon-nucleon frame, as illustrated in Fig. 1, and for convenience introduce the transverse metric \[g_{\perp}^{\mu\nu}=g^{\mu\nu}-\frac{q^{\mu}P^{\nu}+P^{\mu}q^{ \nu}}{P\cdot q\,(1+\gamma^{2})}+\frac{\gamma^{2}}{1+\gamma^{2}}\left(\frac{q^ {\mu}q^{\nu}}{Q^{2}}-\frac{P^{\mu}P^{\nu}}{M^{2}}\right), \tag{3}\] and the transverse antisymmetric tensor \[\epsilon_{\perp}^{\mu\nu}=\epsilon^{\mu\nu\rho\sigma}\frac{P_{ \rho}q_{\sigma}}{P\cdot q\sqrt{1+\gamma^{2}}}, \tag{4}\] with the convention \(\epsilon^{0123}=1\). Then the transverse momenta \(P_{h\perp}\) and \(l_{\perp}\) and the azimuthal angles \(\phi_{h}\) and \(\phi_{S}\) can be expressed in Lorentz-invariant form as \[P_{h\perp} =\sqrt{-g_{\perp}^{\mu\nu}P_{h\mu}P_{h\nu}}, \tag{5}\] \[l_{\perp} =\sqrt{-g_{\perp}^{\mu\nu}l_{\mu}l_{\nu}},\] (6) \[\cos\phi_{h} =-\frac{l_{\mu}P_{\nu\rho}g_{\perp}^{\mu\nu}}{l_{\perp}P_{h \perp}},\quad\sin\phi_{h}=-\frac{l_{\mu}P_{h\nu}\epsilon_{\perp}^{\mu\nu}}{l_{ \perp}P_{h\perp}},\] (7) \[\cos\phi_{S} =-\frac{l_{\mu}S_{\perp\nu}g_{\perp}^{\mu\nu}}{l_{\perp}S_{\perp }},\quad\sin\phi_{S}=-\frac{l_{\mu}S_{\perp\nu}\epsilon_{\perp}^{\mu\nu}}{l_{ \perp}S_{\perp}}, \tag{8}\] which are known as the Trento conventions [42]. 
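For readers who want to evaluate these invariant definitions directly, the following numpy sketch (all four-momenta, masses, and angles are illustrative values, not data) builds \(g_{\perp}^{\mu\nu}\) and \(\epsilon_{\perp}^{\mu\nu}\) from Eqs. (3)-(4) for an arbitrary event and checks that Eqs. (5)-(7) return a consistent angle, \(\cos^{2}\phi_{h}+\sin^{2}\phi_{h}=1\):

```python
import numpy as np

mink = np.diag([1.0, -1.0, -1.0, -1.0])          # metric g_{mu nu}, order (t,x,y,z)
low = lambda v: mink @ v                          # lower an index
dot = lambda u, v: u @ mink @ v

M, E = 0.938, 10.0                                # illustrative proton mass, beam energy (GeV)
P  = np.array([M, 0.0, 0.0, 0.0])                 # target at rest
l  = np.array([E, 0.0, 0.0, E])                   # incoming lepton along z
Ep, th = 6.0, 0.3                                 # illustrative scattered-lepton kinematics
lp = np.array([Ep, Ep*np.sin(th), 0.0, Ep*np.cos(th)])
Ph = np.array([2.0, 0.5, 0.3, 1.8])               # an illustrative produced hadron

q   = l - lp
Q2  = -dot(q, q)
gam = 2.0*(Q2/(2.0*dot(P, q)))*M/np.sqrt(Q2)      # gamma = 2 x M / Q, Eq. (2)

# transverse metric and antisymmetric tensor, Eqs. (3)-(4)
gperp = (mink - (np.outer(q, P) + np.outer(P, q))/(dot(P, q)*(1.0 + gam**2))
         + gam**2/(1.0 + gam**2)*(np.outer(q, q)/Q2 - np.outer(P, P)/M**2))

eps = np.zeros((4, 4, 4, 4))                      # epsilon^{0123} = +1
for idx in np.ndindex(4, 4, 4, 4):
    if len(set(idx)) == 4:
        eps[idx] = np.linalg.det(np.eye(4)[list(idx)])   # sign of the permutation
epsperp = np.einsum('mnrs,r,s->mn', eps, low(P), low(q))/(dot(P, q)*np.sqrt(1.0 + gam**2))

lT  = np.sqrt(-np.einsum('mn,m,n->', gperp, low(l),  low(l)))    # Eq. (6)
PhT = np.sqrt(-np.einsum('mn,m,n->', gperp, low(Ph), low(Ph)))   # Eq. (5)
cph = -np.einsum('mn,m,n->', gperp,   low(l), low(Ph))/(lT*PhT)  # Eq. (7)
sph = -np.einsum('mn,m,n->', epsperp, low(l), low(Ph))/(lT*PhT)
print(cph**2 + sph**2)                            # -> 1.0, a consistency check
```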
The differential cross section can be written as \[\frac{d\sigma}{dxdydzd\phi_{h}d\phi_{S}dP_{h\perp}^{2}}=\frac{ \alpha^{2}}{xyQ^{2}}\frac{y^{2}}{2(1-\epsilon)}(1+\frac{\gamma^{2}}{2x})\{\] \[F_{UU,T}+\epsilon F_{UU,L}+\sqrt{2\epsilon(1+\epsilon)}\cos( \phi_{h})F_{UU}^{\cos\phi_{h}}\] \[+\epsilon\cos(2\phi_{h})F_{UU}^{\cos 2\phi_{h}}+\lambda_{e}\sqrt{2\epsilon(1-\epsilon)}\sin(\phi_{h})F_{LU}^{\sin\phi_{h}}\] \[+S_{||}[\sqrt{2\epsilon(1+\epsilon)}\sin(\phi_{h})F_{UL}^{\sin\phi_{h} }+\epsilon\sin(2\phi_{h})F_{UL}^{\sin 2\phi_{h}}]\] \[+S_{||}\lambda_{e}[\sqrt{1-\epsilon^{2}}F_{LL}+\sqrt{2\epsilon(1- \epsilon)}\cos(\phi_{h})F_{LL}^{\cos\phi_{h}}]\] \[+|S_{\perp}|[\sin(\phi_{h}-\phi_{S})(F_{UT,T}^{\sin(\phi_{h}-\phi_ {S})}+\epsilon F_{UT,L}^{\sin(\phi_{h}-\phi_{S})})\] \[+\epsilon\sin(\phi_{h}+\phi_{S})F_{UT}^{\sin(\phi_{h}+\phi_{S})}+ \epsilon\sin(3\phi_{h}-\phi_{S})F_{UT}^{\sin(3\phi_{h}-\phi_{S})}\] \[+\sqrt{2\epsilon(1+\epsilon)}\sin(\phi_{S})F_{UT}^{\sin\phi_{S}}\] \[+\sqrt{2\epsilon(1+\epsilon)}\sin(2\phi_{h}-\phi_{S})F_{UT}^{ \sin(2\phi_{h}-\phi_{S})}]\] \[+|S_{\perp}|\lambda_{e}[\sqrt{1-\epsilon^{2}}\cos(\phi_{h}-\phi_ {S})F_{LT}^{\cos(\phi_{h}-\phi_{S})}\] \[+\sqrt{2\epsilon(1-\epsilon)}\cos(\phi_{S})F_{LT}^{\cos\phi_{S}}\] \[+\sqrt{2\epsilon(1-\epsilon)}\cos(2\phi_{h}-\phi_{S})F_{LT}^{ \cos(2\phi_{h}-\phi_{S})}]\}, \tag{9}\] where \(\alpha\) is the electromagnetic fine structure constant, \(\lambda_{e}\) is the lepton helicity, \(S_{||(\perp)}\) is the nucleon polarization, and the structure functions \(F\) correspond to the azimuthal modulations indicated by their superscripts and the polarization configurations indicated by their subscripts. The third subscript appearing in some terms represents the polarization of the virtual photon, and the ratio of the longitudinal and the transverse photon flux is given by \[\epsilon=\frac{1-y-\frac{1}{4}\gamma^{2}y^{2}}{1-y+\frac{1}{2}y^{2}+\frac{1}{4 }\gamma^{2}y^{2}}. \tag{10}\] For an unpolarized lepton beam scattered off a transversely polarized nucleon, the SSA can be measured by flipping the transverse polarization of the nucleon as \[A_{UT}=\frac{1}{|S_{\perp}|}\frac{d\sigma(\phi_{h},\phi_{S})-d\sigma(\phi_{h},\phi_{S}+\pi)}{d\sigma(\phi_{h},\phi_{S})+d\sigma(\phi_{h},\phi_{S}+\pi)}= \frac{\sigma^{-}_{UT}}{\sigma^{+}_{UT}}, \tag{11}\] where \[\sigma^{+}_{UT}= F_{UU,T}+\epsilon F_{UU,L}+\sqrt{2\epsilon(1+\epsilon)}\cos( \phi_{h})F_{UU}^{\cos\phi_{h}}\] \[+\epsilon\cos(2\phi_{h})F_{UU}^{\cos 2\phi_{h}}, \tag{12}\] \[\sigma^{-}_{UT}= \sin(\phi_{h}-\phi_{S})(F_{UT,T}^{\sin(\phi_{h}-\phi_{S})}+ \epsilon F_{UT,L}^{\sin(\phi_{h}-\phi_{S})})\] \[+\epsilon\sin(\phi_{h}+\phi_{S})F_{UT}^{\sin(\phi_{h}+\phi_{S})}\] \[+\epsilon\sin(3\phi_{h}-\phi_{S})F_{UT}^{\sin(3\phi_{h}-\phi_{S})}\] \[+\sqrt{2\epsilon(1+\epsilon)}\sin(\phi_{S})F_{UT}^{\sin\phi_{S}}\] \[+\sqrt{2\epsilon(1+\epsilon)}\sin(2\phi_{h}-\phi_{S})F_{UT}^{\sin( 2\phi_{h}-\phi_{S})}. \tag{13}\] After separating different azimuthal modulations, one can extract the Collins asymmetry as \[\epsilon A_{UT}^{\sin(\phi_{h}+\phi_{S})}= \frac{2\int d\phi_{S}d\phi_{h}\sin(\phi_{h}+\phi_{S})\sigma^{-}_{ UT}}{\int d\phi_{S}d\phi_{h}\sigma^{+}_{UT}}\] \[= \frac{\epsilon F_{UT}^{\sin(\phi_{h}+\phi_{S})}}{F_{UU,T}+\epsilon F _{UU,L}}. \tag{14}\] In this work, we neglect the term \(F_{UU,L}\) and thus \[A_{UT}^{\sin(\phi_{h}+\phi_{S})}=\frac{F_{UT}^{\sin(\phi_{h}+\phi_{S})}}{F_{UU,T}}. \tag{15}\]
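The projection in (14) works because the five \(\phi\) modulations in \(\sigma_{UT}^{-}\) are mutually orthogonal over a full \(2\pi\times 2\pi\) angular coverage. A short sympy sketch (with generic amplitudes \(a_{0},\dots,a_{4}\) standing in for the structure function prefactors in (13)) makes this explicit:

```python
import sympy as sp

ph, ps = sp.symbols('phi_h phi_S', real=True)
a = sp.symbols('a0:5', real=True)     # amplitudes of the five UT modulations in Eq. (13)
sigma_m = (a[0]*sp.sin(ph - ps) + a[1]*sp.sin(ph + ps) + a[2]*sp.sin(3*ph - ps)
           + a[3]*sp.sin(ps) + a[4]*sp.sin(2*ph - ps))

num = sp.integrate(sp.integrate(sp.sin(ph + ps)*sigma_m,
                                (ph, 0, 2*sp.pi)), (ps, 0, 2*sp.pi))
print(sp.simplify(num))               # -> 2*pi**2*a1: only sin(phi_h + phi_S) survives
```

Dividing by \(\int d\phi_{S}d\phi_{h}\,\sigma_{UT}^{+}=(2\pi)^{2}(F_{UU,T}+\epsilon F_{UU,L})\) (the \(\cos\phi_{h}\) and \(\cos 2\phi_{h}\) terms integrate to zero) and multiplying by 2 reproduces (14).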
To implement the TMD evolution, we perform a transverse Fourier transform, and the \(P_{h\perp}\)-dependent structure functions can be expressed in terms of distribution and fragmentation functions in \(b\)-space as \[F_{UU,T}=\mathcal{C}[f_{1}D_{1}]\] \[=x\sum_{q}\frac{e_{q}^{2}}{2\pi}\int_{0}^{\infty}bJ_{0}(bP_{h \perp}/z)f_{1,q\gets N}(x,b)\] \[\qquad\times D_{1,q\to h}(z,b)db, \tag{16}\] \[F_{UT}^{\sin(\phi_{h}+\phi_{S})}=\mathcal{C}\Big{[}\frac{\hat{ \mathbf{h}}\cdot\mathbf{p}_{T}}{zM_{h}}h_{1}H_{1}^{\perp}\Big{]}\] \[=x\sum_{q}\frac{M_{h}e_{q}^{2}}{2\pi}\int_{0}^{\infty}b^{2}J_{1}( bP_{h\perp}/z)h_{1,q\gets N}(x,b)\] \[\qquad\times H_{1,q\to h}^{\perp}(z,b)db, \tag{17}\] where \(f_{1}\) is the unpolarized distribution function, \(D_{1}\) is the unpolarized FF, \(h_{1}\) is the transversity distribution, and \(H_{1}^{\perp}\) is the Collins FF, with \(q\) running over all active quark flavors: \(u\), \(d\), \(s\), \(\bar{u}\), \(\bar{d}\), and \(\bar{s}\), and \(e_{q}\) being the quark charge. The transverse momentum convolution, denoted by \(\mathcal{C}[\cdots]\), is defined as \[\mathcal{C}[wfD]= x\sum_{q}e_{q}^{2}\int d^{2}\mathbf{p}_{T}d^{2}\mathbf{k}_{\perp}\delta^{(2 )}(\mathbf{p}_{T}+z\mathbf{k}_{\perp}-\mathbf{P}_{h\perp})\] \[\times w(\mathbf{p}_{T},\mathbf{k}_{\perp})f_{q\gets N}(x,k_{\perp})D_ {q\to h}(z,p_{T}). \tag{18}\] Here \(b\) is the Fourier conjugate variable to the transverse momentum of the parton, \(\mathbf{k}_{\perp}\) is the transverse momentum of the quark inside the nucleon, \(\mathbf{p}_{T}\) is the transverse momentum of the final-state hadron with respect to the parent quark momentum, and \(\hat{\mathbf{h}}=\mathbf{P}_{h\perp}/|\mathbf{P}_{h\perp}|\) represents the transverse direction of the final-state hadron. More details of these expressions are given in Appendices B and C. Figure 1: The Trento convention for the definition of SIDIS kinematic variables. ### Collins asymmetries in SIA Considering the SIA process, \[e^{+}(l_{e^{+}})+e^{-}(l_{e^{-}})\to h_{1}(P_{h1})+h_{2}(P_{h2})+X, \tag{19}\] one can introduce the variables \(z_{i}=2P_{hi}\cdot q/Q^{2}\) (\(i=1,2\)) with \(q=l_{e^{+}}+l_{e^{-}}\) and \(Q^{2}=q^{2}\). Within the one-photon exchange approximation, the differential cross section can be written in terms of the structure functions \(F_{uu}^{h_{1}h_{2}}\) and \(F_{Collins}^{h_{1}h_{2}}\) as \[\frac{d^{5}\sigma}{dz_{1}dz_{2}d^{2}\mathbf{P}_{h\perp}d\cos\theta}= \frac{3\pi\alpha^{2}z_{1}^{2}z_{2}^{2}}{2Q^{2}}\Big{[}(1+\cos^{2} \theta)F_{uu}^{h_{1}h_{2}}\] \[+\sin^{2}\theta\cos(2\phi_{0})F_{Collins}^{h_{1}h_{2}}\Big{]}. \tag{20}\] As illustrated in Fig. 2, \(\theta\) is the polar angle between the hadron \(h_{2}\) and the beam of \(e^{+}e^{-}\), \(\phi_{0}\) is the azimuthal angle from the lepton plane to the hadron plane, and \(P_{h\perp}\) is the transverse momentum of hadron \(h_{1}\). Figure 2: The reference frame for the SIA process. 
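Both the \(J_{0}\) transform in (16) above and its SIA analogue in (21) below reduce to a closed form if one assumes Gaussian transverse-momentum dependence, i.e. a \(b\)-space product behaving like \(e^{-ab^{2}}\). The following sketch (the width \(a\) and momentum are illustrative, and the Gaussian ansatz is an assumption for this check, not the parametrization used in the fit) verifies the underlying Hankel identity numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

a = 0.125   # illustrative combined Gaussian width (GeV^2): f_1(x,b) D_1(z,b) ~ e^{-a b^2}
P = 0.4     # illustrative value of P_{h perp}/z (GeV)

num, _ = quad(lambda b: b*j0(b*P)*np.exp(-a*b**2), 0.0, np.inf, limit=200)
ana = np.exp(-P**2/(4.0*a))/(2.0*a)   # closed form of the J0 Hankel transform of a Gaussian
print(num, ana)                        # the two numbers agree
```

Hence a Gaussian in \(b\)-space yields a Gaussian \(P_{h\perp}\) spectrum for \(F_{UU,T}\); the \(J_{1}\) transform in (17) can be treated analogously.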
Figure 2: The reference frame for the SIA process.

When the two hadrons are nearly back-to-back, where the TMD factorization is appropriate, one can express the structure functions \(F_{uu}^{h_{1}h_{2}}\) and \(F_{Collins}^{h_{1}h_{2}}\) in terms of TMD FFs as \[F_{uu}^{h_{1}h_{2}} =\mathcal{C}[D_{1}D_{1}]=\frac{1}{2\pi}\sum_{q}e_{q}^{2}\int J_{0}(P_{h\perp}b/z_{1})\] \[\times D_{1,q\to h_{1}}(z_{1},b)D_{1,\bar{q}\to h_{2}}(z_{2},b)bdb, \tag{21}\] \[F_{Collins}^{h_{1}h_{2}} =\mathcal{C}[\frac{2(\hat{\mathbf{h}}\cdot\mathbf{p}_{1T})(\hat{\mathbf{h}}\cdot\mathbf{p}_{2T})-\mathbf{p}_{1T}\cdot\mathbf{p}_{2T}}{z_{1}z_{2}M_{h_{1}}M_{h_{2}}}H_{1}^{\perp}H_{1}^{\perp}]\] \[=\frac{M_{h_{1}}M_{h_{2}}}{2\pi}\sum_{q}e_{q}^{2}\int J_{2}(P_{h\perp}b/z_{1})\] \[\times H_{1,q\to h_{1}}^{\perp}(z_{1},b)H_{1,\bar{q}\to h_{2}}^{\perp}(z_{2},b)b^{3}db, \tag{22}\] where the transverse momentum convolution \(\mathcal{C}[\cdots]\) is defined as \[\mathcal{C}[wDD]= \sum_{q}e_{q}^{2}\int\frac{d^{2}\mathbf{p}_{1T}}{z_{1}^{2}}\frac{d^{2}\mathbf{p}_{2T}}{z_{2}^{2}}\delta^{(2)}(-\frac{\mathbf{p}_{1T}}{z_{1}}-\frac{\mathbf{p}_{2T}}{z_{2}}+\frac{\mathbf{P}_{h\perp}}{z_{1}})\] \[\times w(\mathbf{p}_{1T},\mathbf{p}_{2T})D_{q\to h_{1}}(z_{1},p_{1T})D_{\bar{q}\to h_{2}}(z_{2},p_{2T}). \tag{23}\] More details are provided in Appendix C. In order to extract the Collins effect corresponding to the \(\cos 2\phi_{0}\) azimuthal dependence, one can rewrite the differential cross section (20) as \[\frac{d^{5}\sigma}{dz_{1}dz_{2}d^{2}\mathbf{P}_{h\perp}d\cos\theta}= \frac{3\pi\alpha^{2}z_{1}^{2}z_{2}^{2}}{2Q^{2}}(1+\cos^{2}\theta)F_{uu}^{h_{1}h_{2}}R^{h_{1}h_{2}}, \tag{24}\] where \[R^{h_{1}h_{2}}(z_{1},z_{2},\theta,P_{h\perp})=1+\cos(2\phi_{0})\frac{\sin^{2}\theta}{1+\cos^{2}\theta}\frac{F_{Collins}^{h_{1}h_{2}}}{F_{uu}^{h_{1}h_{2}}}. \tag{25}\] The \(P_{h\perp}\)-integrated modulation can accordingly be defined as \[R^{h_{1}h_{2}}(z_{1},z_{2},\theta)\] \[= 1+\cos(2\phi_{0})\frac{\sin^{2}\theta}{1+\cos^{2}\theta}\frac{\int dP_{h\perp}P_{h\perp}F_{Collins}^{h_{1}h_{2}}}{\int dP_{h\perp}P_{h\perp}F_{uu}^{h_{1}h_{2}}}. \tag{26}\] To reduce the systematic uncertainty caused by false asymmetries, the ratio between hadron pair production with unlike-sign charges, labeled by "\(U\)", and that with like-sign charges, labeled by "\(L\)", is usually measured in experiments. Following the above formalism, it can be written as \[R^{UL}=\frac{R^{U}}{R^{L}} =\frac{1+\cos(2\phi_{0})\frac{<\sin^{2}\theta>}{<1+\cos^{2}\theta>}P_{U}}{1+\cos(2\phi_{0})\frac{<\sin^{2}\theta>}{<1+\cos^{2}\theta>}P_{L}}\] \[\simeq 1+\cos(2\phi_{0})\frac{<\sin^{2}\theta>}{<1+\cos^{2}\theta>}(P_{U}-P_{L})\] \[=1+\cos(2\phi_{0})A_{0}^{UL}, \tag{27}\] where \[P_{U}(z_{1},z_{2},P_{h\perp})=\frac{F_{Collins}^{U}}{F_{uu}^{U}}, \tag{28}\] \[P_{L}(z_{1},z_{2},P_{h\perp})=\frac{F_{Collins}^{L}}{F_{uu}^{L}}, \tag{29}\] \[P_{U}(z_{1},z_{2})=\frac{\int dP_{h\perp}P_{h\perp}F_{Collins}^{U}}{\int dP_{h\perp}P_{h\perp}F_{uu}^{U}}, \tag{30}\] \[P_{L}(z_{1},z_{2})=\frac{\int dP_{h\perp}P_{h\perp}F_{Collins}^{L}}{\int dP_{h\perp}P_{h\perp}F_{uu}^{L}}, \tag{31}\] and \[A_{0}^{UL}=\frac{<\sin^{2}\theta>}{<1+\cos^{2}\theta>}(P_{U}-P_{L}), \tag{32}\] is referred to as the Collins asymmetry in the SIA process.
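The linearization step in Eq. (27) is easy to verify numerically. The following toy check (arbitrary illustrative values for \(P_{U}\), \(P_{L}\) and the angular averages) compares the exact double ratio with its linearized form:

```python
import numpy as np

# Toy check of Eqs. (27)-(32); P_U and P_L are illustrative F_Collins/F_uu values.
P_U, P_L = 0.03, -0.02
sin2_avg, one_cos2_avg = 0.55, 1.45          # <sin^2 theta>, <1+cos^2 theta> (toy)

phi0 = np.linspace(0, 2 * np.pi, 7)
R_U = 1 + np.cos(2 * phi0) * sin2_avg / one_cos2_avg * P_U
R_L = 1 + np.cos(2 * phi0) * sin2_avg / one_cos2_avg * P_L

R_UL_exact  = R_U / R_L
A0_UL       = sin2_avg / one_cos2_avg * (P_U - P_L)
R_UL_approx = 1 + np.cos(2 * phi0) * A0_UL   # linearized form of Eq. (27)
print(np.max(np.abs(R_UL_exact - R_UL_approx)))   # small when P_U, P_L << 1
```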
For \(\pi\pi\) channels, one has \[F_{uu}^{U}=F_{uu}^{\pi^{+}\pi^{-}}+F_{uu}^{\pi^{-}\pi^{+}}, \tag{33}\] \[F_{uu}^{L}=F_{uu}^{\pi^{+}\pi^{+}}+F_{uu}^{\pi^{-}\pi^{-}}, \tag{34}\] \[F_{Collins}^{U}=F_{Collins}^{\pi^{+}\pi^{-}}+F_{Collins}^{\pi^{-}\pi^{+}}, \tag{35}\] \[F_{Collins}^{L}=F_{Collins}^{\pi^{+}\pi^{+}}+F_{Collins}^{\pi^{-}\pi^{-}}, \tag{36}\] for \(KK\) channels one has \[F^{U}_{uu} =F^{K^{+}K^{-}}_{uu}+F^{K^{-}K^{+}}_{uu}, \tag{37}\] \[F^{L}_{uu} =F^{K^{+}K^{+}}_{uu}+F^{K^{-}K^{-}}_{uu}, \tag{38}\] \[F^{U}_{Collins} =F^{K^{+}K^{-}}_{Collins}+F^{K^{-}K^{+}}_{Collins}, \tag{39}\] \[F^{L}_{Collins} =F^{K^{+}K^{+}}_{Collins}+F^{K^{-}K^{-}}_{Collins}, \tag{40}\] and for the \(K\pi\) channel one has \[F^{U}_{uu} =F^{\pi^{+}K^{-}}_{uu}+F^{\pi^{-}K^{+}}_{uu}+F^{K^{+}\pi^{-}}_{uu}+F^{K^{-}\pi^{+}}_{uu}, \tag{41}\] \[F^{L}_{uu} =F^{\pi^{+}K^{+}}_{uu}+F^{\pi^{-}K^{-}}_{uu}+F^{K^{+}\pi^{+}}_{uu}+F^{K^{-}\pi^{-}}_{uu}, \tag{42}\] \[F^{U}_{Collins} =F^{\pi^{+}K^{-}}_{Collins}+F^{\pi^{-}K^{+}}_{Collins}+F^{K^{+}\pi^{-}}_{Collins}+F^{K^{-}\pi^{+}}_{Collins}, \tag{43}\] \[F^{L}_{Collins} =F^{\pi^{+}K^{+}}_{Collins}+F^{\pi^{-}K^{-}}_{Collins}+F^{K^{+}\pi^{+}}_{Collins}+F^{K^{-}\pi^{-}}_{Collins}. \tag{44}\]

### TMD evolution formalism

The TMD evolution is implemented in \(b\)-space. There are two types of energy dependence in TMDs, namely \((\mu,\zeta)\), where \(\mu\) is the renormalization scale related to the corresponding collinear PDFs and FFs, and \(\zeta\) serves as a cutoff scale to regularize the light-cone singularity in the operator definition of TMDs. In order to minimize the uncertainty from the scale dependence, the scales are usually set as \(\mu^{2}=\zeta=Q^{2}\). Besides, in a fixed-order perturbative expansion one finds terms containing \([\alpha_{s}\ln^{2}(Qb)]^{n}\) and \([\alpha_{s}\ln(Qb)]^{n}\) at the \(n\)th order in powers of the strong coupling constant \(\alpha_{s}\). To ensure accurate predictions in perturbation theory, these large logarithms have to be resummed to all orders into an evolution factor \(R[b;(\mu_{i},\zeta_{i})\to(Q,Q^{2})]\), which is determined by the equations \[\mu^{2}\frac{dF(x,b;\mu,\zeta)}{d\mu^{2}} =\frac{\gamma_{F}(\mu,\zeta)}{2}F(x,b;\mu,\zeta), \tag{45}\] \[\zeta\frac{dF(x,b;\mu,\zeta)}{d\zeta} =-\mathcal{D}(b,\mu)F(x,b;\mu,\zeta), \tag{46}\] where \(\gamma_{F}(\mu,\zeta)\) and \(\mathcal{D}(b,\mu)\) are the TMD anomalous dimension and the rapidity anomalous dimension, respectively, and \(F\) stands for a generic TMD function, i.e., \(f_{1}(x,b;\mu,\zeta)\), \(h_{1}(x,b;\mu,\zeta)\), \(D_{1}(z,b;\mu,\zeta)\), or \(H^{\perp}_{1T}(z,b;\mu,\zeta)\) in this work. By solving the equations above, the TMD evolution can be expressed as a path integral from \((\mu_{i},\zeta_{i})\) to \((Q,Q^{2})\) as \[R[b;(\mu_{i},\zeta_{i})\to(Q,Q^{2})]\] \[=\exp\left[\int_{\mathcal{P}}\Big{(}\frac{\gamma_{F}(\mu,\zeta)}{\mu}d\mu-\frac{\mathcal{D}(\mu,b)}{\zeta}d\zeta\Big{)}\right], \tag{47}\] Then one can formally relate the TMD functions between \((Q,Q^{2})\) and \((\mu_{i},\zeta_{i})\) via \[f_{1}(x,b;Q,Q^{2})D_{1}(z,b;Q,Q^{2})\] \[= R^{2}[b;(\mu_{i},\zeta_{i})\to(Q,Q^{2})]f_{1}(x,b;\mu_{i},\zeta_{i})D_{1}(z,b;\mu_{i},\zeta_{i}),\] \[h_{1}(x,b;Q,Q^{2})H^{\perp}_{1T}(z,b;Q,Q^{2})\] \[= R^{2}[b;(\mu_{i},\zeta_{i})\to(Q,Q^{2})]h_{1}(x,b;\mu_{i},\zeta_{i})H^{\perp}_{1T}(z,b;\mu_{i},\zeta_{i}),\] \[D_{1}(z,b;Q,Q^{2})D_{1}(z,b;Q,Q^{2})\] \[= R^{2}[b;(\mu_{i},\zeta_{i})\to(Q,Q^{2})]D_{1}(z,b;\mu_{i},\zeta_{i})D_{1}(z,b;\mu_{i},\zeta_{i}),\] \[H^{\perp}_{1T}(z,b;Q,Q^{2})H^{\perp}_{1T}(z,b;Q,Q^{2})\] \[= R^{2}[b;(\mu_{i},\zeta_{i})\to(Q,Q^{2})]H^{\perp}_{1T}(z,b;\mu_{i},\zeta_{i})H^{\perp}_{1T}(z,b;\mu_{i},\zeta_{i}). \tag{48}\]
The evolution factor \(R\) is path independent if the complete perturbative expansion is taken into account, so one can in principle choose the path \(\mathcal{P}\) in Eq. (47) arbitrarily. This property is compromised when the perturbative expansion is truncated, but the discrepancies from path to path diminish as more terms are incorporated in the perturbative expansion. The precision of the various factors in powers of \(\alpha_{s}\) used in the evolution of this work is summarized in Table 1. In the \(\zeta\)-prescription [43], a special path \(\mathcal{P}\) is suggested, so that Eq. (47) takes the simple form \[R[b;(\mu_{i},\zeta_{i})\to(Q,Q^{2})]=\Big{(}\frac{Q^{2}}{\zeta_{\mu}(Q,b)}\Big{)}^{-\mathcal{D}(Q,b)}\, \tag{49}\] where \(\zeta_{\mu}(Q,b)\) is determined by solving the equation \[\frac{d\ln\zeta_{\mu}(\mu,b)}{d\ln\mu^{2}}=\frac{\gamma_{F}(\mu,\zeta_{\mu}(\mu,b))}{2\mathcal{D}(\mu,b)}\, \tag{50}\] with the boundary conditions \[\mathcal{D}(\mu_{0},b)=0\,\quad\gamma_{F}(\mu_{0},\zeta_{\mu}(\mu_{0},b))=0\, \tag{51}\] where \(\mathcal{D}(\mu,b)\) is expressed as \[\mathcal{D}(\mu,b)=\mathcal{D}_{\text{resum}}(\mu,b^{*})+d_{\text{NP}}(b)\, \tag{52}\] with \(d_{\text{NP}}(b)=c_{0}bb^{*}\), and \(\zeta_{\mu}(\mu,b)\) is expressed as \[\zeta_{\mu}(\mu,b) =\zeta_{\mu}^{\text{pert}}(\mu,b)e^{-b^{2}/B_{\text{NP}}^{2}}\] \[\qquad+\zeta_{\mu}^{\text{exact}}(\mu,b)\Big{(}1-e^{-b^{2}/B_{\text{NP}}^{2}}\Big{)}. \tag{53}\] The free parameters are set as \(B_{\text{NP}}=1.93\,\text{GeV}^{-1}\) and \(c_{0}=0.0391\,\text{GeV}^{2}\), as determined in [43] by fitting unpolarized SIDIS and Drell-Yan data. More details on \(\mathcal{D}(\mu,b)\) and \(\zeta_{\mu}(\mu,b)\) can be found in Appendix A.

\begin{table} \begin{tabular}{c|c c c c c} \hline \hline Evolution & \(\Gamma_{\text{cusp}}\) & \(\gamma_{V}\) & \(\mathcal{D}_{\text{resum}}\) & \(\zeta_{\mu}^{\text{pert}}\) & \(\zeta_{\mu}^{\text{exact}}\) \\ NNLO & \(\alpha_{s}^{3}\) & \(\alpha_{s}^{2}\) & \(\alpha_{s}^{2}\) & \(\alpha_{s}^{2}\) & \(\alpha_{s}^{2}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The precision of various factors in powers of \(\alpha_{s}\) for the evolution.

### Unpolarized TMD PDFs and FFs

According to the phenomenological ansatzes in Ref. [43], the unpolarized TMD PDFs and FFs can be written as \[f_{1,f\gets h}(x,b;\mu_{i},\zeta_{i}) =\sum_{f^{\prime}}\int_{x}^{1}\frac{dy}{y}C_{f\gets f^{\prime}}(y,b,\mu_{\rm OPE}^{\rm PDF})\] \[\times f_{1,f^{\prime}\gets h}\Big{(}\frac{x}{y},\mu_{\rm OPE}^{\rm PDF}\Big{)}f_{\rm NP}(x,b),\] \[D_{1,f\to h}(z,b;\mu_{i},\zeta_{i}) =\frac{1}{z^{2}}\sum_{f^{\prime}}\int_{z}^{1}\frac{dy}{y}y^{2}\mathbb{C}_{f\to f^{\prime}}(y,b,\mu_{\rm OPE}^{\rm FF})\] \[\times d_{1,f^{\prime}\to h}\Big{(}\frac{z}{y},\mu_{\rm OPE}^{\rm FF}\Big{)}D_{\rm NP}(z,b), \tag{54}\] where \(f_{\rm NP}(x,b)\) and \(D_{\rm NP}(z,b)\) are nonperturbative functions, \(f_{1,f^{\prime}\gets h}(x,\mu)\) and \(d_{1,f^{\prime}\to h}(z,\mu)\) are collinear PDFs and FFs, and \(C_{f\gets f^{\prime}}(y,b,\mu)\) and \(\mathbb{C}_{f\to f^{\prime}}(y,b,\mu)\) are matching coefficients calculated via operator product expansion methods [44]. The \(C(\mathbb{C})\) functions are taken into account up to the one-loop order, with explicit expressions given in Appendix D.
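The nonperturbative pieces quoted above are simple to code. The sketch below implements \(b^{*}\), \(d_{\rm NP}(b)=c_{0}bb^{*}\), and the evolution factor of Eq. (49) with the quoted parameter values; the \(\zeta_{\mu}\) argument is a toy stand-in, since the actual \(\zeta_{\mu}(Q,b)\) solves Eq. (50):

```python
import numpy as np

# b*, d_NP, and the zeta-prescription evolution factor of Eq. (49).
B_NP = 1.93     # GeV^-1, value quoted in the text
c0   = 0.0391   # GeV^2,  value quoted in the text

def b_star(b):
    return b / np.sqrt(1.0 + b**2 / B_NP**2)

def d_NP(b):
    return c0 * b * b_star(b)

def R_factor(Q, D_of_Qb, zeta_mu):
    # Eq. (49): R = (Q^2 / zeta_mu(Q,b))^(-D(Q,b)); zeta_mu is a toy input here
    return (Q**2 / zeta_mu) ** (-D_of_Qb)

b = np.linspace(0.01, 5.0, 5)
print(b_star(b))                         # saturates below B_NP at large b
print(d_NP(b))                           # ~ c0 b^2 at small b, linear at large b
print(R_factor(Q=2.0, D_of_Qb=0.2, zeta_mu=3.0))
```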
The evolution scales \(\mu_{\rm OPE}^{\rm PDF}\) and \(\mu_{\rm OPE}^{\rm FF}\) within the \(\zeta\)-prescription can be written as [43] \[\mu_{\rm OPE}^{\rm PDF} =\frac{2e^{-\gamma_{E}}}{b}+2\,{\rm GeV}\, \tag{55}\] \[\mu_{\rm OPE}^{\rm FF} =\frac{2e^{-\gamma_{E}}z}{b}+2\,{\rm GeV}\, \tag{56}\] where the \(2\,{\rm GeV}\) is a large-\(b\) offset of \(\mu_{\rm OPE}\), a typical reference scale for PDFs and FFs. The parametrized forms of the nonperturbative functions \(f_{\rm NP}(x,b)\) and \(D_{\rm NP}(z,b)\) can be adopted as [43] \[f_{\rm NP}(x,b) =\exp\Big{[}-\frac{\lambda_{1}(1-x)+\lambda_{2}x+x(1-x)\lambda_{5}}{\sqrt{1+\lambda_{3}x^{\lambda_{4}}b^{2}}}b^{2}\Big{]}, \tag{57}\] \[D_{\rm NP}(z,b) =\exp\Big{[}-\frac{\eta_{1}z+\eta_{2}(1-z)}{\sqrt{1+\eta_{3}(b/z)^{2}}}\frac{b^{2}}{z^{2}}\Big{]}\big{(}1+\eta_{4}\frac{b^{2}}{z^{2}}\big{)}, \tag{58}\] where the parameters \(\lambda\) and \(\eta\) are extracted from the fit of unpolarized SIDIS and Drell-Yan data at low transverse momentum. Their values are listed in Table 2.

## III Extraction of Transversity Distributions and Collins FFs

In this section, we present the global analysis of the SIDIS and SIA data using the above theoretical formalism. The transversity distribution functions and the Collins FFs are parametrized at an initial energy scale. A \(\chi^{2}\) minimization is then performed to simultaneously determine the parameters of the transversity distributions and the Collins FFs. For the uncertainty estimation, we use the replica method. According to Eq. (15) and the evolution equation (48), the Collins asymmetry in the SIDIS process can be written as \[A_{UT}^{\sin(\phi_{h}+\phi_{s})}=M_{h}\frac{\sum_{q}e_{q}^{2}\int_{0}^{\infty}b^{2}J_{1}(bP_{h\perp}/z)R^{2}[b;(\mu_{i},\zeta_{i})\to(Q,Q^{2})]h_{1,q\gets N}(x,b)H_{1,q\to h}^{\perp}(z,b)db}{\sum_{q}e_{q}^{2}\int_{0}^{\infty}bJ_{0}(bP_{h\perp}/z)R^{2}[b;(\mu_{i},\zeta_{i})\to(Q,Q^{2})]f_{1,q\gets N}(x,b)D_{1,q\to h}(z,b)db}, \tag{59}\] where the TMD functions in the integrands are taken at the initial scales \((\mu_{i},\zeta_{i})\).
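The fitting procedure described above reduces to a standard \(\chi^{2}\) minimization. The following schematic sketch (a placeholder model, not the paper's parametrization; in the real fit, `theory` would evaluate Eq. (59) at each data point's kinematics) shows the structure:

```python
import numpy as np
from scipy.optimize import minimize

# Schematic chi^2 fit; all data values and the model are illustrative.
data_kin = np.array([0.1, 0.2, 0.3])        # e.g. x values of the data points
data_val = np.array([0.02, 0.035, 0.05])    # measured Collins asymmetries (toy)
data_err = np.array([0.005, 0.006, 0.008])

def theory(params, kin):
    N, alpha = params                        # hypothetical shape parameters
    return N * kin ** alpha

def chi2(params):
    resid = (theory(params, data_kin) - data_val) / data_err
    return np.sum(resid ** 2)

best = minimize(chi2, x0=[0.1, 1.0], method="Nelder-Mead")
print(best.x, chi2(best.x) / len(data_val))  # parameters and chi^2/N
```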
We parametrize the transversity distributions and the Collins FFs at the initial scale in a similar form to the unpolarized ones in Eq. (54) as \[h_{1,q\gets h}(x,b) = \sum_{q^{\prime}}\int_{x}^{1}\frac{dy}{y}C_{q\gets q^{\prime}}(y,b,\mu_{0})\] \[\times h_{1,q^{\prime}\gets h}\Big{(}\frac{x}{y},\mu_{0}\Big{)}h_{\rm NP}(x,b), \tag{62}\] \[H_{1,q\to h}^{\perp}(z,b) = \frac{1}{z^{2}}\sum_{q^{\prime}}\int_{z}^{1}\frac{dy}{y}y^{2}\mathbb{C}_{q\to q^{\prime}}(y,b,\mu_{0})\] \[\times \hat{H}_{1,q^{\prime}\to h}^{(3)}\Big{(}\frac{z}{y},\mu_{0}\Big{)}H_{\rm NP}(z,b), \tag{63}\] where \(h_{\rm NP}(x,b)\) and \(H_{\rm NP}(z,b)\) are nonperturbative functions, \(h_{1,q^{\prime}\gets h}(x,\mu_{0})\) and \(\hat{H}_{1,q^{\prime}\to h}^{(3)}(z,\mu_{0})\) are the collinear transversity distribution functions and twist-3 FFs, and \(\mu_{0}\) is chosen as \(2\,{\rm GeV}\). The coefficients \(C(\mathbb{C})\) are considered at the leading order [45]: \[C(\mathbb{C})_{q\gets q^{\prime}}=\delta_{qq^{\prime}}\delta(1-y). \tag{64}\] Then we have \[h_{1,q\gets p}(x,b) = h_{1,q\gets p}(x,\mu_{0})h_{\rm NP}(x,b), \tag{65}\] \[H_{1,q\to h}^{\perp}(z,b) = \frac{1}{z^{2}}\hat{H}_{1,q\to h}^{(3)}(z,\mu_{0})H_{\rm NP}(z,b), \tag{66}\] where \(h_{1,q\gets p}(x,b)\) are the transversity distributions of the proton, while the transversity distributions of the neutron, the deuteron, and the \({}^{3}\)He are obtained from \(h_{1,q\gets p}(x,b)\) assuming isospin symmetry and neglecting nuclear modifications, with explicit relations provided in Appendix E. Then we parametrize \(h_{1,q\gets p}(x,\mu_{0})\) and \(\hat{H}_{1,q\to h}^{(3)}(z,\mu_{0})\) as \[h_{1,u\gets p}(x,\mu_{0}) = N_{u}\frac{(1-x)^{\alpha_{u}}x^{\beta_{u}}(1+\epsilon_{u}x)}{n(\beta_{u},\epsilon_{u},\alpha_{u})}\] \[\times f_{1,u\gets p}(x,\mu_{0}), \tag{67}\] \[h_{1,d\gets p}(x,\mu_{0}) = N_{d}\frac{(1-x)^{\alpha_{d}}x^{\beta_{d}}(1+\epsilon_{d}x)}{n(\beta_{d},\epsilon_{d},\alpha_{d})}\] \[\times f_{1,d\gets p}(x,\mu_{0}), \tag{68}\] \[h_{1,\bar{u}\gets p}(x,\mu_{0}) = N_{\bar{u}}\frac{(1-x)^{\alpha_{\bar{u}}}x^{\beta_{\bar{u}}}(1+\epsilon_{\bar{u}}x)}{n(\beta_{\bar{u}},\epsilon_{\bar{u}},\alpha_{\bar{u}})}\] \[\times \Big{(}f_{1,u\gets p}(x,\mu_{0})-f_{1,\bar{u}\gets p}(x,\mu_{0})\Big{)}, \tag{69}\] \[h_{1,\bar{d}\gets p}(x,\mu_{0}) = N_{\bar{d}}\frac{(1-x)^{\alpha_{\bar{d}}}x^{\beta_{\bar{d}}}(1+\epsilon_{\bar{d}}x)}{n(\beta_{\bar{d}},\epsilon_{\bar{d}},\alpha_{\bar{d}})}\] \[\times \Big{(}f_{1,d\gets p}(x,\mu_{0})-f_{1,\bar{d}\gets p}(x,\mu_{0})\Big{)}, \tag{70}\]

\begin{table} \begin{tabular}{l l l l l} \hline \hline Data set & Energy & dependence & Data points & Reaction \\ \hline BELLE [25] & \(10.58\,\text{GeV}\) & \(z\) & \(16\) & \(e^{+}e^{-}\to\pi\pi X\) \\ BABAR [26] & \(10.6\,\text{GeV}\) & \(z\) & \(36\) & \(e^{+}e^{-}\to\pi\pi X\) \\ & & \(P_{h\perp}\) & \(9\) & \(e^{+}e^{-}\to\pi\pi X\) \\ BABAR [27] & \(10.6\,\text{GeV}\) & \(z\) & \(48\) & \(e^{+}e^{-}\to\pi\pi X\) \\ & & \(z\) & & \(e^{+}e^{-}\to\pi KX\) \\ BESIII [28] & \(3.68\,\text{GeV}\) & \(z\) & \(6\) & \(e^{+}e^{-}\to\pi\pi X\) \\ & & \(P_{h\perp}\) & \(5\) & \(e^{+}e^{-}\to\pi\pi X\) \\ \hline \hline \end{tabular} \end{table} Table 4: The world SIA data used in our analysis.
\begin{table} \begin{tabular}{l l l l l l} \hline \hline Data set & Target & Beam & Data points & Reaction & Measurement \\ \hline COMPASS [21] & \({}^{6}\)LiD & \(160\,\text{GeV}\)\(\mu^{+}\) & \(92\) & \(\mu^{+}d\to\mu^{+}\pi^{+}X\) & \(A_{UT}^{\sin(\phi_{h}+\phi_{s}-\pi)}\) \\ & & & & \(\mu^{+}d\to\mu^{+}\pi^{-}X\) & \\ & & & & \(\mu^{+}d\to\mu^{+}K^{+}X\) & \\ & & & & \(\mu^{+}d\to\mu^{+}K^{-}X\) & \\ \hline COMPASS [22] & NH\({}_{3}\) & \(160\,\text{GeV}\)\(\mu^{+}\) & \(92\) & \(\mu^{+}p\to\mu^{+}\pi^{+}X\) & \(A_{UT}^{\sin(\phi_{h}+\phi_{s}-\pi)}\) \\ & & & & \(\mu^{+}p\to\mu^{+}\pi^{-}X\) & \\ & & & & \(\mu^{+}p\to\mu^{+}K^{+}X\) & \\ & & & & \(\mu^{+}p\to\mu^{+}K^{-}X\) & \\ \hline HERMES [20] & H\({}_{2}\) & \(27.6\,\text{GeV}\)\(e^{\pm}\) & \(80\) & \(e^{\pm}p\to e^{\pm}\pi^{+}X\) & \(A_{UT}^{\sin(\phi_{h}+\phi_{s})}\) \\ & & & & \(e^{\pm}p\to e^{\pm}\pi^{-}X\) & \\ & & & & \(e^{\pm}p\to e^{\pm}K^{+}X\) & \\ & & & & \(e^{\pm}p\to e^{\pm}K^{-}X\) & \\ \hline JLab [23] & \({}^{3}\)He & \(5.9\,\text{GeV}\)\(e^{-}\) & \(8\) & \(e^{-}n\to e^{-}\pi^{+}X\) & \(\epsilon A_{UT}^{\sin(\phi_{h}+\phi_{s})}\) \\ & & & & \(e^{-}n\to e^{-}\pi^{-}X\) & \\ \hline JLab [24] & \({}^{3}\)He & \(5.9\,\text{GeV}\)\(e^{-}\) & \(5\) & \(e^{-3}\text{He}\to e^{-}K^{+}X\) & \(\epsilon A_{UT}^{\sin(\phi_{h}+\phi_{s})}\) \\ & & & & \(e^{-3}\text{He}\to e^{-}K^{-}X\) & \\ \hline \hline \end{tabular} \end{table} Table 3: The world SIDIS data used in our analysis.

\[\hat{H}^{(3)}_{1,q\to h}(z,\mu_{0})=N^{h}_{q}\frac{(1-z)^{\alpha^{h}_{q}}z^{\beta^{h}_{q}}(1+\epsilon^{h}_{q}z)}{n(\beta^{h}_{q},\epsilon^{h}_{q},\alpha^{h}_{q})}, \tag{71}\] where \(f_{1,q\gets p}(x,\mu_{0})\) are collinear unpolarized PDFs. The nonperturbative functions \(h_{\rm NP}\) and \(H_{\rm NP}\) for each flavor take the same forms as \(f_{\rm NP}\) and \(D_{\rm NP}\) in Eqs. (57) and (58). However, since the existing world data are limited in quantity and not precise enough to determine so many parameters, we simplify the parametrization by setting \(\eta_{2}=\eta_{1}\) for each Collins FF, and \(\lambda_{1}=\lambda_{2}=r\) and \(\lambda_{3}=\lambda_{4}=\lambda_{5}=0\) for the transversity distribution of each flavor. Furthermore, we use the same \(r_{\bar{u}}=r_{\bar{d}}=r_{\rm sea}\) for the \(\bar{u}\) and \(\bar{d}\) transversity distributions, and set the \(s\) and \(\bar{s}\) transversity distributions to zero. The factor \[n(\beta,\epsilon,\alpha)=\frac{\Gamma(\alpha+1)(2+\alpha+\beta+\epsilon+\epsilon\beta)\Gamma(\beta+1)}{\Gamma(\beta+\alpha+3)}, \tag{72}\] is introduced to reduce the correlation between the parameters controlling the shape and the normalization.
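The normalization factor in Eq. (72) is exactly the integral of the shape function over \(x\), which one can verify directly (a quick check, using as sample inputs the fitted central values for the \(u\) and \(d\) flavors from Table 7):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check that Eq. (72) equals the integral of (1-x)^alpha * x^beta * (1+eps*x).
def n_factor(beta, eps, alpha):
    return gamma(alpha + 1) * (2 + alpha + beta + eps + eps * beta) \
        * gamma(beta + 1) / gamma(beta + alpha + 3)

def n_numeric(beta, eps, alpha):
    return quad(lambda x: (1 - x) ** alpha * x ** beta * (1 + eps * x), 0, 1)[0]

for beta, eps, alpha in [(1.13, 0.17, 0.28), (3.43, 1.17, 5.77), (0.0, 0.0, 0.0)]:
    print(n_factor(beta, eps, alpha), n_numeric(beta, eps, alpha))  # agree
```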
For the parametrizations of the Collins FFs, we use favored and unfavored configurations as \[H^{\perp}_{1,u\to\pi^{+}}=H^{\perp}_{1,\bar{d}\to\pi^{+}}=H^{\perp}_{1,d\to\pi^{-}}=H^{\perp}_{1,\bar{u}\to\pi^{-}}\equiv H^{\pi}_{fav},\] \[H^{\perp}_{1,d\to\pi^{+}}=H^{\perp}_{1,\bar{u}\to\pi^{+}}=H^{\perp}_{1,u\to\pi^{-}}=H^{\perp}_{1,\bar{d}\to\pi^{-}}=H^{\perp}_{1,s\to\pi^{+}}\] \[=H^{\perp}_{1,\bar{s}\to\pi^{+}}=H^{\perp}_{1,s\to\pi^{-}}=H^{\perp}_{1,\bar{s}\to\pi^{-}}\equiv H^{\pi}_{unf},\] \[H^{\perp}_{1,u\to K^{+}}=H^{\perp}_{1,\bar{u}\to K^{-}}=H^{\perp}_{1,\bar{s}\to K^{+}}=H^{\perp}_{1,s\to K^{-}}\equiv H^{K}_{fav},\] \[H^{\perp}_{1,d\to K^{+}}=H^{\perp}_{1,\bar{d}\to K^{+}}=H^{\perp}_{1,d\to K^{-}}=H^{\perp}_{1,\bar{d}\to K^{-}}=H^{\perp}_{1,\bar{u}\to K^{+}}\] \[=H^{\perp}_{1,u\to K^{-}}=H^{\perp}_{1,s\to K^{+}}=H^{\perp}_{1,\bar{s}\to K^{-}}\equiv H^{K}_{unf}. \tag{73}\] As listed in Tables 5 and 6, there are in total 35 free parameters in this fit. Due to limited statistics and phase space coverage, many experimental data sets were analyzed in one-dimensional binnings in the variables \(x\), \(z\), and/or \(P_{h\perp}\). In order to maximally use the data information from binnings in different kinematic variables while avoiding a duplicate usage of the same data set, we assign a weight factor when calculating the \(\chi^{2}\) of each data set. The COMPASS and HERMES data sets are given the weight 1/3 since the binnings in \(x\), \(z\), and \(P_{h\perp}\) are provided from the same collected events. The BABAR [26] and BESIII data are given the weight 1/2 since the binnings in \(z\) and \(P_{h\perp}\) are provided from the same events. To estimate the uncertainty, we randomly shift the central values of the data points by Gaussian distributions with widths given by the experimental uncertainties, and then perform a fit to the smeared data. By repeating this procedure, we create 1000 replicas. The central values of the parameters together with their uncertainties from the fit are listed in Table 7. The total \(\chi^{2}/N\) of the fit and its values for the various experimental data sets are listed in Table 8. Here, \(N\) denotes the number of experimental data points. The comparisons between the experimental data and the theoretical calculations using the 1000 replicas are shown in Figs. 3-10.

\begin{table} \begin{tabular}{c|c c c c c} \hline \hline Transversity & \(r\) & \(\beta\) & \(\epsilon\) & \(\alpha\) & \(N\) \\ \hline \(u\) & \(r_{u}\) & \(\beta_{u}\) & \(\epsilon_{u}\) & \(\alpha_{u}\) & \(N_{u}\) \\ \(d\) & \(r_{d}\) & \(\beta_{d}\) & \(\epsilon_{d}\) & \(\alpha_{d}\) & \(N_{d}\) \\ \(\bar{u}\) & \(r_{\rm sea}\) & 0 & 0 & 0 & \(N_{\bar{u}}\) \\ \(\bar{d}\) & \(r_{\rm sea}\) & 0 & 0 & 0 & \(N_{\bar{d}}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Free parameters for the transversity parametrizations.

Figure 4: Comparison of COMPASS Collins asymmetry data [22] to theoretical calculations for \(\pi^{+}\), \(\pi^{-}\), \(K^{+}\), and \(K^{-}\) productions from a proton target. The markers and bands have the same meaning as in Fig. 3.

Figure 5: Comparison of COMPASS Collins asymmetry data [21] to theoretical calculations for \(\pi^{+}\), \(\pi^{-}\), \(K^{+}\) and \(K^{-}\) productions from a deuteron target. The markers and bands have the same meaning as in Fig. 3.
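The replica procedure described above is simple to sketch. In the toy version below, the full \(\chi^{2}\) minimization is replaced by a placeholder weighted mean so that the smear-and-refit loop is visible on its own:

```python
import numpy as np

# Replica method: smear each data point by a Gaussian of width equal to its
# uncertainty, refit, repeat; spread of replicas estimates the uncertainty.
rng = np.random.default_rng(1)
data_val = np.array([0.02, 0.035, 0.05])     # toy measurements
data_err = np.array([0.005, 0.006, 0.008])

def fit(values):
    # placeholder for the full chi^2 minimization; here just a weighted mean
    w = 1.0 / data_err ** 2
    return np.sum(w * values) / np.sum(w)

replicas = [fit(data_val + data_err * rng.standard_normal(data_val.size))
            for _ in range(1000)]
print(np.mean(replicas), np.std(replicas))   # central value and uncertainty
```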
The first transverse moment of the Collins FF, \(H_{1}^{\perp(1)}(z)\), and that of the transversity distribution, \(h_{1}(x)\), are defined as \[H_{1}^{\perp(1)}(z) =\int d^{2}\mathbf{p}_{T}\frac{p_{T}^{2}}{2z^{2}M_{h}^{2}}H_{1}^{\perp}(z,p_{T}), \tag{74}\] \[h_{1}(x) =\int d^{2}\mathbf{k}_{\perp}h_{1}(x,k_{\perp}). \tag{75}\] Their results are shown in Fig. 11 and Fig. 12. One can observe that the \(\bar{u}\) transversity distribution favors a negative value about 2\(\sigma\) away from zero, while the \(\bar{d}\) transversity distribution is consistent with zero within the 1\(\sigma\) band. The \(u\) and \(d\) transversity distributions are consistent with previous global analyses within the uncertainties. The tensor charges can be evaluated from the integrals of the transversity distributions as \[\delta u =\int_{0}^{1}dx(h_{1}^{u}(x)-h_{1}^{\bar{u}}(x)), \tag{76}\] \[\delta d =\int_{0}^{1}dx(h_{1}^{d}(x)-h_{1}^{\bar{d}}(x)), \tag{77}\] and the isovector combination is given by \[g_{T}=\delta u-\delta d. \tag{78}\] The tensor charges extracted from our analysis, compared with the results from previous phenomenological studies, lattice calculations, and Dyson-Schwinger equations, are shown in Figs. 13 and 14. It is not a surprise that the uncertainties of our result are larger than those from previous phenomenological studies of SIDIS and SIA data, because we include more flavors, \(\bar{u}\) and \(\bar{d}\), and thus the functions are less constrained. We would like to note that the negative \(\bar{u}\) transversity distribution shifts \(\delta u\) as well as \(g_{T}\) to greater values, though with large uncertainties. The tension between lattice QCD calculations and TMD phenomenological extractions disappears when the antiquark transversity distributions are taken into account. In previous works, such tension was only resolved by imposing the lattice data in the fit [37].

\begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline Collins & \(\eta_{1}\) & \(\eta_{3}\) & \(\eta_{4}\) & \(\beta\) & \(\epsilon\) & \(\alpha\) & \(N\) \\ \hline \(\pi_{fav}\) & \(\eta_{1f}^{\pi}\) & \(\eta_{3f}^{\pi}\) & \(\eta_{4f}^{\pi}\) & \(\beta_{f}^{\pi}\) & 0 & \(\alpha_{f}^{\pi}\) & \(N_{f}^{\pi}\) \\ \(\pi_{unf}\) & \(\eta_{1u}^{\pi}\) & \(\eta_{3u}^{\pi}\) & \(\eta_{4u}^{\pi}\) & \(\beta_{u}^{\pi}\) & 0 & \(\alpha_{u}^{\pi}\) & \(N_{u}^{\pi}\) \\ \(K_{fav}\) & \(\eta_{1f}^{K}\) & 0 & \(\eta_{4f}^{K}\) & \(\beta_{f}^{K}\) & 0 & \(\alpha_{f}^{K}\) & \(N_{f}^{K}\) \\ \(K_{unf}\) & \(\eta_{1u}^{K}\) & 0 & \(\eta_{4u}^{K}\) & \(\beta_{u}^{K}\) & 0 & \(\alpha_{u}^{K}\) & \(N_{u}^{K}\) \\ \hline \hline \end{tabular} \end{table} Table 6: Free parameters for the parametrizations of Collins FFs. The labels "\(f\)" and "\(u\)" stand for favored and unfavored, respectively.

Figure 6: Comparison of JLab Collins asymmetry data [23; 24] to theoretical calculations for \(\pi^{+}\), \(\pi^{-}\), \(K^{+}\) and \(K^{-}\) productions from a \({}^{3}\)He target. The asymmetries for \(\pi^{+}\) and \(\pi^{-}\) productions in the left panel have been extracted at the neutron level while the kaon results are at the \({}^{3}\)He level due to limited statistics. The markers and bands have the same meaning as in Fig. 3.

Figure 7: Comparison of BELLE \(\pi\pi\) channel Collins asymmetry data [25] to theoretical calculations in the SIA process. The markers and bands have the same meaning as in Fig. 3.
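The tensor-charge integrals of Eqs. (76)-(78) reduce to one-dimensional quadratures once a parametrization is fixed. The sketch below uses the shape of Eqs. (67)-(68) with central values from Table 7 but a toy stand-in for \(f_{1}(x,\mu_{0})\), so the numbers are illustrative only:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Toy tensor charges from a valence-like transversity shape (Eqs. 67-68, 76-78).
def h1_toy(x, N=0.34, beta=1.13, eps=0.17, alpha=0.28):
    n = gamma(alpha + 1) * (2 + alpha + beta + eps + eps * beta) \
        * gamma(beta + 1) / gamma(beta + alpha + 3)
    f1 = x ** -0.5 * (1 - x) ** 3            # stand-in for f_1(x, mu_0)
    return N * (1 - x) ** alpha * x ** beta * (1 + eps * x) / n * f1

delta_u = quad(lambda x: h1_toy(x), 0, 1)[0]  # antiquark piece omitted here
delta_d = quad(lambda x: h1_toy(x, N=-1.37, beta=3.43, eps=1.17, alpha=5.77), 0, 1)[0]
g_T = delta_u - delta_d                       # isovector combination, Eq. (78)
print(delta_u, delta_d, g_T)
```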
Figure 8: Comparison of BABAR \(\pi\pi\) channel Collins asymmetry data [26] to theoretical calculations in the SIA process as a function of \(z\) (left) and \(P_{h\perp}\) (right). The markers and bands have the same meaning as in Fig. 3.

Figure 10: Comparison of BESIII \(\pi\pi\) channel Collins asymmetry data [28] to theoretical calculations in the SIA process as a function of \(z\) (left) and \(P_{h\perp}\) (right). The markers and bands have the same meaning as in Fig. 3.

\begin{table} \begin{tabular}{l l|l l l l} \hline \hline Transversity & Value & Collins & Value & Collins & Value \\ \(r_{u}\) & \(0.12^{+0.04}_{-0.04}\) & \(\eta_{1f}^{\pi}\) & \(0.06^{+0.02}_{-0.01}\) & \(\beta_{u}^{K}\) & \(9.18^{+15.5}_{-7.59}\) \\ \(r_{d}\) & \(0.14^{+0.85}_{-0.14}\) & \(\eta_{3f}^{\pi}\) & \(0.09^{+0.12}_{-0.03}\) & \(\alpha_{f}^{\pi}\) & \(1.83^{+0.45}_{-0.31}\) \\ \(r_{\rm sea}\) & \(0.70^{+0.28}_{-0.38}\) & \(\eta_{4f}^{\pi}\) & \(3.27^{+0.07}_{-0.07}\) & \(\alpha_{u}^{\pi}\) & \(6.11^{+1.22}_{-1.22}\) \\ \(\beta_{u}\) & \(1.13^{+0.38}_{-0.32}\) & \(\eta_{1u}^{\pi}\) & \(0.03^{+0.01}_{-0.01}\) & \(\alpha_{f}^{K}\) & \(0.70^{+1.68}_{-0.51}\) \\ \(\beta_{d}\) & \(3.43^{+8.58}_{-1.74}\) & \(\eta_{3u}^{\pi}\) & \(0.04^{+0.02}_{-0.02}\) & \(\alpha_{u}^{K}\) & \(28.21^{+44.14}_{-22.15}\) \\ \(\epsilon_{u}\) & \(0.17^{+1.44}_{-1.42}\) & \(\eta_{4u}^{\pi}\) & \(0.005^{+0.013}_{-0.002}\) & \(N_{f}^{\pi}\) & \(0.007^{+0.003}_{-0.002}\) \\ \(\epsilon_{d}\) & \(1.17^{+4.27}_{-1.42}\) & \(\eta_{1f}^{K}\) & \(0.03^{+0.03}_{-0.03}\) & \(N_{u}^{\pi}\) & \(-3.83^{+1.06}_{-1.06}\) \\ \(\alpha_{u}\) & \(0.28^{+1.04}_{-0.40}\) & \(\eta_{4f}^{K}\) & \(1.15^{+0.71}_{-0.94}\) & \(N_{f}^{K}\) & \(0.06^{+0.10}_{-0.04}\) \\ \(\alpha_{d}\) & \(5.77^{+2.88}_{-4.91}\) & \(\eta_{1u}^{K}\) & \(0.02^{+0.08}_{-0.02}\) & \(N_{u}^{K}\) & \(-0.02^{+0.01}_{-0.05}\) \\ \(N_{u}\) & \(0.34^{+0.69}_{-0.36}\) & \(\eta_{4u}^{K}\) & \(0.71^{+3.80}_{-0.61}\) & & \\ \(N_{d}\) & \(-1.37^{+1.23}_{-0.60}\) & \(\beta_{f}^{\pi}\) & \(2.82^{+1.17}_{-0.64,24}\) & & \\ \(N_{\bar{u}}\) & \(-0.12^{+0.40}_{-0.46}\) & \(\beta_{u}^{\pi}\) & \(-0.23^{+0.34}_{-0.34}\) & & \\ \(N_{\bar{d}}\) & \(0.10^{+0.46}_{-0.16}\) & \(\beta_{f}^{K}\) & \(-0.38^{+1.31}_{-0.37}\) & & \\ \hline \hline \end{tabular} \end{table} Table 7: The values of the free parameters from the fit to the world SIDIS and SIA data. The central values are the averaged results from 1000 replicas, and the uncertainties are the standard deviations from 1000 replicas. The values of \(r\) and \(\eta\) are provided in units of GeV\({}^{2}\) and the others are unitless.

Figure 9: Comparison of BABAR \(\pi\pi\), \(K\pi\) and \(KK\) channels Collins asymmetry data [27] to theoretical calculations in the SIA process. The markers and bands have the same meaning as in Fig. 3.

## IV EicC Projections on Transversity Distributions and Collins FFs

The EicC SIDIS pseudodata are produced by the Monte Carlo event generator \(\mathtt{SIDIS-RC\,EvGen}\) [55], in which the unpolarized SIDIS differential cross section is derived from a global fit to the multiplicity data from the HERMES and COMPASS experiments. Based on the EicC conceptual design, the electron beam energy is \(3.5\,\mathrm{GeV}\), the proton beam energy is \(20\,\mathrm{GeV}\), and the \({}^{3}\)He beam energy is \(40\,\mathrm{GeV}\). Physical cuts \(Q^{2}>1\,\mathrm{GeV}^{2}\), \(0.3<z<0.7\), \(W>5\,\mathrm{GeV}\) and \(W^{{}^{\prime}}>2\,\mathrm{GeV}\) are adopted to select events in the deep inelastic region.
We estimate the statistics by assuming an integrated luminosity of \(50\,\mathrm{fb}^{-1}\) for \(ep\) collisions and \(50\,\mathrm{fb}^{-1}\) for \(e^{3}\mathrm{He}\) collisions. Based on the designed instantaneous luminosity of \(2\times 10^{33}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}\), it is estimated that \(50\,\mathrm{fb}^{-1}\) of accumulated luminosity can be attained in approximately one year of operation. Keeping the statistical uncertainty at the \(10^{-3}\) level, we obtain 4627 data points in four-dimensional bins in \(x\), \(Q^{2}\), \(z\), and \(P_{h\perp}\). The EicC pseudo-data provide significantly more data points with higher precision, enabling us to impose more rigorous kinematic cuts for a more precise selection of data in the TMD region. In this study, only small transverse momentum data with \(\delta=|P_{h\perp}|/(zQ)<0.3\) are selected. After applying this data selection cut, 1347 EicC pseudo-data points remain. The distributions of all 4627 EicC pseudo-data points are shown in Fig. 15, where the colored points are selected in the fit while the gray ones are not.

Figure 11: Collins functions as defined in Eq. (74) with the \(\mathbf{p}_{T}\)-integral truncated at \(1\,\mathrm{GeV}\) and \(Q=2\,\mathrm{GeV}\). The green bands represent the uncertainties of the fit to the world SIDIS and SIA data, the red bands represent the EicC projections with only statistical uncertainties, and the blue bands represent the EicC projections including systematic uncertainties as described in the text.

The Collins asymmetry values of the EicC pseudo-data are calculated using the central value of the 1000 replicas from the fit to the world data. For systematic uncertainties, we assign a 3% relative uncertainty for the proton data, mainly due to the precision of the beam polarimetry, and a 5% relative uncertainty for the neutron data, mainly due to the precision of the beam polarimetry and nuclear effects. Total uncertainties are evaluated via the quadrature combination of statistical and systematic uncertainties. The precise EicC data with wide kinematics coverage allow us to adopt a more flexible parametrization of the transversity functions. Therefore, we open the channels of the \(s\) and \(\bar{s}\) transversity functions in the fit with the following parametrizations, \[h_{1,s\gets p}(x,b) =N_{s}\frac{(1-x)^{\alpha_{s}}x^{\beta_{s}}(1+\epsilon_{s}x)}{n(\beta_{s},\epsilon_{s},\alpha_{s})}\exp\Big{(}-r_{\rm sea}b^{2}\Big{)}\] \[\times\Big{(}f_{1,u\gets p}(x,\mu_{0})-f_{1,\bar{u}\gets p}(x,\mu_{0})\Big{)}, \tag{79}\] \[h_{1,\bar{s}\gets p}(x,b) =N_{\bar{s}}\frac{(1-x)^{\alpha_{\bar{s}}}x^{\beta_{\bar{s}}}(1+\epsilon_{\bar{s}}x)}{n(\beta_{\bar{s}},\epsilon_{\bar{s}},\alpha_{\bar{s}})}\exp\Big{(}-r_{\rm sea}b^{2}\Big{)}\] \[\times\Big{(}f_{1,u\gets p}(x,\mu_{0})-f_{1,\bar{u}\gets p}(x,\mu_{0})\Big{)}. \tag{80}\]

Figure 12: Transversity functions as defined in Eq. (75) with the \(\mathbf{k}_{\perp}\)-integral truncated at \(1\,\mathrm{GeV}\) and \(Q=2\,\mathrm{GeV}\). The green bands represent the uncertainties of the fit to the World SIDIS and SIA data, the red bands represent the EicC projections with only statistical uncertainties, and the blue bands represent the EicC projections including systematic uncertainties as described in the text.
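The event selection described above amounts to a set of array masks. A minimal illustration, with randomly generated stand-ins for the event kinematics, is:

```python
import numpy as np

# Illustrative TMD-region selection using the cuts quoted in the text.
rng = np.random.default_rng(2)
n = 100_000
Q2  = rng.uniform(0.5, 50.0, n)     # GeV^2
z   = rng.uniform(0.1, 0.9, n)
W   = rng.uniform(2.0, 20.0, n)     # GeV
Wp  = rng.uniform(1.0, 10.0, n)     # GeV, W'
PhT = rng.uniform(0.0, 1.5, n)      # GeV

dis_region = (Q2 > 1.0) & (z > 0.3) & (z < 0.7) & (W > 5.0) & (Wp > 2.0)
delta = PhT / (z * np.sqrt(Q2))
tmd_region = dis_region & (delta < 0.3)   # the delta = |P_hT|/(zQ) < 0.3 cut
print(dis_region.sum(), "DIS events ->", tmd_region.sum(), "in the TMD region")
```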
Figure 13: Tensor charge for the \(u\)-quark and \(d\)-quark from our study at 68% C.L. along with the results from Dyson-Schwinger equation calculations [46; 47; 48], lattice QCD calculations [6; 7; 8; 9; 10; 11], and phenomenological extractions from data [31; 32; 33; 34; 35; 36; 37; 38; 49; 50].

Then, we have 37 free parameters for the EicC pseudo-data fit, as listed in Tables 9 and 6. To estimate the impact of the EicC on the extraction of the transversity distribution functions and the Collins FFs, we perform a simultaneous fit to the world data and the EicC pseudo-data as described above. Following the same procedure, 300 replicas are created by randomly shifting the values according to the simulated statistical uncertainty and the total uncertainty, respectively. The EicC projections for \(H_{1}^{\perp(1)}(z)\), \(h_{1}(x)\), and the tensor charges are shown in Figs. 11-14, respectively. The transverse momentum distributions of the Collins and transversity functions are shown in Figs. 16 and 17 via slices at various \(x\) and \(z\) values. The mean values of the transversity functions for the \(u\) and \(d\) quarks at different \(Q\) are shown in Fig. 18; one can observe that the transversity functions are expected to have stronger signals in the kinematics region covered by the EicC.

## V Summary

In this paper, we present a global analysis of the transversity distribution functions and the Collins FFs by simultaneously fitting SIDIS and SIA data within the TMD factorization. Nonzero \(\bar{u}\) and \(\bar{d}\) transversity distributions are taken into account. The result favors a negative \(\bar{u}\) transversity distribution with a significance of two standard deviations, while no hint is found for a nonvanishing \(\bar{d}\) transversity distribution at the current accuracy. The results for the \(u\) and \(d\) transversity distributions and for the Collins FFs are consistent with previous phenomenological analyses by other groups. The tensor charges evaluated from the moments of the transversity distributions are consistent with lattice QCD calculations as well as other global fits within the uncertainties, and thus no tension exists between lattice calculations and TMD extractions once the antiquark contributions are taken into account. We note that these findings are based on the exploratory measurements available worldwide. To draw decisive conclusions, data with high precision and wide phase space coverage are desired, which can be achieved at the future JLab programs and the EICs. Based on the fit to the existing world data, we investigated the impact of the proposed EicC on the extraction of the transversity TMDs and the Collins FFs. With the EicC pseudo-data, one can extract the transversity functions at high precision for various quark flavors and thus determine the proton tensor charge with a precision comparable to that of lattice calculations. Moreover, the precise and wide kinematics coverage of the EicC pseudo-data allows us to use much more flexible parametrizations, which minimizes the bias on the transversity functions, and to make a cleaner selection of data for TMD studies by applying a stricter requirement on \(\delta\equiv|P_{h\perp}|/(zQ)\) to restrict the data to the low transverse momentum region, suitable for the application of TMD factorization. The EicC will fill the kinematics gap between the coverage of the JLab 12 GeV program and the EIC at BNL. Combining all these measurements, we will be able to obtain a complete physical picture of the three-dimensional structure of the nucleon.
On the other hand, in the \(x-Q^{2}\) region covered by the EicC, the transversity functions are expected to have significant signals, which is an advantage for TMD studies at a collider with a moderate center-of-mass energy but high instantaneous luminosity [56].

###### Acknowledgements.

C.Z. is grateful for the valuable discussions with Zhi Hu at the Institute of Modern Physics. This work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences under grant number XDB34000000, the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, the Guangdong Provincial Key Laboratory of Nuclear Science with No. 2019B121203010, the National Natural Science Foundation of China under Contracts No. 12175117, No. 12321005, No. 11975127, and No. 12061131006, and by the Shandong Provincial Natural Science Foundation under contract ZFJH202303. The authors also acknowledge the computing resources available at the Southern Nuclear Science Computing Center.

Figure 15: Kinematic distributions of the EicC pseudo-data in the \(x-Q^{2}\) (left) and \(z-P_{h\perp}\) (right) planes. Each bin is plotted as a point at the bin-center kinematic values. The blue points are the proton data with \(\delta<0.3\), the red points are the neutron data with \(\delta<0.3\), and the gray points are the data with \(\delta>0.3\).

\begin{table} \begin{tabular}{c|c c c c c} \hline \hline Transversity & \(r\) & \(\beta\) & \(\epsilon\) & \(\alpha\) & \(N\) \\ \hline \(u\) & \(r_{u}\) & \(\beta_{u}\) & \(\epsilon_{u}\) & \(\alpha_{u}\) & \(N_{u}\) \\ \(d\) & \(r_{d}\) & \(\beta_{d}\) & \(\epsilon_{d}\) & \(\alpha_{d}\) & \(N_{d}\) \\ \(\bar{u}\) & \(r_{\rm sea}\) & 0 & 0 & 0 & \(N_{\bar{u}}\) \\ \(\bar{d}\) & \(r_{\rm sea}\) & 0 & 0 & 0 & \(N_{\bar{d}}\) \\ \(s\) & \(r_{\rm sea}\) & 0 & 0 & 0 & \(N_{s}\) \\ \(\bar{s}\) & \(r_{\rm sea}\) & 0 & 0 & 0 & \(N_{\bar{s}}\) \\ \hline \hline \end{tabular} \end{table} Table 9: Free parameters for the transversity parametrization for the fit to the EicC pseudo-data.

## Appendix A Evolution and resummation

Through the integrability condition (also known as the Collins-Soper (CS) equation [57]), \[\zeta\frac{d}{d\zeta}\gamma_{F}(\mu,\zeta)=-\mu\frac{d}{d\mu}\mathcal{D}(\mu,b)=-\Gamma_{\rm cusp}(\mu), \tag{10}\] the anomalous dimension \(\gamma_{F}(\mu,\zeta)\) can be written as \[\gamma_{F}(\mu,\zeta)=\Gamma_{\rm cusp}(\mu)\ln\Big{(}\frac{\mu^{2}}{\zeta}\Big{)}-\gamma_{V}(\mu), \tag{11}\] where \(\Gamma_{\rm cusp}(\mu)\) is the cusp anomalous dimension and \(\gamma_{V}(\mu)\) is the finite part of the renormalization of the vector form factor. These factors can be expanded as series in the strong coupling constant \(\alpha_{s}\), \[\Gamma_{\rm cusp}(\mu) =\sum_{n=0}^{\infty}a_{s}^{n+1}\Gamma_{n}, \tag{12}\] \[\gamma_{V}(\mu) =\sum_{n=1}^{\infty}a_{s}^{n}\gamma_{n}, \tag{13}\] where \(a_{s}=\alpha_{s}/(4\pi)\).
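Eqs. (11)-(13) translate directly into code. The sketch below evaluates \(\gamma_{F}(\mu,\zeta)\) using the two-loop coefficients quoted just below in this appendix:

```python
import numpy as np

# gamma_F of Eq. (11) from the truncated series of Eqs. (12)-(13).
CF, CA, TR, Nf = 4.0 / 3.0, 3.0, 0.5, 4
zeta3 = 1.2020569

Gamma = [4 * CF,
         4 * CF * ((67.0 / 9.0 - np.pi**2 / 3.0) * CA - 20.0 / 9.0 * TR * Nf)]
gammaV = [-6 * CF,
          CF**2 * (-3 + 4 * np.pi**2 - 48 * zeta3)
          + CF * CA * (-961.0 / 27.0 - 11 * np.pi**2 / 3.0 + 52 * zeta3)
          + CF * TR * Nf * (260.0 / 27.0 + 4 * np.pi**2 / 3.0)]

def gamma_F(a_s, mu2, zeta):
    Gcusp = sum(a_s ** (n + 1) * G for n, G in enumerate(Gamma))
    gV = sum(a_s ** (n + 1) * g for n, g in enumerate(gammaV))
    return Gcusp * np.log(mu2 / zeta) - gV

print(gamma_F(a_s=0.3 / (4 * np.pi), mu2=4.0, zeta=4.0))  # at mu^2 = zeta = Q^2
```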
When \(\mu\gg\Lambda_{\rm QCD}\), the coefficients \(\Gamma_{n}\) and \(\gamma_{n}\) can be calculated order by order in perturbative QCD; up to two-loop order, they are \[\Gamma_{0} =4C_{F}, \tag{14}\] \[\Gamma_{1} =4C_{F}\Big{[}\big{(}\frac{67}{9}-\frac{\pi^{2}}{3}\big{)}C_{A}-\frac{20}{9}T_{R}N_{f}\Big{]}, \tag{15}\] \[\gamma_{1} =-6C_{F}, \tag{16}\] \[\gamma_{2} =C_{F}^{2}(-3+4\pi^{2}-48\zeta_{3})\] \[+C_{F}C_{A}\Big{(}-\frac{961}{27}-\frac{11\pi^{2}}{3}+52\zeta_{3}\Big{)}\] \[+C_{F}T_{R}N_{f}\Big{(}\frac{260}{27}+\frac{4\pi^{2}}{3}\Big{)}, \tag{17}\] where \(C_{F}=4/3\), \(C_{A}=3\), and \(T_{R}=1/2\) are color factors of \(SU(3)\). In this work, we choose \(N_{f}=4\), ignoring the heavy-quark contributions, and \(\zeta_{3}\approx 1.202\) is Apéry's constant. Meanwhile, the integrability condition Eq. (10) is satisfied together with the renormalization group equation \[\mu^{2}\frac{d\mathcal{D}(\mu,b)}{d\mu^{2}}=\frac{\Gamma_{\rm cusp}(\mu)}{2}, \tag{18}\] and consequently the rapidity anomalous dimension \(\mathcal{D}(\mu,b)\) can be calculated perturbatively at small \(b\) with a similar expansion in powers of \(a_{s}\), \[\mathcal{D}_{\rm pert}(\mu,b)=\sum_{n=0}^{\infty}a_{s}^{n}d_{n}(\mathbf{L}_{\mu}), \tag{19}\] where \[\mathbf{L}_{\mu}=\ln(\frac{\mu^{2}b^{2}}{4e^{-2\gamma_{E}}}), \tag{20}\] with the Euler-Mascheroni constant \(\gamma_{E}\).

Figure 16: The transverse momentum distribution of the Collins functions at different \(z\) values and \(Q=2\,\mathrm{GeV}\). The green bands represent the uncertainties of the fit to the world SIDIS and SIA data, the red bands represent the EicC projections with only statistical uncertainties, and the blue bands represent the EicC projections including systematic uncertainties as described in the text.

The function \(d_{n}(\mathbf{L}_{\mu})\) can be expressed up to two-loop order as \[d_{0}(\mathbf{L}_{\mu}) =0, \tag{110}\] \[d_{1}(\mathbf{L}_{\mu}) =\frac{\Gamma_{0}}{2}\mathbf{L}_{\mu}, \tag{111}\] \[d_{2}(\mathbf{L}_{\mu}) =\frac{\Gamma_{0}}{4}\beta_{0}\mathbf{L}_{\mu}^{2}+\frac{\Gamma_{1}}{2}\mathbf{L}_{\mu}+d_{2}(0), \tag{112}\] where \[d_{2}(0)=C_{F}C_{A}\Big{(}\frac{404}{27}-14\zeta_{3}\Big{)}-\frac{112}{27}T_{R}N_{f}C_{F}. \tag{113}\] To improve the convergence properties of \(\mathcal{D}_{\rm pert}(\mu,b)\), we employ the resummed expression \(\mathcal{D}_{\rm resum}\), which can be obtained by adopting the approach outlined in [58], \[\mathcal{D}_{\rm resum}(\mu,b)=-\frac{\Gamma_{0}}{2\beta_{0}}\ln(1-X)\] \[+\frac{a_{s}}{2\beta_{0}(1-X)}\Big{[}-\frac{\beta_{1}\Gamma_{0}}{\beta_{0}}(\ln(1-X)+X)+\Gamma_{1}X\Big{]}\] \[+\frac{a_{s}^{2}}{(1-X)^{2}}\Big{[}\frac{\Gamma_{0}\beta_{1}^{2}}{4\beta_{0}^{3}}(\ln^{2}(1-X)-X^{2})\] \[+\frac{\beta_{1}\Gamma_{1}}{4\beta_{0}^{2}}\big{(}X^{2}-2X-2\ln(1-X)\big{)}\] \[+\frac{\Gamma_{0}\beta_{2}}{4\beta_{0}^{2}}X^{2}-\frac{\Gamma_{2}}{4\beta_{0}}X(X-2)\] \[+C_{F}C_{A}\Big{(}\frac{404}{27}-14\zeta_{3}\Big{)}-\frac{112}{27}T_{R}N_{f}C_{F}\Big{]}, \tag{114}\] where \(X=\beta_{0}a_{s}\mathbf{L}_{\mu}\) and the QCD \(\beta\) function can be expressed as
\[\beta(\alpha_{s}) =-2\alpha_{s}\sum_{n=1}^{\infty}\beta_{n-1}(\frac{\alpha_{s}}{4\pi})^{n}, \tag{108}\] \[\beta_{0} =\frac{11}{3}C_{A}-\frac{4}{3}T_{R}N_{f},\] \[\beta_{1} =\frac{34}{3}C_{A}^{2}-\frac{20}{3}C_{A}T_{R}N_{f}-4C_{F}T_{R}N_{f},\] \[\beta_{2} =\frac{2857}{54}C_{A}^{3}+\big{(}2C_{F}^{2}-\frac{205}{9}C_{F}C_{A}-\frac{1415}{27}C_{A}^{2}\big{)}T_{R}N_{f}\] \[+\big{(}\frac{44}{9}C_{F}+\frac{158}{27}C_{A}\big{)}T_{R}^{2}N_{f}^{2}. \tag{109}\]

Figure 17: The transverse momentum distribution of the transversity functions at different \(x\) values and \(Q=2\,\mathrm{GeV}\). The green bands represent the uncertainties of the fit to the world SIDIS and SIA data, the red bands represent the EicC projections with only statistical uncertainties, and the blue bands represent the EicC projections including systematic uncertainties as described in the text.

\(\mathcal{D}_{\rm resum}\) is valid only in the small-\(b\) region. Therefore, a nonperturbative function is required to model the large-\(b\) contribution, which is adopted as \(d_{\rm NP}\) with the form of a linear function according to Refs. [59; 60; 61; 62; 63], \[d_{\rm NP}(b)=c_{0}bb^{*}, \tag{110}\] where \[b^{*}=\frac{b}{\sqrt{1+b^{2}/B_{\rm NP}^{2}}}. \tag{111}\] For arbitrarily large \(b\) one has \(b^{*}<B_{\rm NP}\), while \(b^{*}\approx b\) for small \(b\). Finally, \(\mathcal{D}(\mu,b)\) can be written as \[\mathcal{D}(\mu,b)=\mathcal{D}_{\rm resum}(\mu,b^{*})+d_{\rm NP}(b). \tag{112}\] According to the \(\zeta\)-prescription [43], the TMD evolution can be written in the following simple form, \[R[b;(\mu_{i},\zeta_{i})\rightarrow(Q,Q^{2})]=\Big{(}\frac{Q^{2}}{\zeta_{\mu}(Q,b)}\Big{)}^{-\mathcal{D}(Q,b)}\, \tag{113}\] where \(\zeta_{\mu}(Q,b)\) is obtained by solving the equation \[\frac{d\ln\zeta_{\mu}(\mu,b)}{d\ln\mu^{2}}=\frac{\gamma_{F}(\mu,\zeta_{\mu}(\mu,b))}{2\mathcal{D}(\mu,b)}\, \tag{114}\] using Eq. (112) as an input and the boundary conditions \[\mathcal{D}(\mu_{0},b)=0\,\quad\gamma_{F}(\mu_{0},\zeta_{\mu}(\mu_{0},b))=0. \tag{115}\] In order to utilize the perturbative solution in the small-\(b\) region for \(\zeta_{\mu}(\mu,b)\), we apply the formula of Ref. [64], \[\zeta_{\mu}(\mu,b) =\zeta_{\mu}^{\rm pert}(\mu,b)e^{-b^{2}/B_{\rm NP}^{2}}\] \[\quad+\zeta_{\mu}^{\rm exact}(\mu,b)\Big{(}1-e^{-b^{2}/B_{\rm NP}^{2}}\Big{)}. \tag{116}\] The perturbative solution of Eq. (114) can be written as \[\zeta_{\mu}^{\rm pert}(\mu,b)=\frac{2\mu e^{-\gamma_{E}}}{b}e^{-v(\mu,b)}\, \tag{117}\] which is consistent with the pQCD result by construction [65]. Up to two-loop order, \(v(\mu,b)\) can be written as \[v(\mu,b)=\frac{\gamma_{1}}{\Gamma_{0}}+a_{s}\Big{[}\frac{\beta_{0}}{12}\mathbf{L}_{\mu}^{2}+\frac{\gamma_{2}+d_{2}(0)}{\Gamma_{0}}-\frac{\gamma_{1}\Gamma_{1}}{\Gamma_{0}^{2}}\Big{]}. \tag{118}\] According to the approach in Ref. [64], \(\zeta_{\mu}^{\rm exact}(\mu,b)\) can be written as \[\zeta_{\mu}^{\rm exact}(\mu,b)=\mu^{2}e^{-g(\mu,b)/\mathcal{D}(\mu,b)}. \tag{119}\] Up to two-loop order, \(g(\mu,b)\) can be written as \[g(\mu,b)=\frac{1}{a_{s}}\frac{\Gamma_{0}}{2\beta_{0}^{2}}\Bigg{\{}e^{-p}-1+p+a_{s}\Big{[}\frac{\beta_{1}}{\beta_{0}}\big{(}e^{-p}-1+p-\frac{p^{2}}{2}\big{)}\] \[-\frac{\Gamma_{1}}{\Gamma_{0}}(e^{-p}-1+p)+\frac{\beta_{0}\gamma_{1}}{\Gamma_{0}}p\Big{]}+a_{s}^{2}\bigg{[}\Big{(}\frac{\Gamma_{1}^{2}}{\Gamma_{0}^{2}}-\frac{\Gamma_{2}}{\Gamma_{0}}\Big{)}(\cosh p-1)\] \[+\Big{(}\frac{\beta_{1}\Gamma_{1}}{\beta_{0}\Gamma_{0}}-\frac{\beta_{2}}{\beta_{0}}\Big{)}(\sinh p-p)\] \[+\Big{(}\frac{\beta_{0}\gamma_{2}}{\Gamma_{0}}-\frac{\beta_{0}\gamma_{1}\Gamma_{1}}{\Gamma_{0}^{2}}\Big{)}(e^{p}-1)\bigg{]}\Bigg{\}}\, \tag{120}\] where \[p=\frac{2\beta_{0}\mathcal{D}(\mu,b)}{\Gamma_{0}}. \tag{121}\]
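A compact numerical sketch of Eq. (112), keeping only the leading \(-\Gamma_{0}/(2\beta_{0})\ln(1-X)\) term of the resummed expression (a truncation for illustration, not the full two-loop result used in the fit):

```python
import numpy as np

# D(mu,b) = D_resum(mu, b*) + d_NP(b), leading resummed term only.
CF, CA, TR, Nf = 4.0 / 3.0, 3.0, 0.5, 4
Gamma0 = 4 * CF
beta0 = 11.0 / 3.0 * CA - 4.0 / 3.0 * TR * Nf
B_NP, c0, gammaE = 1.93, 0.0391, 0.5772156649

def D(mu, b, a_s):
    b_star = b / np.sqrt(1 + b**2 / B_NP**2)          # Eq. (111)
    L_mu = np.log(mu**2 * b_star**2 / (4 * np.exp(-2 * gammaE)))
    X = beta0 * a_s * L_mu
    D_resum = -Gamma0 / (2 * beta0) * np.log(1 - X)   # leading term of Eq. (114)
    return D_resum + c0 * b * b_star                  # plus d_NP(b), Eq. (110)

for b in (0.5, 1.0, 2.0, 4.0):
    print(b, D(mu=2.0, b=b, a_s=0.3 / (4 * np.pi)))
```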
Figure 18: The mean value of the transversity functions for the \(u\) and \(d\) quarks as defined in Eq. (75) at different \(Q^{2}\).

## Appendix B Fourier transforms for PDFs and FFs

The Fourier transforms for PDFs and FFs are \[f_{1}(x,k_{\perp})= \frac{1}{4\pi^{2}}\int e^{i\mathbf{b}\cdot\mathbf{k}_{\perp}}f_{1}(x,b)d^{2}\mathbf{b}\] \[= \frac{1}{2\pi}\int_{0}^{+\infty}J_{0}(bk_{\perp})f_{1}(x,b)bdb, \tag{122}\] \[f_{1}(x,b)= \int e^{-i\mathbf{b}\cdot\mathbf{k}_{\perp}}f_{1}(x,k_{\perp})d^{2}\mathbf{k}_{\perp}\] \[= 2\pi\int_{0}^{+\infty}J_{0}(bk_{\perp})f_{1}(x,k_{\perp})k_{\perp}dk_{\perp}, \tag{12}\] \[h_{1}(x,k_{\perp})= \frac{1}{4\pi^{2}}\int e^{i\mathbf{b}\cdot\mathbf{k}_{\perp}}h_{1}(x,b)d^{2}\mathbf{b}\] \[= \frac{1}{2\pi}\int_{0}^{+\infty}J_{0}(bk_{\perp})h_{1}(x,b)bdb, \tag{13}\] \[h_{1}(x,b)= \int e^{-i\mathbf{b}\cdot\mathbf{k}_{\perp}}h_{1}(x,k_{\perp})d^{2}\mathbf{k}_{\perp}\] \[= 2\pi\int_{0}^{+\infty}J_{0}(bk_{\perp})h_{1}(x,k_{\perp})k_{\perp}dk_{\perp}, \tag{14}\] \[D_{1}(z,zp_{\perp})= \frac{1}{4\pi^{2}}\int e^{-i\mathbf{b}\cdot\mathbf{p}_{\perp}}D_{1}(z,b)d^{2}\mathbf{b}\] \[= \frac{1}{2\pi}\int_{0}^{+\infty}J_{0}(bp_{\perp})D_{1}(z,b)bdb, \tag{15}\] \[D_{1}(z,b)= \int e^{i\mathbf{b}\cdot\mathbf{p}_{\perp}}D_{1}(z,zp_{\perp})d^{2}\mathbf{p}_{\perp}\] \[= 2\pi\int_{0}^{+\infty}J_{0}(bp_{\perp})D_{1}(z,zp_{\perp})p_{\perp}dp_{\perp}, \tag{16}\] \[\frac{\mathbf{p}_{\perp}}{M_{h}}H_{1}^{\perp}(z,zp_{\perp})= \frac{1}{4\pi^{2}}\int e^{-i\mathbf{b}\cdot\mathbf{p}_{\perp}}i\mathbf{b}M_{h}H_{1}^{\perp}(z,b)d^{2}\mathbf{b}\] \[H_{1}^{\perp}(z,zp_{\perp})= \frac{M_{h}^{2}}{2\pi p_{\perp}}\int_{0}^{\infty}J_{1}(bp_{\perp})b^{2}H_{1}^{\perp}(z,b)db, \tag{17}\] \[iM_{h}\mathbf{b}H_{1}^{\perp}(z,b)= \int e^{i\mathbf{b}\cdot\mathbf{p}_{\perp}}\frac{\mathbf{p}_{\perp}}{M_{h}}H_{1}^{\perp}(z,zp_{\perp})d^{2}\mathbf{p}_{\perp}\] \[H_{1}^{\perp}(z,b)= \frac{2\pi}{M_{h}^{2}b}\int_{0}^{\infty}J_{1}(bp_{\perp})p_{\perp}^{2}H_{1}^{\perp}(z,zp_{\perp})dp_{\perp}, \tag{18}\] where the hadron \(h\) and flavor \(q\) dependencies in the TMDs are omitted for convenience in Appendix B, and \(\mathbf{p}_{\perp}\) is the transverse momentum of the final-state quark.
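The \(J_{0}\) transform pair of Eqs. (122) and (12) can be checked numerically with a toy Gaussian input, which maps to a Gaussian and back:

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

# Round-trip check of the J0 Fourier-Bessel pair with a toy b-space Gaussian.
f1_b = lambda b: np.exp(-0.25 * b**2)                   # toy f_1(x, b)

def f1_k(k):   # Eq. (122): f(k) = 1/(2 pi) * int J0(b k) f(b) b db
    return quad(lambda b: j0(b * k) * f1_b(b) * b, 0, 30)[0] / (2 * np.pi)

def f1_b_back(b):   # inverse transform in the style of Eq. (12)
    return 2 * np.pi * quad(lambda k: j0(b * k) * f1_k(k) * k, 0, 10)[0]

for b in (0.3, 1.0, 2.0):
    print(f1_b(b), f1_b_back(b))    # the two columns should agree
```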
## Appendix C Expressions of structure functions

For the SIDIS process we have \[F_{UU,T}=\mathcal{C}[f_{1}D_{1}]\] \[=x\sum_{q}e_{q}^{2}\int d^{2}\mathbf{p}_{T}d^{2}\mathbf{k}_{\perp}\delta^{(2)}(\mathbf{p}_{T}+z\mathbf{k}_{\perp}-\mathbf{P}_{h\perp})\] \[\quad\times f_{1,q\gets N}(x,k_{\perp})D_{1,q\to h}(z,p_{T})\] \[=x\sum_{q}\frac{e_{q}^{2}}{4\pi^{2}}\int d^{2}\mathbf{p}_{\perp}d^{2}\mathbf{k}_{\perp}d^{2}\mathbf{b}e^{i\mathbf{b}\cdot(\mathbf{p}_{\perp}-\mathbf{k}_{\perp}+\mathbf{P}_{h\perp}/z)}\] \[\quad\times f_{1,q\gets N}(x,k_{\perp})D_{1,q\to h}(z,zp_{\perp})\] \[=x\sum_{q}\frac{e_{q}^{2}}{2\pi}\int_{0}^{\infty}bJ_{0}(bP_{h\perp}/z)f_{1,q\gets N}(x,b)D_{1,q\to h}(z,b)db, \tag{19}\] \[F_{UT}^{\sin(\phi_{h}+\phi_{S})}=\mathcal{C}\Big{[}\frac{\hat{\mathbf{h}}\cdot\mathbf{p}_{T}}{zM_{h}}h_{1}H_{1}^{\perp}\Big{]}\] \[=x\sum_{q}e_{q}^{2}\int d^{2}\mathbf{p}_{T}d^{2}\mathbf{k}_{\perp}\delta^{(2)}(\mathbf{p}_{T}+z\mathbf{k}_{\perp}-\mathbf{P}_{h\perp})\] \[\quad\times\frac{\hat{\mathbf{h}}\cdot\mathbf{p}_{T}}{zM_{h}}h_{1,q\gets N}(x,k_{\perp})H_{1,q\to h}^{\perp}(z,p_{T})\] \[=-x\sum_{q}\frac{e_{q}^{2}}{4\pi^{2}}\int d^{2}\mathbf{b}d^{2}\mathbf{p}_{\perp}d^{2}\mathbf{k}_{\perp}e^{i\mathbf{b}\cdot\mathbf{p}_{\perp}}e^{-i\mathbf{b}\cdot\mathbf{k}_{\perp}}e^{i\mathbf{b}\cdot\mathbf{P}_{h\perp}/z}\] \[\quad\times\frac{\hat{\mathbf{h}}\cdot\mathbf{p}_{\perp}}{M_{h}}h_{1,q\gets N}(x,k_{\perp})H_{1,q\to h}^{\perp}(z,zp_{\perp})\] \[=x\sum_{q}e_{q}^{2}\int_{0}^{\infty}\frac{M_{h}}{2\pi}J_{1}(bP_{h\perp}/z)b^{2}\] \[\quad\times h_{1,q\gets N}(x,b)H_{1,q\to h}^{\perp}(z,b)db, \tag{20}\] where one uses the relations \[-\mathbf{p}_{\perp}=\mathbf{p}_{T}/z,\qquad\mathbf{P}_{h\perp}=\mathbf{p}_{T}+z\mathbf{k}_{\perp}. \tag{21}\]
For the SIA process we have \[F_{uu}^{h_{1}h_{2}}=\mathcal{C}[D_{1}D_{1}]\] \[=\sum_{q}e_{q}^{2}\int\frac{d^{2}\mathbf{p}_{1T}}{z_{1}^{2}}\frac{d^{2}\mathbf{p}_{2T}}{z_{2}^{2}}\delta^{(2)}(-\frac{\mathbf{p}_{1T}}{z_{1}}-\frac{\mathbf{p}_{2T}}{z_{2}}+\frac{\mathbf{P}_{h\perp}}{z_{1}})\] \[\quad\times D_{1,q\to h_{1}}(z_{1},p_{1T})D_{1,\bar{q}\to h_{2}}(z_{2},p_{2T})\] \[=\frac{1}{4\pi^{2}}\sum_{q}e_{q}^{2}\int d^{2}\mathbf{p}_{1\perp}d^{2}\mathbf{p}_{2\perp}e^{i\mathbf{b}\cdot(\mathbf{p}_{1\perp}+\mathbf{p}_{2\perp}+\mathbf{P}_{h\perp}/z_{1})}\] \[\quad\times D_{1,q\to h_{1}}(z_{1},zp_{1\perp})D_{1,\bar{q}\to h_{2}}(z_{2},zp_{2\perp})d^{2}\mathbf{b}\] \[=\frac{1}{2\pi}\sum_{q}e_{q}^{2}\int J_{0}(P_{h\perp}b/z_{1})\] \[\quad\times D_{1,q\to h_{1}}(z_{1},b)D_{1,\bar{q}\to h_{2}}(z_{2},b)bdb, \tag{22}\] \[F_{Collins}^{h_{1}h_{2}}=\mathcal{C}[\frac{2(\hat{\mathbf{h}}\cdot\mathbf{p}_{1T})(\hat{\mathbf{h}}\cdot\mathbf{p}_{2T})-\mathbf{p}_{1T}\cdot\mathbf{p}_{2T}}{z_{1}z_{2}M_{h_{1}}M_{h_{2}}}H_{1}^{\perp}H_{1}^{\perp}]\] \[=2F_{col1}^{h_{1}h_{2}}-F_{col2}^{h_{1}h_{2}}\] \[=\frac{M_{h_{1}}M_{h_{2}}}{2\pi}\sum_{q}e_{q}^{2}\int J_{2}(P_{h\perp}b/z_{1})\] \[\quad\times H_{1,q\to h_{1}}^{\perp}(z_{1},b)H_{1,\bar{q}\to h_{2}}^{\perp}(z_{2},b)b^{3}db, \tag{23}\] where \[F_{col1}^{h_{1}h_{2}}=\sum_{q}e_{q}^{2}\int\frac{d^{2}\mathbf{p}_{1T}}{z_{1}^{2}}\frac{d^{2}\mathbf{p}_{2T}}{z_{2}^{2}}\delta^{(2)}(-\frac{\mathbf{p}_{1T}}{z_{1}}-\frac{\mathbf{p}_{2T}}{z_{2}}+\frac{\mathbf{P}_{h\perp}}{z_{1}})\] \[\quad\times\frac{\hat{\mathbf{h}}\cdot\mathbf{p}_{1T}}{z_{1}M_{h_{1}}}\frac{\hat{\mathbf{h}}\cdot\mathbf{p}_{2T}}{z_{2}M_{h_{2}}}H^{\perp}_{1,q\to h_{1}}(z_{1},p_{1T})H^{\perp}_{1,\bar{q}\to h_{2}}(z_{2},p_{2T})\] \[=\frac{1}{4\pi^{2}}\sum_{q}e_{q}^{2}\int d^{2}\mathbf{b}\,d^{2}\mathbf{p}_{1\perp}d^{2}\mathbf{p}_{2\perp}e^{i\mathbf{b}\cdot(\mathbf{p}_{1\perp}+\mathbf{p}_{2\perp}+\mathbf{P}_{h\perp}/z_{1})}\]
\tag{103}\] ## Appendix D Expression of matching functions For TMD PDFs, the coefficient function \(C\) up to NLO is [43] \[C_{f\gets f^{\prime}}(x,b,\mu)=\delta(1-x)\delta_{ff^{ \prime}}\] \[\qquad\qquad+a_{s}(\mu)\Big{(}-\mathbf{L}_{\mu}P^{(1)}_{f\gets f^ {\prime}}+C^{(1,0)}_{f\gets f^{\prime}}\Big{)}\, \tag{104}\] where \[C^{(1,0)}_{q\gets q^{\prime}}(x) =C_{F}\Big{[}2(1-x)-\delta(1-x)\frac{\pi^{2}}{6}\Big{]}\delta_{qq^ {\prime}}, \tag{105}\] \[C^{(1,0)}_{q\gets g}(x) =2x(1-x),\] (106) \[P^{(1)}_{q\gets q^{\prime}}(x) =2C_{F}\Big{[}\frac{2}{(1-x)_{+}}-1-x+\frac{3}{2}\delta(1-x) \Big{]}\delta_{qq^{\prime}},\] (107) \[P^{(1)}_{q\gets g}(x) =1-2x+2x^{2}. \tag{108}\] For TMD FFs, the matching coefficient \(\mathbb{C}\) up to NLO follows the same pattern as in Eq. (104) with the replacement of the PDF DGLAP kernels \(P^{(1)}_{f\gets f^{\prime}}(x)\) by the FF DGLAP kernels [66], \[\mathbb{P}^{(1)}_{q\to q^{\prime}}(z)= \frac{2C_{F}}{z^{2}}\Big{[}\frac{1+z^{2}}{1-z}\Big{)}_{+}\delta_{qq^ {\prime}}, \tag{109}\] \[\mathbb{P}^{(1)}_{q\to g}(z)= \frac{2C_{F}}{z^{2}}\frac{1+(1-z)^{2}}{z}, \tag{110}\] and the replacement of \(C^{(1,0)}_{f\gets f^{\prime}}(x)\) by [43] \[\mathbb{C}^{(1,0)}_{q\to q^{\prime}}(z)= \frac{C_{F}}{z^{2}}\Big{[}2(1-z)+\frac{4(1+z^{2})\ln z}{1-z}\] \[-\delta(1-z)\frac{\pi^{2}}{6}\Big{]}\delta_{qq^{\prime}}, \tag{111}\] \[\mathbb{C}^{(1,0)}_{q\to g}(z)= \frac{2C_{F}}{z^{2}}\Big{[}z+2\big{(}1+(1-z)^{2}\big{)}\frac{\ln z }{z}\Big{]}. \tag{112}\] The "\(+\)"-prescription is defined as \[\int_{x_{0}}^{1}dx\,[g(x)]_{+}f(x)\] \[=\int_{0}^{1}dx\,g(x)[f(x)\Theta(x-x_{0})-f(1)], \tag{113}\] where \(\Theta(x-x_{0})\) is the Heaviside step function. ## Appendix E Transversity function with different target The isospin symmetry is also assumed to relate the transversity function of the neutron and the transversity function of the proton as (\(\mu_{i}\) and \(\zeta_{i}\) dependencies in TMDs are omitted for convenience) \[h_{1,u\gets n}(x,b) =h_{1,d\gets p}(x,b),\] \[h_{1,\bar{u}\gets n}(x,b) =h_{1,\bar{d}\gets p}(x,b),\] \[h_{1,d\gets n}(x,b) =h_{1,u\gets p}(x,b),\] \[h_{1,\bar{d}\gets n}(x,b) =h_{1,\bar{u}\gets p}(x,b),\] \[h_{1,s\gets n}(x,b) =h_{1,s\gets p}(x,b),\] \[h_{1,\bar{s}\gets n}(x,b) =h_{1,\bar{s}\gets p}(x,b). \tag{114}\] Since a free neutron target is not available for SIDIS experiments, the polarized deuteron and polarized \({}^{3}\)He are commonly used to obtain parton distributions in the neutron. As an approximation, the transversity functions of the deuteron and the \({}^{3}\)He are set via the weighted combination of the proton transversity function and the neutron transversity function. For a deuteron, the transversity function is expressed as \[h_{1,qt\gets d}(x,b)=\frac{P^{n}_{d}h_{1,qt\gets n}(x,b)+P^{p}_{d}h_{1,qt \gets p}(x,b)}{2}, \tag{115}\] where \(P^{n}_{d}=P^{p}_{d}=0.925\) are effective polarizations of the neutron and the proton in a polarized deuteron [67]. Similarly, the transversity function of a \({}^{3}\)He is \[h_{1,qt\gets 3{\rm He}}(x,b)=\frac{P^{n}_{\rm He}h_{1,qt\gets n}(x,b)+2P^{p}_{ \rm He}h_{1,qt\gets p}(x,b)}{3}, \tag{116}\] where \(P_{\rm He}^{n}=0.86\) and \(P_{\rm He}^{p}=-0.028\) are effective polarizations of the neutron and the proton in a polarized \({}^{3}\)He [68]. This parametrization setup is applied for both the fit to world SIDIS data and the fit to EicC pseudodata.
2308.14466
Improving the performance of object detection by preserving label distribution
Object detection is a task that performs position identification and label classification of objects in images or videos. The information obtained through this process plays an essential role in various tasks in the field of computer vision. In object detection, the data utilized for training and validation typically originate from public datasets that are well-balanced in terms of the number of objects ascribed to each class in an image. However, in real-world scenarios, handling datasets with much greater class imbalance, i.e., very different numbers of objects for each class, is much more common, and this imbalance may reduce the performance of object detection when predicting unseen test images. In our study, thus, we propose a method that evenly distributes the classes in an image for training and validation, solving the class imbalance problem in object detection. Our proposed method aims to maintain a uniform class distribution through multi-label stratification. We tested our proposed method not only on public datasets that typically exhibit balanced class distribution but also on custom datasets that may have imbalanced class distribution. We found that our proposed method was more effective on datasets containing severe imbalance and less data. Our findings indicate that the proposed method can be effectively used on datasets with substantially imbalanced class distribution.
Heewon Lee, Sangtae Ahn
2023-08-28T10:04:06Z
http://arxiv.org/abs/2308.14466v1
# Improving the performance of object detection by preserving label distribution

###### Abstract

Object detection is a task that performs position identification and label classification of objects in images or videos. The information obtained through this process plays an essential role in various tasks in the field of computer vision. In object detection, the data utilized for training and validation typically originate from public datasets that are well-balanced in terms of the number of objects ascribed to each class in an image. However, in real-world scenarios, handling datasets with much greater class imbalance, i.e., very different numbers of objects for each class, is much more common, and this imbalance may reduce the performance of object detection when predicting unseen test images. Thus, in our study, we propose a method that evenly distributes the classes in an image for training and validation, solving the class imbalance problem in object detection. Our proposed method aims to maintain a uniform class distribution through multi-label stratification. We tested our proposed method not only on public datasets that typically exhibit balanced class distribution but also on custom datasets that may have imbalanced class distribution. We found that our proposed method was more effective on datasets containing severe imbalance and less data. Our findings indicate that the proposed method can be effectively used on datasets with substantially imbalanced class distribution.

Keywords: object detection; imbalanced class distribution

## 1 Introduction

Computer vision is a field of artificial intelligence that enables machines to understand and interpret visual information [1]. Computer vision involves the recognition and extraction of patterns from images or videos, and it has been applied in various fields [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. Examples of its usage include object detection [18], image classification [19], face recognition [20], or image generation [21]. Among these, object detection is an important task in the field of computer vision, which aims to identify and localize multiple objects in images or videos. To develop a model for object detection, the first step is to collect data and then appropriately split them into training and validation datasets. When splitting the original data, maintaining a similar class distribution between the training and validation datasets is crucial because datasets for object detection are presented as multi-label problems in that multiple objects can be present in a single image. In particular, the number of objects for each label could be imbalanced (Figure 1). Maintaining a similar class balance when dividing the dataset could be difficult. For image classification problems, a strategy called stratification can be used to maintain similar class distributions. However, it is unknown how stratification is applied to an object detection problem and how stratification influences the performance of object detection. Here, we present a study that proposes a new method called stratification for object detection (SOD), which applies stratification to the object detection problem. This method uses a multi-label stratification technique to preprocess labeled data in object detection and then applies stratification. This method facilitates the preservation of the class distribution among the split datasets, yielding better performance on public and custom datasets.
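To make the notion of stratification concrete before the method is developed, the sketch below contrasts a plain random split with a stratified split on a toy imbalanced single-label problem; it assumes scikit-learn is available. Object detection requires the multi-label extension introduced later, since each image carries several labels at once.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy imbalanced single-label dataset: 95% class 0, 5% class 1.
rng = np.random.default_rng(0)
y = rng.choice([0, 1], size=1000, p=[0.95, 0.05])
X = rng.normal(size=(1000, 8))

# Plain random split: the rare-class ratio in train/val can drift.
_, _, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
print("random    :", y_tr.mean(), y_va.mean())

# Stratified split: both subsets keep the original 5% class-1 ratio.
_, _, y_tr, y_va = train_test_split(X, y, test_size=0.2,
                                    stratify=y, random_state=0)
print("stratified:", y_tr.mean(), y_va.mean())
```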
Our proposed method can solve the problem of class imbalance better than existing methods, improving the performance of trained models. Our method was applied to object detection problems, particularly implemented in the YOLO format because the YOLO algorithm is a widely used platform in object detection studies. Therefore, our proposed method is expected to be a useful tool that solves an important problem in the field of object detection and guarantees improved performance. Furthermore, it can contribute to improving the accuracy and reliability of the object detection model. The main contributions of this research are as follows:

* We propose a method for preserving class distribution in object detection tasks.
* We experimentally demonstrate the effectiveness of our proposed method on both public datasets and custom datasets.

## 2 Related Work

### Real-time object detection

Object detection is a computer vision task that involves detecting multiple objects in images or videos and classifying their positions and types. It is a key technological tool that enables computers to understand the real world and has been applied in various fields such as autonomous driving[22], surveillance systems[23], robotics[24], augmented reality[25], and medical image analysis[26]. Fundamentally, object recognition involves representing specific objects in an image as rectangular bounding boxes and classifying them into different classes. While there are various algorithms for this task, we mainly employ the YOLO algorithm in this study. YOLO[27], first proposed in 2015, is an algorithm that enables real-time object recognition. YOLO divides an image into a grid and applies a method to simultaneously predict bounding boxes and class probabilities for each grid cell. Unlike two-stage detectors[28], YOLO adopts a one-stage detector approach, whereby it considers the entire image at once and performs predictions. This unique feature of YOLO allows for real-time processing. Since 2015, YOLO has undergone many improvements. In YOLOv2[29], the first update of YOLO, the ability to detect objects of various sizes was enhanced. The concept of anchor boxes[30] was introduced in YOLOv2, and multi-scale training methods were used to train on images with different resolutions, resulting in improved detection of objects of different sizes. Additionally, YOLOv2 utilized the WordTree model to jointly train on the MSCOCO and ImageNet datasets, enabling the detection of over 9,000 different classes. YOLOv3[31] improved the detection of objects of various sizes by predicting bounding boxes in three different-sized feature maps. YOLOv3 also introduced a method to predict multi-labels for each box, facilitating the handling of more complex classification problems.

Figure 1: Each image has a different number of bounding boxes for each class. For example, the number of bounding boxes for a person may be high while the number of bounding boxes for a chair may be low (left) or vice versa (right). Images are taken from the Pascal Visual Object Classes (VOC) 2012 dataset.

In YOLOv4[32], several optimizations were introduced to improve both performance and speed over previous versions, namely, a new backbone network called CSPDarknet53, a new neck structure using PANet and SAM blocks, and the Mish activation function. These new structures and features made YOLOv4 more efficient and capable of accurately recognizing objects.
Training speed, inference time, and model size were all improved in YOLOv5[33], while user-friendly features were added using the PyTorch framework. This enabled faster and more efficient object recognition, expanding the practical application of object detection. Recently, the trainable bag-of-freebies method was introduced in YOLOv7[34; 35] to significantly improve detection accuracy without increasing the inference cost. In this version, methods were also proposed to address issues arising from re-parameterized modules replacing original modules and apply dynamic label assignment strategies for different output layer assignments. Furthermore, extended and compound scaling methods were formulated to effectively utilize parameters and computations, reduce parameters and computational cost, and improve inference speed and accuracy. In this manner, YOLO-based models have continued to evolve and become the standard in real-time object detection. By examining these research trends, we can observe that object detection algorithms are steadily advancing to become faster, more accurate, and more applicable in diverse environments.

### Stratification

Studies applying stratification to datasets have emphasized the significance of the class distribution of the dataset, which is particularly important in the treatment of classification problems in the field of machine learning[36]. Stratification is a type of data sampling technique that seeks to maintain the class distribution of the entire dataset in the training and validation sub-datasets. The key advantage of this technique is that it ensures that all classes are represented in the training and validation subsets as they are over the entire dataset, allowing the model to learn each class fairly. The utilization of stratification in deep learning is mainly focused on addressing class imbalance issues[37]. In scenarios in which the number of samples for a specific class is significantly larger than that for other classes, i.e., class imbalance, the performance of the model can be excessively influenced by the dominant class. Stratification can alleviate such problems and ensure that all classes are fairly represented. Furthermore, applying stratification to cross-validation is a highly effective strategy. Cross-validation[38] is a technique used to evaluate the generalization performance of a model by dividing the data into multiple folds and using each fold alternately as a training set and a validation set. By applying stratification to this process, the class distribution of the data in each fold can accurately reflect the distribution of the entire dataset. This can help verify whether each model performs equally well for all classes. Various methods for applying stratification in deep learning have been utilized, and they can be adjusted depending on the characteristics of a specific problem and dataset. Stratification may be necessary in some cases but non-essential in others. However, in general, stratification is recognized as an important step for improving model training and enhancing generalization performance.

### Multi-label stratification

Multi-label classification[39], unlike conventional classification, allows each data point to be classified with multiple labels simultaneously. Stratified sampling is a technique that selects samples evenly from each category, ensuring the representativeness of the entire dataset while maintaining the importance of each category.
However, this technique is only applicable to single-label classification problems. Multi-label stratification extends this technique to problems in which each data point can have multiple labels. This ensures that each label combination is evenly distributed in the training and validation datasets, allowing the model to optimize generalizability for each label combination during training. There are various use cases for multi-label stratification. For example, in music classification[40], as a song can belong to multiple genres, the corresponding training and validation datasets should represent each genre combination correctly to achieve accurate model learning. Multi-label stratification can be used to meet these requirements. Similarly, in medical imaging[41], an image can display multiple diseases. In this case, the combinations involving each disease label need to be well represented in the training and validation datasets, and multi-label stratification can be used for this purpose. Additionally, in text classification[42], text data such as news articles, research papers, and blog posts can be simultaneously classified into multiple topics or categories. In this case, multi-label stratification can be applied to ensure that combinations involving each topic are evenly distributed in the training and validation datasets. In summary, multi-label stratification plays an important role in enabling more accurate model training in various fields to address real-world problems effectively.

## 3 Proposed algorithm

In this section, we introduce a new algorithm that applies stratification to object detection datasets using the YOLO format. The main goal is to improve existing dataset partitioning methods, enabling a more precise and fairer training process. The detailed operation of the algorithm is as follows. This algorithm requires three inputs: the paths to the folders where the image files and text files are stored, denoted as \(F_{img}\) and \(F_{txt}\), respectively, and the number of subsets within the dataset to be created, denoted as \(k\). The algorithm operates as follows (a compact code sketch is given further below). Lines 1 to 5 generate a list using the names of the image files. Similarly, lines 6 to 10 construct a list using the names of the text files. Lines 11 to 27 convert the label data from the text files into Dataframe format. In this process, if the dataset contains background images to prevent false positives, there may be no corresponding text file for that image, or the text file may be empty. To account for such background images in the partitioning process, we insert -1 in the class column of the Dataframe. Once this step is completed, the Dataframe contains the paths of the text files, number of objects per class, and the x and y coordinates, width (w), and height (h) of each object. Lines 29 to 34 are preprocessing steps needed before the multi-label stratified KFold procedure is performed. In this process, we perform one-hot encoding on the classes and concatenate the original Dataframe with the one-hot encoded Dataframe. We then multiply the \(w\) and \(h\) values by 1000 to prevent loss of values when calculating their averages. Next, we remove the "class", "x", and "y" columns from the Dataframe and group them based on filename. This creates a Dataframe that indicates how many objects of each class exist within each text file.
For background images, the total object count is 0; thus, we change it to 1 to avoid division by 0 when calculating the averages of \(w\) and \(h\). We then calculate the average \(w\), the average \(h\), and the ratio of \(h\) to \(w\). Then, we remove the "w" and "h" columns to improve computational efficiency during the partitioning process. Finally, lines 35 to 38 perform the multi-label stratified KFold procedure, dividing the dataset into training and validation sets; a code sketch of this procedure is shown below. We now discuss the selection of the labels used for the Dataframe in the multi-label stratified KFold process: \(class_{0}\sim class_{n}\), \(avg\_w\), \(avg\_h\), and \(avg\_ratio\). \(class_{0}\sim class_{n}\) indicates the presence and quantity of each class within an image. This parameter serves to ensure diversity in the training data by considering the various class labels within the dataset. We also selected average width (\(avg\_w\)), average height (\(avg\_h\)), and average ratio (\(avg\_ratio\)) as labels for the following reason. Object detection models such as Faster R-CNN[43] and YOLO use predefined anchor boxes with specific aspect ratios. Anchor boxes are one of the elements used in object detection, in which frames with specific positions and sizes are set within an image. This allows the model to detect objects of various sizes and shapes, resulting in improved accuracy. If the aspect ratio distribution of bounding boxes in the training set and validation set differs, model performance can be negatively affected. If the aspect ratio distribution is not aligned, it becomes difficult to appropriately set the position and size of the anchor boxes, which ultimately affects the accuracy of object detection. Therefore, by selecting the average width (\(avg\_w\)), average height (\(avg\_h\)), and average ratio (\(avg\_ratio\)) as labels, we aim to characterize the aspect ratio distribution of the bounding boxes in each dataset and optimize their size and aspect ratios, improving model performance. This allows the model to effectively detect objects with various aspect ratios.

## 4 Experiments

### Datasets

We present the results of experiments conducted on public datasets to demonstrate the efficiency of the proposed algorithm. The datasets are used for object detection purposes, with each image indicating the location and type of object through a bounding box. Across the datasets, the number of samples is generally far greater than the number of classes, which enhances the training process by providing a wide range of samples for each class. In addition, we selected a dataset that includes background images to minimize the phenomenon of background false positives. This is crucial in preventing errors in object detection owing to features in the background. We used entropy (Equation 1) to assess the distribution of each dataset. Higher entropy corresponds to higher uncertainty in the data and vice versa. Thus, entropy can be used as a measure of how well the classes in the data are distributed. In the equation, \(p_{i}\) represents the occurrence probability of each class over the entire dataset. \[Entropy=-\sum_{i=1}^{n}p_{i}\log(p_{i}) \tag{1}\] Table 1 shows a detailed analysis of the datasets. In datasets with a large number of classes, entropy tends to be relatively high. If the class label distribution of a dataset is diverse or uncertain, the stratification process may not be as effective.
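A minimal sketch of the splitting procedure described in Section 3 is given below. It assumes the `iterative-stratification` package for `MultilabelStratifiedKFold` and, following the text, passes the per-class counts together with `avg_w`, `avg_h`, and `avg_ratio` as the multi-label targets; the file layout and column names are illustrative, and the handling of background images is simplified (zero-filled rows rather than a -1 class).

```python
import glob
import os
import numpy as np
import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold  # pip install iterative-stratification

def build_label_matrix(txt_dir, n_classes):
    """One row per image: per-class object counts plus avg_w, avg_h and
    avg_ratio, mirroring the DataFrame built in Section 3."""
    rows = []
    for path in sorted(glob.glob(os.path.join(txt_dir, "*.txt"))):
        counts, ws, hs = np.zeros(n_classes), [], []
        with open(path) as f:
            for line in f:
                parts = line.split()            # YOLO format: cls x y w h
                if len(parts) != 5:
                    continue
                c, _, _, w, h = map(float, parts)
                counts[int(c)] += 1
                ws.append(w)
                hs.append(h)
        avg_w = 1000 * np.mean(ws) if ws else 0.0   # scaled by 1000 as in the text
        avg_h = 1000 * np.mean(hs) if hs else 0.0
        rows.append([path, *counts, avg_w, avg_h,
                     avg_h / avg_w if avg_w else 0.0])
    cols = (["file"] + [f"class_{i}" for i in range(n_classes)]
            + ["avg_w", "avg_h", "avg_ratio"])
    return pd.DataFrame(rows, columns=cols)

def sod_split(df, k=10, seed=0):
    """Multi-label stratified K-fold over the per-image label matrix."""
    y = df.drop(columns="file").to_numpy()
    mskf = MultilabelStratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    return list(mskf.split(df["file"], y))
```

Each returned `(train_idx, val_idx)` pair indexes the per-image rows, so the corresponding image and label files can be copied into fold directories.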
As stratification involves extracting samples while maintaining the class ratio of the original dataset, the proposed algorithm is recommended for use in scenarios in which the number of classes is less than or equal to 20.

\begin{table} \begin{tabular}{l l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Category**} & \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Classes**} & \multirow{2}{*}{**Samples**} & \multirow{2}{*}{**Samples per Class**} & \multicolumn{3}{c}{**Samples per Image**} & \multicolumn{3}{c}{**Classes per Image**} & \multirow{2}{*}{**Entropy**} \\ \cline{6-11} & & & & & Min & Avg & Max & Min & Avg & Max & \\ \hline Public & COCO val2017[44] & 80 & 5000 & 62.5 & 0 & 7.9 & 96 & 0 & 2.9 & 14 & 3.39 \\ & Pascal VOC 2012 val[45] & 20 & 3422 & 171.1 & 0 & 2.3 & 23 & 0 & 1.4 & 5 & 2.31 \\ & PlantDoc[46] & 30 & 2569 & 85.6 & 0 & 3.4 & 42 & 0 & 1 & 3 & 3.17 \\ \hline Private & Website screenshot & 8 & 1206 & 150.8 & 2 & 45 & 2023 & 2 & 5.3 & 8 & 1.61 \\ & Aquarium & 7 & 638 & 91.1 & 0 & 7.6 & 56 & 0 & 1.4 & 3 & 1.42 \\ & BCCD & 3 & 364 & 121.3 & 1 & 13.4 & 30 & 1 & 2.5 & 3 & 0.53 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistical analysis of datasets for object detection. Public datasets: COCO val2017, Pascal VOC 2012 val, and PlantDoc. Private datasets[47]: Website screenshot, Aquarium, and BCCD.

### Class distribution

We applied 10-fold cross-validation to the datasets and compared the class distribution of the original dataset with that of the split datasets. To quantify the differences in distribution between them, we used the mean absolute error (MAE) (Equation 2). This calculation was used to measure the difference in class ratios between the original dataset and split dataset. Through this, we quantitatively evaluated how accurately the split dataset reflects the class distribution of the original dataset. In the MAE metric below, \(y_{i}\) represents the class ratio of the original dataset, \(\hat{y}_{i}\) represents the class ratio of the divided dataset, and \(n\) represents the number of classes in the existing dataset. \[MAE=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y}_{i}| \tag{2}\] We compared the MAE of the traditional KFold cross-validation method and that of our proposed algorithm. As shown in Table 2, in the majority of datasets in which our proposed algorithm was applied, the median MAE was lower compared to that of the traditional KFold cross-validation. This demonstrates that our proposed algorithm was more effective in preserving the class ratios than the traditional method. However, for some datasets with an entropy of 2 or higher, we found that KFold preserves the class distribution better than our proposed algorithm. This indicates that such a phenomenon occurs when there is high uncertainty in the label distribution. In contrast, for datasets with an entropy below 2, our algorithm consistently showed lower median MAE and variance values compared to KFold in all cases. These results demonstrate that our algorithm provides a more stable and consistent performance. Based on the experimental results, our proposed algorithm has been confirmed to effectively preserve class ratios while reducing variance, surpassing the commonly used KFold method.

### Training and testing

To analyze the difference in performance between KFold and our proposed algorithm, we applied 10-fold cross-validation using the two methods in question to split the dataset.
The dataset used in our study was divided according to the following procedure (Figure 2):

1. The original dataset was split into 10 folds using our proposed algorithm.
2. The last fold (10th fold) was set as the test dataset.
3. The remaining 9 folds (k-1 folds) were combined and further split into 9 folds using KFold.
4. Similarly, the k-1 folds were split into 9 folds using our proposed algorithm.
5. Training was performed iteratively for each of the 9 fold datasets.
6. The trained models were used for inference on the fixed test dataset.
7. Finally, the inference results using KFold and our proposed algorithm were compared using MAE.

The training was conducted with a 320-pixel image size using the base model of YOLOv7 on a computing system consisting of three NVIDIA RTX 3090 Ti GPUs.

\begin{table} \begin{tabular}{l|l|c|c c|c c} \hline \multirow{2}{*}{**Category**} & \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Entropy**} & \multicolumn{2}{c|}{**Train**} & \multicolumn{2}{c}{**Validation**} \\ \cline{4-7} & & & **KFold** & **Ours** & **KFold** & **Ours** \\ \hline \multirow{3}{*}{Public} & COCO val2017 & 3.39 & **165\(\pm\)127** & 168\(\pm\)128 & **1466.5\(\pm\)1126.5** & 1506.5\(\pm\)1144.5 \\ & Pascal VOC 2012 val & 2.31 & **591.5\(\pm\)408.5** & 618\(\pm\)391 & 5299\(\pm\)3712 & **5279\(\pm\)3330** \\ & PlantDoc & 3.17 & 463.5\(\pm\)321.5 & **301.5\(\pm\)255.5** & 4097\(\pm\)2803 & **2614.5\(\pm\)2205.5** \\ \hline \multirow{3}{*}{Private} & Website screenshot & 1.61 & 4864\(\pm\)3880 & **4092.5\(\pm\)3428.5** & 42897\(\pm\)34009 & **35538.5\(\pm\)29582.5** \\ & Aquarium & 1.42 & 5172.5\(\pm\)4075.5 & **3943\(\pm\)2202** & 44013.5\(\pm\)33452.5 & **38541\(\pm\)22795** \\ & BCCD & 0.53 & 1973.5\(\pm\)1417.5 & **1224\(\pm\)862** & 17683.5\(\pm\)12738.5 & **11459\(\pm\)8186** \\ \hline \end{tabular} \end{table} Table 2: Statistical characteristics of the subsets generated through 10-fold cross-validation. A comparison between KFold and our proposed algorithm is presented, listing the name of each dataset, entropy of each dataset, class distribution of the training set (9 folds), and class distribution of the validation set (1 fold). The unit of the MAE is 1e-7.

Figure 2: 10-fold cross-validation split method. The yellow box represents the fold used for validation, and the green boxes represent the folds used for training. When splitting the Train/Validation data and Test data, YOLOstratifiedKFold is used, and when dividing the data onto folds, either KFold or YOLOstratifiedKFold is used.

For datasets in which the entropy value is 2 or less, training on the dataset split with our proposed method consistently showed higher performance in the \(mAP\_0.5\) metric compared to when the KFold method was used, as illustrated in Table 3. In the custom datasets, which contain relatively few classes, the class distribution of the datasets split with our proposed method was more similar to the original class distribution (refer to Table 2) than for the public datasets, and we confirmed that the performance of our model was also higher than that of the original KFold method. These results suggest that in training with data that exhibit relatively low complexity, i.e., a low number of classes, our method can achieve higher performance based on the \(mAP\_0.5\) metric. In contrast, it was difficult to confirm a significant improvement in the \(mAP\_0.5:0.95\) metric from the training results.
For our method, the datasets demonstrating better performance in \(mAP\_0.5:0.95\) had less than 100 samples per class. These results suggest that our method works effectively even with small datasets, demonstrating good performance even for low numbers of samples per class. This implies that when the dataset is split using our method, the model can recognize the rough location of the object more accurately; however, the splitting process does not reflect the detailed shape and exact location of the object. One possible way to solve this problem is to perform stratification by referring to the features within the bounding box in the image; however, this method significantly increases the computational load required for stratification, which could greatly increase the amount of time to perform the routine. Therefore, in this study, we minimized the time burden and verified the performance of the model by excluding image references and only using label data stored in text files.

### Inference results

In Figure 3, the inference results for the public dataset are presented. When comparing KFold and our method on the COCO dataset, the classification error for our method is less than that using KFold. While the tennis racket located in the center of the image is not accurately classified with KFold, it is correctly classified using our method. This demonstrates that using our method to split the dataset results in a smaller classification error as the class ratio is maintained throughout the dataset partitioning. A comparison conducted on the Pascal VOC 2012 dataset shows that the bounding box appearing in the upper left of the image is drawn incorrectly when using KFold but is drawn closer to the correct answer when using our method. This is also due to the reduction in localization error[48] as the aspect ratio of the bounding box is maintained with the split of the dataset. For the PlantDoc dataset, using KFold resulted in two bounding boxes with different classes drawn in a similar location even though there is only one ground truth, leading to multiple detections. This occurs because the class ratios between the subsets are not maintained in the random splitting of the dataset with this method. Conversely, this phenomenon did not appear using our method. In Figure 4, the inference results on the private dataset are shown. In the Website screenshot dataset, there is a True Negative[49] owing to the complexity of features within a class, but our method shows a higher IoU than does KFold. In this case, the localization error is reduced by keeping the aspect ratio of the bounding box similar before and after splitting the dataset.
In the Aquarium dataset, the True Negative phenomenon of no detection occurred when using KFold; however, this phenomenon was reduced when using our method. In the BCCD dataset, by optimizing the anchor box considering the class ratio, width, height, and ratio of the object, both KFold and our method achieved results closer to the ground truth of the bounding box and realized a higher IoU.

\begin{table} \begin{tabular}{l|l|c|c|c|c|c} \hline \multirow{2}{*}{**Category**} & \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Entropy**} & \multicolumn{2}{c|}{**mAP\_0.5**} & \multicolumn{2}{c}{**mAP\_0.5:0.95**} \\ \cline{4-7} & & & **KFold** & **Ours** & **KFold** & **Ours** \\ \hline \multirow{3}{*}{Public} & COCO val2017 & 3.39 & 23.50\%\(\pm\)0.80\% & **23.75\%\(\pm\)1.25\%** & 14.00\%\(\pm\)0.80\% & **14.10\%\(\pm\)0.70\%** \\ & Pascal VOC 2012 val & 2.31 & **46.05\%\(\pm\)2.25\%** & 44.95\%\(\pm\)2.85\% & **27.95\%\(\pm\)1.05\%** & 27.45\%\(\pm\)1.55\% \\ & PlantDoc & 3.17 & **48.80\%\(\pm\)2.60\%** & 48.20\%\(\pm\)2.00\% & 32.90\%\(\pm\)1.90\% & **33.30\%\(\pm\)1.30\%** \\ \hline \multirow{3}{*}{Private} & Website screenshot & 1.61 & 45.85\%\(\pm\)1.65\% & **46.10\%\(\pm\)1.50\%** & **28.00\%\(\pm\)1.20\%** & 27.75\%\(\pm\)1.45\% \\ & Aquarium & 1.42 & 51.60\%\(\pm\)3.20\% & **52.25\%\(\pm\)4.25\%** & 23.10\%\(\pm\)2.10\% & **23.25\%\(\pm\)2.25\%** \\ & BCCD & 0.53 & 87.90\%\(\pm\)1.70\% & **88.55\%\(\pm\)1.35\%** & **55.55\%\(\pm\)1.25\%** & 55.45\%\(\pm\)1.55\% \\ \hline \end{tabular} \end{table} Table 3: Results of training by 10-fold cross-validation.

Figure 3: Comparing KFold to our proposed method on the public dataset.

Figure 4: Comparing KFold to our proposed method on the private dataset.

## 5 Discussion & Conclusion

This study focuses on the importance of applying stratification in the preprocessing stage of object detection tasks. This is the first study, to our knowledge, in the field of object detection that proposes a stratification method in dataset splitting, demonstrating the effect of alleviating class imbalance in the generation of training and validation sets. Additionally, stratification has been demonstrated to enhance the performance of an object detection model for a 2D image object detection dataset. However, stratification did not yield the best results under all experimental conditions. For high-dimensional datasets with a large number of classes, the application of stratification was ineffective, indicating that it may not be suitable for all types of datasets. Nevertheless, research involving classification tasks has shown that class balancing is essential and can significantly contribute to performance improvement for unbalanced datasets. This aspect is applicable to the object detection task in this study, emphasizing the need for an integrated approach to apply stratification in classification and object detection tasks. The results of this study confirm that the application of stratification must be carefully performed based on the characteristics of the dataset and distribution of the detection targets. As the features present in image data can influence the effect of stratification, further research is needed to improve and optimize the stratification method for use on images. Based on the results of this study, greater attention toward stratification utilization and its role in enhancing the performance of object detection models is expected.
Author Contributions: Conceptualization, H.L., S.A.; methodology, H.L.; software, H.L.; validation, H.L., S.A.; formal analysis, H.L.; data curation, H.L.; writing--original draft preparation, H.L.; writing--review and editing, H.L., S.A.; visualization, H.L.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript. Not applicable. Data Availability Statement: Datasets used in this study are publicly available on the web ([https://public.roobflow.com/](https://public.roobflow.com/)). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2022R1A4A1023248). The authors declare no conflict of interest.
2301.09882
Effect of mesonic off-shell correlations in the PNJL equation of state
We study the meson contribution to the equation of state of the 2-flavor PNJL model, including the full momentum dependence of the meson polarization loops. Within the Beth-Uhlenbeck approach, we demonstrate that the contribution from the quark-antiquark continuum excitations in the spacelike region $\omega^2 - q^2 < 0$, i.e. the Landau damping, leads to an increase of the pressure for temperatures $\gtrsim 0.8\,T_c^\chi$ and a significant meson momentum cut-off dependence in the mesonic pressure and the QCD trace anomaly. We investigate the dependence of the results on the choice of the Polyakov-loop potential parameter $T_0$. From the dependence of the mesonic pressure on the current quark mass, by means of the Feynman-Hellmann theorem, we evaluate the contribution of the pion quasiparticle gas and Landau damping to the chiral condensate.
Konstantin Maslov, David Blaschke
2023-01-24T09:40:29Z
http://arxiv.org/abs/2301.09882v1
# Effect of mesonic off-shell correlations in the PNJL equation of state

###### Abstract

We study the meson contribution to the equation of state of the 2-flavor PNJL model, including the full momentum dependence of the meson polarization loops. Within the Beth-Uhlenbeck approach, we demonstrate that the contribution from the quark-antiquark continuum excitations in the spacelike region \(\omega^{2}-q^{2}<0\), i.e. the Landau damping, leads to an increase of the pressure for temperatures \(\gtrsim 0.8\,T_{c}^{\chi}\) and a significant meson momentum cut-off dependence in the mesonic pressure and the QCD trace anomaly. We investigate the dependence of the results on the choice of the Polyakov-loop potential parameter \(T_{0}\). From the dependence of the mesonic pressure on the current quark mass, by means of the Feynman-Hellmann theorem, we evaluate the contribution of the pion quasiparticle gas and Landau damping to the chiral condensate.

## I Introduction

Elucidating the structure of the phase diagram of quantum chromodynamics (QCD) is one of the challenging problems in particle physics. Its experimental exploration is presently performed at large existing heavy-ion collision (HIC) facilities, such as the Relativistic Heavy-Ion Collider (RHIC) at BNL Brookhaven, the Large Hadron Collider (LHC) and the Super Proton Synchrotron (SPS) at CERN Geneva. It will in the future be continued with dedicated experiments at the Facility for Antiproton and Ion Research (FAIR) at GSI Darmstadt, the Nuclotron-based Ion Collider fAcility (NICA) at JINR Dubna and the Japan Proton Accelerator Research Complex (J-PARC) at the Tokai campus of JAEA. In nature, the QCD transition from a quark-gluon plasma (QGP) to a phase of confined quarks and gluons, a gas or liquid of hadronic resonances, is realized in the Early Universe and may also occur in the interiors of neutron stars, their mergers and supernova explosions, being subject to observations by multi-messenger astronomy. While at high energies QCD is in the regime of asymptotic freedom, allowing one to apply the well-developed methods of perturbation theory, a theoretical description of the hadronization transition with its aspects of dynamical chiral symmetry breaking and confinement (at low temperatures also color superconductivity) is a notoriously difficult task. It concerns the challenging domain of low energy QCD, where nonperturbative methods must be developed and applied. A first principles approach to describe the QCD transition at finite temperatures uses Monte Carlo simulations of the QCD partition function formulated in lattice quantum field theory on a discretized Euclidean space-time. The \(\mu_{B}=0\) sector of the QCD phase diagram is well explored within lattice QCD, and a crossover transition with a pseudocritical temperature \(T_{c}^{\rm LQCD}=156.5\pm 1.5\) MeV is obtained [1]. This method, however, is inapplicable at large values of the baryon chemical potential \(\mu_{B}\) because of the yet unsolved sign problem. At low temperatures and densities \(n\simeq(1-2)\,n_{0}\), where \(n_{0}\simeq 0.16\,\rm fm^{-3}\) is the nuclear saturation density, the chiral effective field theory approach for the description of hadronic degrees of freedom became a standard tool to describe nuclear phenomena with theoretically controlled uncertainties [2]. However, these uncertainties become larger at larger densities, which renders it unsuitable for a description of dense matter, e.g. in neutron star interiors.
Additional complications arise while extending this approach to large temperatures. With an increase of the density the QCD vacuum can change from a state with massive constituent quarks to one with light current quarks, referred to as the chiral symmetry restoration transition. Another phase transition, relevant especially for astrophysical applications, is the appearance of color superconductivity at large densities, which is related to the appearance of diquark condensates. This region of the QCD phase diagram is beyond the scope of the current work. The first step in theoretically assessing the hadronization of quark and gluon degrees of freedom in a thermal quantum field theory is to define an effective Lagrangian in the quark sector of QCD that shares the essential symmetries with the QCD Lagrangian but is formulated in terms of quark currents that should emerge from "integrating out" the dynamical gluon degrees of freedom. When this current-current form of the interaction Lagrangian is symmetric under chiral rotation of the quark fields one arrives at a version of the Nambu-Jona-Lasinio (NJL) model of low-energy QCD, see [3; 4; 5; 6] for standard reviews. A modern refinement of the model takes into account the coupling of quark Dirac-spinors to a gluon background field in the Polyakov gauge [7; 8; 9; 10; 11], which is called the Polyakov-loop improved NJL model or, in short, PNJL model. The traced Polyakov-loop \(\Phi\) and its conjugate \(\bar{\Phi}\) now appear as additional order parameters in the model which vanish in the confined phase due to the Z(3) center symmetry of SU(3) color and signal deconfinement when they approach unity under broken center symmetry. The thermodynamics of this PNJL model for the QGP is generally considered at the mean-field level, but the parameters of the model can be tuned so as to reproduce the pion mass \(m_{\pi}=140\) MeV and pion decay constant \(f_{\pi}=93\) MeV as well as the chiral light quark condensate \(\langle\bar{u}u\rangle^{1/3}=-240\pm 20\) MeV [6] (or a suitably chosen constituent quark mass) in the vacuum, which satisfies the low-energy QCD theorems, the Gell-Mann-Oakes-Renner [12] and the Goldberger-Treiman [13] relations. To construct the pion simultaneously as the Goldstone boson of the broken chiral symmetry and a pseudoscalar bound state in the pseudoscalar quark-antiquark channel in the vacuum, one quantizes the Gaussian fluctuations around the mean-field (MF) solution. This is achieved by solving the homogeneous Bethe-Salpeter equation, which defines the bound state pole of the pion propagator via the pseudoscalar-pseudoscalar polarization loop integral. The NJL and PNJL model details can be found, e.g., in Ref. [14; 15].

[MISSING_PAGE_POST]

The two-flavor PNJL model is described by the Lagrangian \[\mathcal{L}_{\rm PNJL}=\bar{q}(i\not{D}-m_{0})q+G_{s}\Big{[}(\bar{q}q)^{2}+(\bar{q}i\gamma^{5}\vec{\tau}q)^{2}\Big{]}, \tag{1}\] where \(m_{0}=5.5\,\)MeV is the bare quark mass assuming the isospin symmetry, \(D^{\mu}=\partial^{\mu}-i\delta^{\mu}_{0}(A^{0}+\mu)\) is the covariant derivative in the Polyakov-loop background field \(A^{0}\) and the chemical potential \(\mu\), and \(\vec{\tau}\) are the Pauli matrices in the isospin space. \(G_{s}=5.04\,\)GeV\({}^{-2}\) is the NJL 4-quark coupling in the scalar-pseudoscalar channel.
Below, the consideration will be limited to the \(\mu=0\) case, but we shall keep \(\mu\) in the expressions for completeness. The traced Polyakov loop and its conjugate are defined as \[\Phi(\vec{x})=\frac{1}{N_{c}}{\rm Tr}_{c}\langle\langle L(\vec{x})\rangle\rangle,\quad\bar{\Phi}(\vec{x})=\frac{1}{N_{c}}{\rm Tr}_{c}\langle\langle L^{\dagger}(\vec{x})\rangle\rangle,\] \[L(\vec{x})=\mathcal{P}\exp\Big{[}i\int\limits_{0}^{\beta}d\tau A_{4}(\vec{x},\tau)\Big{]}, \tag{2}\] where \(A_{4}=iA_{0}\), and \(\beta=1/T\). For a pure Yang-Mills (YM) SU(3) theory without quarks, we have \(\bar{\Phi}=\Phi\), and the Polyakov loop is an order parameter for the deconfinement phase transition, corresponding to the spontaneous breaking of the Z(3) center symmetry. The thermodynamics of this phase transition in pure gauge theory for SU(3)\({}_{c}\) can be described well by the effective potential \(\mathcal{U}(\Phi,\bar{\Phi},T)\), which exhibits a unique minimum at \(\Phi=0\) for low temperatures and develops a second minimum at \(\Phi\neq 0\) as the temperature exceeds the critical temperature \(T_{0}\), corresponding to a first-order SU(3) YM phase transition. In this work we use the same form as in [10], chosen to reproduce the SU(3) Yang-Mills thermodynamics obtained in lattice simulations [40], \[\frac{\mathcal{U}(\Phi,\bar{\Phi},T)}{T^{4}}=-\frac{b_{2}(T)}{2}\Phi\bar{\Phi}-\frac{b_{3}}{6}(\Phi^{3}+\bar{\Phi}^{3})+\frac{b_{4}}{4}(\bar{\Phi}\Phi)^{2},\] \[b_{2}(T)=a_{0}+a_{1}\Big{(}\frac{T_{0}}{T}\Big{)}+a_{2}\Big{(}\frac{T_{0}}{T}\Big{)}^{2}+a_{3}\Big{(}\frac{T_{0}}{T}\Big{)}^{3}, \tag{3}\] with the coefficients \(a_{0,1,2,3}=[6.75,-1.95,2.625,-7.44]\), \(b_{3}=0.75\), \(b_{4}=7.5\). The parameter \(T_{0}\) has the meaning of the critical temperature of the first-order phase transition in the pure YM theory, and for this case \(T_{0}=0.27\,\)GeV. In the presence of dynamical quarks the effect of the running QCD coupling can be translated into a rescaling of this parameter, depending on the number of quark flavors [26]. In this paper we will compare our results between \(T_{0}=0.27\,\)GeV and \(T_{0}=0.208\,\)GeV obtained in [26] for \(N_{f}=2\), and also for \(T_{0}=0.178\,\)GeV, corresponding to \(N_{f}=3\), in order to study the overall effect of such rescaling on the off-shell pion contribution to the EoS. The total grand canonical thermodynamic potential in the PNJL model reads [10; 35; 14; 36] \[\Omega(T;\Phi,\bar{\Phi},m)=\mathcal{U}(\Phi,\bar{\Phi},T)+\frac{(m-m_{0})^{2}}{4G_{s}}-2N_{f}\Big{\{}N_{c}\int\limits_{|\vec{p}|<\Lambda}\frac{d^{3}p}{(2\pi)^{3}}\,E_{p}+T\int\frac{d^{3}p}{(2\pi)^{3}}\Big{[}\ln\big{(}1+3(\Phi+\bar{\Phi}e^{-(E_{p}-\mu)/T})e^{-(E_{p}-\mu)/T}+e^{-3(E_{p}-\mu)/T}\big{)}+\ln\big{(}1+3(\bar{\Phi}+\Phi e^{-(E_{p}+\mu)/T})e^{-(E_{p}+\mu)/T}+e^{-3(E_{p}+\mu)/T}\big{)}\Big{]}\Big{\}}, \tag{4}\] with the quark energy \(E_{p}=\sqrt{p^{2}+m^{2}}\).
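As a small numerical aside, the sketch below evaluates the Polyakov-loop potential (3) with the quoted coefficients and locates its minimum over \(\Phi\) at \(\mu=0\) (where \(\bar{\Phi}=\Phi\)); the temperature values and the simple grid scan are illustrative only.

```python
import numpy as np

A = [6.75, -1.95, 2.625, -7.44]   # a_0..a_3 from Eq. (3)
B3, B4 = 0.75, 7.5

def b2(T, T0=0.27):
    r = T0 / T
    return A[0] + A[1]*r + A[2]*r**2 + A[3]*r**3

def U_over_T4(Phi, Phibar, T, T0=0.27):
    """Polyakov-loop potential of Eq. (3) in units of T^4."""
    return (-0.5 * b2(T, T0) * Phi * Phibar
            - (B3 / 6.0) * (Phi**3 + Phibar**3)
            + (B4 / 4.0) * (Phi * Phibar)**2)

# At mu = 0 one has Phibar = Phi; scanning U/T^4 over Phi locates the
# minimum that determines the mean-field Polyakov loop at each T.
for T in (0.20, 0.27, 0.35):
    phi = np.linspace(1e-3, 1.0, 2001)
    print(T, phi[np.argmin(U_over_T4(phi, phi, T))])
```

For \(T\) well below \(T_{0}\) the minimum sits at \(\Phi\approx 0\) (confined), while above \(T_{0}\) a nontrivial minimum develops, reproducing the qualitative behavior described in the text.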
The mean-field values of \(m\), \(\Phi\) and \(\bar{\Phi}\) follow from the extremum conditions on \(\Omega\). The mesonic correlations are determined by the polarization loops \(\Pi_{M}(\omega,q)\) in the channels \(M=(\pi,\sigma)\); their imaginary part can be evaluated without regularization. Then we reconstruct the real part using the Kramers-Kronig relation. The explicit expression for \(\operatorname{Im}\Pi_{M}\) is given in the Appendix A. The RPA-resummed propagator of a quasi-meson \(M\) is then determined as \[D_{M}(\omega,q)=-\frac{2G_{s}}{1+2G_{s}\Pi_{M}(\omega,q)}. \tag{10}\] We define the mass \(m_{M}\) of a quasi-meson \(M\) as a solution of \[1+2G_{s}\operatorname{Re}\Pi_{M}(\omega=m_{M},q=0)=0. \tag{11}\] The usefulness of this expression is limited to the temperature region in which the meson is a bound state. All the physical information about mesonic excitations in the medium is encoded in the spectral function \[\rho_{M}(\omega,q)=-2\operatorname{Im}D_{M}(\omega,q), \tag{12}\] which is the subject of our study in this work.

### Pressure of the meson gas

After integrating out the quarks in the Gaussian approximation and dropping the vacuum mesonic pressure, the meson contribution to the pressure reads [17; 25; 33; 41] \[P_{M}=d_{M}\sum_{k=\mathrm{QP,LD}}\ \int\limits_{|q|<\Lambda^{k}}\frac{d^{3}q}{(2\pi)^{3}}\,w_{M}^{k}(q,T), \tag{13}\] \[w_{M}^{\mathrm{QP}}\equiv\int\limits_{q}^{\infty}\frac{d\omega}{\pi}\frac{\delta_{M}(\omega,q,T)}{e^{\omega/T}-1},\ w_{M}^{\mathrm{LD}}\equiv\int\limits_{0}^{q}\frac{d\omega}{\pi}\frac{\delta_{M}(\omega,q,T)}{e^{\omega/T}-1},\] where \(d_{\pi}=3,d_{\sigma}=1\), and \[\delta_{M}(\omega,q,T)=-\arctan\frac{\operatorname{Im}D_{M}(\omega,q,T)}{\operatorname{Re}D_{M}(\omega,q,T)} \tag{14}\] is the quark-antiquark scattering phase shift in the corresponding meson channel \(M\). We have separated the part coming from the Landau damping (LD) region \(0<\omega<q\) from the rest, which we label as the quasipole (QP) contribution, with their respective momentum thresholds \(\Lambda^{\mathrm{QP}},\Lambda^{\mathrm{LD}}\). For simplicity, we use \(\Lambda^{\mathrm{QP}}\to\infty\), which gives a maximum possible quasipole contribution to the thermodynamics, which is weakly dependent on the choice of the threshold for \(T\) below \(T_{c}^{\chi}\). For the LD contribution in this work we will vary the momentum threshold \(\Lambda^{\mathrm{LD}}\) in the limits \(\Lambda^{\mathrm{LD}}=(1-2)\Lambda\) encountered in the literature [19; 33; 35] in order to investigate its threshold sensitivity.

## III Numerical results

In this section we discuss the numerical results of the developed approach to mesonic correlations in the PNJL model. We start in subsect. III.1 with the MF approximation and proceed to discussing the meson spectral functions in subsect. III.2 and their contribution to thermodynamic properties of PNJL matter in subsect. III.3. Then we show the correction to the chiral condensate by means of the Hellmann-Feynman theorem arising from the LD contribution to the pressure in subsect. III.4. Finally, we discuss in subsect. III.5 the dependence of the results on rescaling the critical temperature parameter \(T_{0}\) of the Polyakov loop potential for the example of the interaction measure.

### Mean-field solutions and meson masses

We start with discussing the temperature dependence of the mean fields and meson masses in the PNJL model, as these quantities will later define the meson contribution to the pressure.
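Before turning to the numerical results, a sketch of how the quasipole pressure integral (13) can be evaluated is given below. The phase shift used here is a schematic bound-state step \(\delta_{\pi}=\pi\,\Theta(\omega-\sqrt{m_{\pi}^{2}+q^{2}})\) rather than the model result of Eq. (14), so the \(\omega\)-integral reduces analytically to the ideal-gas expression quoted later as Eq. (18); all numerical values are illustrative.

```python
import numpy as np

T, m_pi = 0.23, 0.14   # GeV; illustrative temperature and pion mass

def delta_pi(omega, q):
    # Schematic phase shift: delta = pi * Theta(omega - sqrt(m_pi^2 + q^2));
    # in the model it follows from Eq. (14) with the RPA propagator (10).
    return np.pi * (omega > np.sqrt(m_pi**2 + q**2))

def w_qp(q, n=4000):
    # omega-integral of Eq. (13) over the quasipole region omega > q
    w = np.linspace(q + 1e-4, 25.0 * T, n)
    return np.trapz(delta_pi(w, q) / np.pi / np.expm1(w / T), w)

# Momentum integral P = d_pi * int d^3q/(2 pi)^3 w(q), with d_pi = 3.
q = np.linspace(1e-3, 3.0, 400)
P_pi = 3.0 * np.trapz(q**2 / (2.0 * np.pi**2)
                      * np.array([w_qp(x) for x in q]), q)
print(P_pi, "GeV^4")
```

With the step phase shift the printed value is just the ideal pion gas pressure; inserting the medium-dependent phase shift of Eq. (14) instead yields the QP curve discussed below.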
In Fig. 1 we compare the temperature dependence of the pion and sigma masses, constituent quark mass, and the Polyakov loop variable between two choices of the \(T_{0}\) parameter of the Polyakov loop potential, \(T_{0}=[0.27,0.208]\,\mathrm{GeV}\). The characteristic temperatures of the hadron-quark phase transition are the pseudocritical temperatures related to the chiral symmetry restoration \(T_{c}^{\chi}\) and deconfinement \(T_{c}^{\Phi}\), respectively, defined as \[T_{c}^{\chi}=\arg\max\frac{\partial\langle\bar{q}q\rangle(T)}{\partial T}, \tag{15}\] \[T_{c}^{\Phi}=\arg\max\frac{\partial\Phi(T)}{\partial T}. \tag{16}\]

Figure 1: Masses of pions (solid lines) and constituent quarks (dashed lines), together with the Polyakov loop variable \(\Phi\) (dash-dotted lines) as functions of the temperature for PNJL models with \(T_{0}=[0.27,0.208]\,\mathrm{GeV}\). The mass of the sigma is shown by the dotted line only for the \(T_{0}=0.27\,\mathrm{GeV}\) case for clarity. For \(T_{0}=0.208\,\mathrm{GeV}\) we also show the position of the pion spectral function peak (dotted line) at \(T>T_{\text{Mott}}\).

In Fig. 1 we plot the results as functions of \(T\) normalized by the respective \(T_{c}^{\chi}\) for these cases. The key parameters extracted from these solutions are collected in Table 1. The temperature behavior of \(m\) and \(\Phi\) is standard for the PNJL model [35; 14]. With increasing temperature the quark mass monotonously decreases, while the Polyakov loop expectation value grows. The results for \(m(T/T_{c}^{\chi})\) almost coincide between the \(T_{0}=[0.27,0.208]\,\mathrm{GeV}\) cases, with the respective \(T_{c}^{\chi}\). The main difference between the cases \(T_{0}=[0.27,0.208]\,\mathrm{GeV}\) is the well-known increase of the difference between \(T_{c}^{\Phi}\) and \(T_{c}^{\chi}\)[42; 10]. Therefore, for \(T_{0}=0.208\,\mathrm{GeV}\) the \(\Phi(T)\) and, correspondingly, the quark distribution function (7) is larger even for low \(T/T_{c}^{\chi}\) than in the \(T_{0}=0.27\,\mathrm{GeV}\) case. The pion and sigma masses are defined as solutions of Eq. (11). The pion pole position stays almost constant below the corresponding critical temperature. For \(T\lesssim T_{c}^{\chi}\) the \(\sigma\) mass is always slightly above the \(2m\) threshold and the \(\sigma\) propagator has a quasipole with a finite quasiparticle width. For temperatures exceeding the pion Mott temperature \(T_{\mathrm{Mott}}\), defined by \(m_{\pi}(T_{\mathrm{Mott}})=2m(T_{\mathrm{Mott}})\), the pion quasiparticles acquire a finite width related to the decays into quark-antiquark pairs, as well as a thermal mass, which rises approximately linearly in temperature. The \(\sigma\) quasipole location follows the threshold mass \(2m(T)\) up to \(T\simeq T_{c}^{\chi}\) and at larger temperatures coincides with the thermal mass of the pion, reflecting the chiral symmetry restoration. For \(T\gtrsim[0.29,0.27]\,\mathrm{GeV}\) in the respective cases \(T_{0}=[0.27,0.208]\,\mathrm{GeV}\), Eq. (11) does not have a real solution at all. Therefore, the lines for \(m_{\pi,\sigma}(T)\) in Fig. 1 terminate at these temperatures in units of the respective \(T_{c}^{\chi}\). The peak position of the spectral functions, shown for the \(T_{0}=0.208\,\mathrm{GeV}\) case as an example, stays linearly increasing for all \(T>T_{c}^{\chi}\). The peak position is always below the real solution of the dispersion equation (11), as was also observed in [14].
As was mentioned above, the main effect of decreasing \(T_{0}\) on the mean fields is the increase of the gap between \(T_{c}^{\Phi}\) and \(T_{c}^{\chi}\). More subtle differences are the slight decrease of the quark mass below \(T_{c}^{\chi}\) for the case \(T_{0}=0.208\,\mathrm{GeV}\), and the noticeable decrease of the slope of the pion thermal mass at \(T/T_{c}^{\chi}>1\) with the respective \(T_{c}^{\chi}\). In terms of absolute \(T\) the latter effect is still present, but less noticeable in magnitude. These observations will prove to be useful for comparing the contribution of the low-energy pionic excitations to the thermodynamics between the cases \(T_{0}=[0.27,0.208]\,\mathrm{GeV}\) in section III.5.

### Pion spectral properties

In order to intuitively compare the contribution of different regions of the spectral strength in the pion channel to thermodynamic quantities, it is useful to consider the dynamic structure factor in the axial channel, \[S_{\pi}(\omega,q;T)=\frac{1}{2\pi}\frac{\rho_{\pi}(\omega,q;T)}{e^{\omega/T}-1}, \tag{17}\] as it was used, e.g., in [43] to illustrate the dilepton yield enhancement due to the diquark dynamic fluctuations. For \(\omega<q\), where the bound state is absent, its behavior is qualitatively the same as that of the quantity \(\frac{1}{\pi}\delta_{\pi}/(e^{\omega/T}-1)\) entering Eq. (13). Therefore the plot of \(S_{\pi}(\omega,q;T)\) allows us to illustrate the enhancement of the pressure integrand for \(\omega\to 0\) and its suppression at \(\omega\gg T\). In Fig. 2 we show \(S_{\pi}(\omega,q;T)\) for temperatures \(T=[0.18,0.23,0.27]\,\mathrm{GeV}\simeq[0.8,1.0,1.2]\,T_{c}^{\chi}\) in the PNJL model with \(T_{0}=0.27\,\mathrm{GeV}\). The upper panels show the heatmap plot of this quantity in the \(\omega-q\) plane. For \(T=[0.8,1.0]\,T_{c}^{\chi}\) the thick solid lines represent the poles of the pion propagator (10) present on the real axis. For \(T=1.2\,T_{c}^{\chi}\) the pion is dissolved, and the dashed line in the corresponding panel shows the zero of the real part of the denominator of (10). The shown contour levels are adjusted to be the same in the pair production region and the LD region of the spectral function. The main feature of this plot is the enhancement of the LD region of \(S_{\pi}(\omega,q;T)\) by the thermal factor even at \(T=0.8\,T_{c}^{\chi}\), where the quark and pion masses are close to their respective vacuum values. However, for \(T=[0.8,1.0]\,T_{c}^{\chi}\) the contribution to the pion pressure is governed mainly by the pole contribution, see also Fig. 4 and the description in the text. For \(T=1.2\,T_{c}^{\chi}\) the threshold mass \(2m\simeq 0.1\,\mathrm{GeV}\) is below the pion mass, and the pion pole is absorbed into the continuum. In this case it is clearly seen that for \(q\gtrsim 0.5\,\mathrm{GeV}\) the LD region gives the dominant contribution to the weighted spectral function. In the lower panels of Fig. 2 we show \(\rho_{\pi}(\omega,q;T)\) and \(S_{\pi}(\omega,q;T)\) for momenta \(q=[0.2,0.5]\,\mathrm{GeV}\) indicated on the lines. We see that the presence of the thermal factor enhances the spectral function in the low-frequency region by an order of magnitude, which is shown in the figure by the shaded areas. Moreover, in the limit \(\omega\to 0\), \(S_{\pi}(\omega,q;T)\to\mathrm{const}\), contrary to \(\rho_{\pi}(\omega,q;T)\to 0\), the latter being an odd function of the frequency. Simultaneously, the pair production region \(\omega^{2}>4m^{2}+q^{2}\) is suppressed by the thermal factor.
For \(T\simeq 1.2\,T_{c}\) the weighted spectral function \(S_{\pi}(\omega,q;T)\) in the LD region exceeds the quasipole spectral function for \(q=0.5\,\mathrm{GeV}\). In the upper panel of Fig. 3 we show the pion phase shift as a function of the frequency and temperature for a fixed momentum \(q=0.5\,\mathrm{GeV}\). For the temperatures \(T<T_{\mathrm{Mott}}\) the phase shift shows a jump from \(0\) to \(\pi\) at \(\omega\simeq\sqrt{m_{\pi}^{2}(T)+q^{2}}\), corresponding to the existence of the pion as a bound state. The LD contribution is negligibly small for \(T\lesssim 0.2\,\mathrm{GeV}\). As the temperature increases, for \(T\gtrsim T_{\mathrm{Mott}}\) the pion bound state is dissolved, so the phase shift never reaches \(\pi\) and changes continuously for \(\omega>2m(T)\), which corresponds to the resonance-shape structure in \(\rho(\omega,q;T)\) in the lower right panel of Fig. 2. Simultaneously the LD contribution becomes noticeable, with a characteristic maximum at \(\omega\simeq q/2\) and \(T\simeq T_{c}^{\chi}\). The magnitude of the LD contribution is still around 5 times smaller than the quasipole part for \(T\simeq T_{c}^{\chi}\). With a further increase of the temperature the LD part of the phase shift decreases, which is due to the growth of the pion thermal mass characterizing the overall magnitude of \(\operatorname{Re}\Pi_{\pi}(\omega,q;T)\) that enters the denominator in (14). In the lower panel of Fig. 3 we show the integrand of the mesonic contribution to the pressure in Eq. (13), being the pion phase shift multiplied by the Bose factor, at the critical temperature \(T=T_{c}^{\chi}=0.229\,\mathrm{GeV}\). In the QP region \(\omega>q\) the thermal factor leads to an exponential suppression of the integrand at large \(\omega\). In the LD region, as a consequence of thermal enhancement, the integrand becomes comparable with the one in the QP region for \(q\gtrsim 0.5\,\mathrm{GeV}\) and exceeds it at large momenta. The reason for this is that different characteristic temperatures govern the large-momentum tails of the corresponding contributions, being \(\exp(-\sqrt{(2m)^{2}+q^{2}}/2T)\) for the LD part and \(\exp(-\sqrt{m_{\pi}^{2}(T)+q^{2}}/T)\) for the QP contribution, which can be seen from the explicit expressions in the Appendix B.

### Mean-field + fluctuations thermodynamics

Let us start with examining the contribution of the regions of the spectral density described above to the mesonic pressure integrand \(w_{\pi}(q;T)\), defined in (13).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Model\(\backslash\)Quantity & \(T_{c}^{\chi}\) & \(T_{c}^{\Phi}\) & \(T_{\mathrm{Mott}}\) & \(T_{c,\mathrm{fl}}^{\chi}\) & \(T_{\mathrm{max}}\) (\(\Lambda^{\mathrm{LD}}=\Lambda\)) & \(T_{\mathrm{max}}\) (\(\Lambda^{\mathrm{LD}}=2\Lambda\)) & \(T_{\mathrm{max,fl}}\) (\(\Lambda^{\mathrm{LD}}=\Lambda\)) \\ \hline \hline \(T_{0}=0.27\,\mathrm{GeV}\) & 0.229 & 0.225 & 0.236 & 0.227 & 0.223 & 0.282 & 0.226 \\ \hline \(T_{0}=0.208\,\mathrm{GeV}\) & 0.201 & 0.183 & 0.209 & 0.196 & 0.192 & 0.204 & 0.198 \\ \hline \(T_{0}=0.178\,\mathrm{GeV}\) & 0.194 & 0.166 & 0.199 & 0.187 & 0.183 & 0.190 & 0.184 \\ \hline \end{tabular} \end{table} Table 1: Comparison of characteristic temperatures in units of GeV between PNJL models with various \(T_{0}\).
\(T_{c}^{\chi}\) and \(T_{c}^{\Phi}\) are the pseudocritical temperatures of the chiral and Polyakov-loop transitions, \(T_{c,\mathrm{fl}}^{\chi}\) are the \(T_{c}^{\chi}\) recalculated with inclusion of the pion contribution as described in section III.4, and \(T_{\mathrm{max}}\), \(T_{\mathrm{max,fl}}\) are the locations of the maximum of the trace anomaly \(I(T)/T^{4}\), see section III.5.

Figure 2: Upper panels: dynamic structure factor \(S_{\pi}(\omega,q;T)\) in the pion channel in units of GeV\({}^{-2}\) in the \(\omega-q\) plane for temperatures \(T=[0.18,0.23,0.27]\,\mathrm{GeV}\simeq[0.8,1.0,1.2]\,T_{c}\). Thick lines show the position of the pion pole for \(T=0.18\,\mathrm{GeV}\) and \(0.23\,\mathrm{GeV}\) (solid lines) and the solution of the dispersion relation for \(T=0.27\,\mathrm{GeV}\), at which the pion is dissociated (dashed line). Lower panels: the regular part of the pion spectral function \(\rho_{\pi}(\omega,q,T)\) and the dynamic structure factor \(S_{\pi}(\omega,q;T)\) as functions of the frequency for two momenta \(q=[0.2,0.5]\,\mathrm{GeV}\) indicated on the lines and shown by the dashed horizontal lines in the upper panels. Shaded areas in the lower panels illustrate the enhancement of the spectral function due to the thermal factor.

In Fig. 4 we plot \(w_{\pi}(q;T)\) as a function of pion momentum for the temperatures \([0.18,0.23,0.27]\,\mathrm{GeV}\simeq[0.8,1.0,1.2]\,T_{c}^{\chi}\), where the Polyakov loop potential with \(T_{0}=0.27\,\mathrm{GeV}\) is chosen. For \(T=0.18\,\mathrm{GeV}\) this function almost coincides with the free pion gas case \[w_{\pi}^{\mathrm{vac}}(q;T)=-T\ln(1-e^{-\sqrt{m_{\pi}^{2}(T-0)+q^{2}}/T}), \tag{18}\] because the \(\bar{q}q\)-pair continuum threshold is still large and the pair production region, which gives a negative contribution to \(w^{\mathrm{QP}}\), is exponentially suppressed by the thermal weight. With an increase of the temperature to \(0.2\,\mathrm{GeV}\) the \(\bar{q}q\)-pair threshold becomes lower, which enhances the negative contribution of the \(\omega>2m\) region to the pion distribution function. For \(T=0.27\,\mathrm{GeV}\simeq 1.2\,T_{c}^{\chi}\) the pion pole moves away from the real axis and the distribution function is suppressed compared to the vacuum case. This corresponds to the Mott dissociation of pions and leads to a decrease of the pion QP contribution to the pressure for \(T\gtrsim T_{c}^{\chi}\), which is well described in the literature [41; 35]. However, simultaneously with an increase of the temperature, the LD contribution starts playing a noticeable role. The large-momentum tail of \(\mathrm{Im}\,\Pi_{\mathrm{LD}}\) involves \(2\,T\) in the exponent, cf. Eqs. (31), (32), and is therefore enhanced in comparison with the free pion gas case, as can be seen from the middle and right panels of Fig. 4. For \(T\simeq[1,1.2]\,T_{c}\) the LD contribution to \(w_{\pi}(q;T)\) is at least comparable to the pole one, due to both its large value in the maximum and twice the effective temperature in its large-momentum tail. However, for low temperatures this expression is suppressed by the same Polyakov-loop mechanism as the quark contribution to the thermodynamics, even though the Polyakov loop acts only on the colored partons. It follows from the same expression (32), which shows that \(\mathrm{Im}\,\Pi_{\mathrm{LD}}\) is proportional to the quark distribution function.
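The competition between the two large-momentum tails quoted above (and in Appendix B) can be made explicit with a short numerical comparison; the mass values below are illustrative numbers of the order of those near \(T_{c}^{\chi}\), not model output.

```python
import numpy as np

# Large-momentum tails quoted in the text:
#   LD: exp(-sqrt((2m)^2 + q^2) / (2T)),  QP: exp(-sqrt(m_pi^2 + q^2) / T)
T, m, m_pi = 0.23, 0.18, 0.17   # GeV, illustrative values near T_c^chi
q = np.linspace(0.0, 2.0, 9)
tail_LD = np.exp(-np.sqrt((2 * m)**2 + q**2) / (2 * T))
tail_QP = np.exp(-np.sqrt(m_pi**2 + q**2) / T)
for qi, r in zip(q, tail_LD / tail_QP):
    print(f"q = {qi:4.2f} GeV   LD/QP tail ratio = {r:8.2f}")
```

The ratio is of order unity at small \(q\) but grows rapidly with momentum, illustrating why the LD integrand overtakes the QP one at large \(q\) despite its smaller magnitude at the maximum.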
The correlation pressure then follows from integrating \(w_{\pi}^{\mathrm{QP,LD}}(q,T)\) over the momentum. In the upper panel of Fig. 5 we show the contributions to the pressure coming from the MF approximation to the PNJL model and from the meson fluctuations, with and without inclusion of the LD region, for \(T_{0}=0.27\,\mathrm{GeV}\). Without the LD contribution, the meson pressure (solid line) grows with an increase of the temperature towards the Stefan-Boltzmann (SB) limit (dotted line) below \(T_{\mathrm{Mott}}\simeq T_{c}^{\chi}\), following the result for the ideal pion gas with the vacuum pion mass (dash-dotted line). For \(T>T_{\mathrm{Mott}}\) the pion pressure decreases as the pion bound state dissolves due to the lowering of the \(\bar{q}q\) continuum threshold and the increase of the pion thermal mass. This is the usually observed behavior of the pressure with Mott dissociation of the bound states [41; 34; 35]. However, with the increase of the temperature the LD contribution starts to play a noticeable role. We plot it as a band corresponding to varying the LD cutoff \(\Lambda^{\mathrm{LD}}\) from \(\Lambda\) to \(2\Lambda\) in order to demonstrate its sensitivity. The temperature dependence of the contribution to the pressure from the LD region at \(T\lesssim T_{c}^{\chi}\) is qualitatively the same as that of the MF pressure, as it is suppressed by the same thermal exponential factor, see Appendix B. An analytic estimate for the essential threshold dependence of \(P_{\pi}^{\mathrm{LD}}\) is given in Appendix B. As the temperature increases, for any given \(\Lambda^{\mathrm{LD}}\) this contribution to the pressure has a maximum near \(T_{c}^{\chi}\) and decreases with a further increase of the temperature due to the presence of the momentum threshold. The overall result for the LD excitation pressure depends heavily on the choice of the LD threshold \(\Lambda^{\mathrm{LD}}\), because of the large high-momentum tail of the pressure integrand shown in Fig. 4. We see that the cutoff-related uncertainty can reach \(2\,P_{\mathrm{SB}}^{\pi}\) for \(T\simeq T_{c}^{\chi}\). Figure 3: Upper panel: Pion phase shift in the PNJL model with \(T_{0}=0.27\,\mathrm{GeV}\) as a function of the frequency and temperature for a fixed momentum \(q=0.5\,\mathrm{GeV}\). Lower panel: Pion phase shift weighted with the Bose factor as a function of momentum and frequency for \(T=T_{c}^{\chi}\). Separate color mappings are used for \(\omega<q\) and \(\omega>q\) in both panels. Another quantity characterizing the thermodynamics of the model is the scaled trace anomaly \(I(T)=T^{5}d(P/T^{4})/dT\) shown in the lower panel of Fig. 5, which is more sensitive to the various contributions to the pressure. The monotonic increase of the PNJL pressure results in a positive contribution to \(I(T)\), which has a characteristic maximum. The quasipole correlation contribution is positive for \(T\lesssim 0.8\,T_{c}\) and becomes negative as the temperature increases, since the quasipole meson pressure decreases as the mesons melt and acquire their thermal masses. The LD contribution to \(I(T)\) also changes sign, since the presence of the LD threshold makes the LD pressure decrease with an increase of the temperature. We note that even for a conservative choice of \(\Lambda^{\rm LD}=\Lambda\) the contribution of the LD spectral region to \(I(T)\) is of the same order as the QP one. 
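Because the scaled trace anomaly is obtained from the pressure by a temperature derivative, even modest contributions to \(P(T)\) are amplified in \(I(T)\); a minimal sketch of how \(I(T)/T^{4}=T\,d(P/T^{4})/dT\) would be extracted from a tabulated pressure (the pressure curve below is a toy placeholder, not the model output):

```python
import numpy as np

def scaled_trace_anomaly(T, P):
    """I(T)/T^4 = T * d(P/T^4)/dT by central finite differences.
    T, P: arrays of temperatures (GeV) and pressures (GeV^4) on a
    fine, uniform grid."""
    return T * np.gradient(P / T**4, T)

# Toy placeholder: in the text P(T) is the sum of the MF, QP and LD
# pieces obtained from the Beth-Uhlenbeck integral, Eq. (13).
T = np.linspace(0.10, 0.30, 201)
P = 0.4 * T**4 * np.exp(-0.14 / T)

I_over_T4 = scaled_trace_anomaly(T, P)
print(I_over_T4[T.searchsorted(0.20)])  # value near T = 0.2 GeV
```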
For \(T\gtrsim T_{c}\) both the QP and LD contributions are negative and each leads to a significant decrease of the overall \(I(T)\). The presence of the kink near \(T_{c}\) in the overall \(I(T)\) is related to the rapid growth of the LD contribution as \(T\) approaches \(T_{c}^{\chi}\) from below. For a larger threshold \(\Lambda^{\rm LD}=2\Lambda\) the kink is eliminated, since the LD contribution is large enough to overwhelm the second maximum. Therefore, in such a model the position of the characteristic maximum of \(I(T)\) is affected by the magnitude of the mesonic LD contribution. Later, in subsection III.5, we will see that even for a fixed threshold, lowering the \(T_{0}\) parameter leads to an enhancement of the LD contribution and a similar effect on the position of the \(I(T)\) maximum. ### Perturbative correction to the chiral condensate As seen from Fig. 5, in the PNJL model at temperatures below \(T_{c}\) the dominant contribution to the pressure comes from the gas of mesonic excitations, and the quark degrees of freedom are statistically suppressed by the Polyakov loop. The low-\(T\) behavior of the chiral condensate in a physically acceptable model should therefore be governed by the pion contribution. Figure 4: The QP (solid lines) and LD (dashed lines) contributions to the pressure integrand \(w_{\pi}(q;T)\) (13) as functions of momentum for temperatures \(T=[0.18,0.23,0.27]\,{\rm GeV}\simeq[0.8,1.0,1.2]\,T_{c}^{\chi}\). Dotted lines show the free pion gas case. Figure 5: Upper panel: pressure as a function of the temperature in the PNJL model with \(T_{0}=0.27\) GeV with the contribution from the meson correlations. Solid line denotes the QP contribution of the excitations with \(\omega>q\), dashed line stands for the PNJL pressure in the MF approximation, and dotted line is the sum of the MF and QP pressure. Shaded areas show the uncertainty in the LD contribution and the total pressure coming from varying the meson threshold \(\Lambda^{\rm LD}\) in the limits \([1-2]\,\Lambda\). Lower panel: same contributions to the trace anomaly as a function of the temperature. Hatched area is the LD contribution from the \(\sigma\)-meson. The \(\Lambda^{\rm LD}=\Lambda\) case for the total \(I(T)\) is highlighted by a solid line. The pion polarization operator itself depends on the constituent quark mass, which allows us to calculate the "perturbative" contribution to the chiral condensate from the meson \(M=(\pi,\sigma)\) fluctuation gas as \[\langle\bar{q}q\rangle_{M}=-\frac{\partial P_{M}(T;m)}{\partial m_{0}}. \tag{19}\] The presence of mesonic fluctuations does affect the chiral condensate. In the model "mean field + meson fluctuations" that we use in this study, the back-reaction of the meson excitations on the quark MF sector is not included. A way of estimating the pion contribution to the chiral condensate perturbatively is to take the derivative over the quark current mass \(m_{0}\) at the MF solution for \(m\), so that the total condensate reads \[\langle\bar{q}q\rangle_{\rm tot}=\langle\bar{q}q\rangle_{\rm MF}+\langle\bar{q}q\rangle_{\pi,\sigma}^{\rm QP}+\langle\bar{q}q\rangle_{\pi,\sigma}^{\rm LD}. \tag{20}\] In Fig. 6 we show the resulting contributions to the chiral condensate. We separately show the contributions of the "pole" (QP) and LD parts of the pressure to the total chiral condensate, normalized by its vacuum value \(\langle\bar{q}q\rangle=-(0.251\,{\rm GeV})^{3}\). 
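Since \(P_{M}\) is known only numerically, the Hellmann-Feynman derivative in Eq. (19) is naturally evaluated by a finite difference in \(m_{0}\); a minimal sketch (the pressure function below is a stand-in with an assumed GMOR-like \(m_{0}\)-dependence, not the actual Beth-Uhlenbeck result):

```python
import numpy as np

def meson_condensate(pressure, T, m0, dm0=1e-4):
    """<qbar q>_M = -dP_M/dm0 (Eq. 19) via a central difference in the
    current quark mass m0 (GeV). In the full model, pressure(T, m0)
    must re-solve the MF gap equation for m at each value of m0."""
    return -(pressure(T, m0 + dm0) - pressure(T, m0 - dm0)) / (2 * dm0)

def toy_pressure(T, m0, B=2.5):
    # Stand-in meson pressure: the pion mass enters through an assumed
    # GMOR-like relation m_pi^2 = B * m0 (B in GeV, illustrative).
    m_pi = np.sqrt(B * m0)
    return 0.4 * T**4 * np.exp(-m_pi / T)

print(meson_condensate(toy_pressure, T=0.15, m0=0.0055))
```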
Both the QP and LD contributions are negative, which reflects the fact that for a given temperature the pion mass increases with an increase of the current quark mass, therefore giving rise to a negative \(\partial P_{\pi}/\partial m_{0}\). For comparison, we also show by the dash-dotted line the pion contribution to the chiral condensate obtained in [44]. At low temperatures \(\langle\bar{q}q\rangle_{\pi}^{\rm pole}\) gives the dominant contribution to the chiral condensate. We see that the low-temperature part of the pion contribution in our approach is lower, but is qualitatively the same as that obtained in [44]. In the same temperature region \(T\gtrsim 0.18\,{\rm GeV}\simeq 0.8\,T_{c}\) the LD region gives a noticeable contribution to the temperature dependence of the chiral condensate, comparable in magnitude with \(\langle\bar{q}q\rangle_{\pi,\sigma}^{\rm pole}\). The magnitude of this contribution exhibits a maximum, corresponding to the maximum of the pressure in Fig. 5. Its temperature dependence at \(T\lesssim T_{c}\) is also qualitatively the same as that of \(\langle\bar{q}q\rangle\) in the MF approximation. At large temperatures it decays at the same rate as the pole contribution, in line with what is described in subsection III.3 for the pion pressure. The total \(\langle\bar{q}q\rangle\) then follows \(\langle\bar{q}q\rangle_{\pi,\sigma}^{\rm pole}\) at low temperatures. After calculating the perturbative contribution of the pions to the chiral condensate at finite temperature, we can define the "new" pseudocritical temperature \(T_{c,\rm fl}^{\chi}\) of the chiral phase transition with the inclusion of both the LD and QP mesonic contributions as \[T_{c,\rm fl}^{\chi}=\arg\max\frac{\partial(\langle\bar{q}q\rangle_{\rm MF}+\langle\bar{q}q\rangle_{\pi,\sigma}^{\rm QP}+\langle\bar{q}q\rangle_{\pi,\sigma}^{\rm LD})}{\partial T}. \tag{21}\] The inclusion of the pole contribution affects \(T_{c,\rm fl}^{\chi}\) only a little, because the most rapid decrease at \(T\to T_{c,\rm fl}^{\chi}\) comes from the quark-like contributions. In contrast with this, near \(T_{c}^{\chi}\) the LD contribution starts playing a noticeable role and enhances the chiral condensate decrease, as its contribution to the thermodynamics for \(T\lesssim T_{c}^{\chi}\) is qualitatively the same as that of the quark MF contribution. In the model with \(T_{0}=0.27\,{\rm GeV}\) the inclusion of the LD term shifts the pseudocritical temperature to \(T_{c,\rm fl}^{\chi}=[227,223]\,{\rm MeV}\), corresponding to \(\Lambda^{\rm LD}=[1,2]\,\Lambda\), in comparison with \(229\,{\rm MeV}\) in the MF approximation to the PNJL model. This reduction is rather small but nevertheless important in the context of bringing the pseudocritical temperature of a constituent quark model closer to the value observed in lattice QCD. ### Rescaling the Polyakov loop potential critical temperature parameter \(T_{0}\) A rescaling of the Polyakov loop potential parameter \(T_{0}\) to lower values is often used for comparing the PNJL results with the lattice QCD ones [10; 25]. Such a simple replacement is supported by the FRG calculations [45], which have shown that the change of the Polyakov-loop potential due to the quark backreaction can be approximated by a similar rescaling of \(T_{0}\). 
In this section we compare the results for the LD pressure between \(T_{0}=[0.27,0.208,0.178]\,\,{\rm GeV}\), corresponding to pure Yang-Mills theory and the values obtained in [26] for 2 and 3 flavors, respectively. For brevity, we omit the results for the pressure and study only the trace anomaly, which is more sensitive to the pion LD contribution than the pressure. In Fig. 7 we show the trace anomaly for \(T_{0}=[0.27,0.208,0.178]\,\,{\rm GeV}\). The dashed lines show the MF contribution to \(I(T)\), and the solid lines are the total \(I(T)\) with the QP and LD contributions from the \(\pi\) and \(\sigma\) excitations. For clarity we show only the \(\Lambda^{\rm LD}=\Lambda\) case. Figure 6: Contributions to the chiral condensate as functions of the temperature for \(\Lambda^{\rm LD}=\Lambda\). The lines and shaded regions have the same meaning as in Fig. 5. The dash-dotted line shows the results of [44] for the chiral condensate behavior with only pions included. We see that with a decrease of \(T_{0}\) the overall magnitude of the LD contribution to \(I(T)\) increases. As a consequence, the increase of the overall magnitude of the LD contribution with a decrease of \(T_{0}\) makes the wiggle in the \(\Lambda^{\rm LD}=\Lambda\) curve disappear, since the LD term contributes positively to the \(I(T)\) maximum at lower temperature and negatively to the second one. For the cases \(T_{0}=[0.208,0.178]\,\)GeV the presence of the pion contribution leads to a decrease of the location of the characteristic maximum of \(I(T)\) from \([0.212,0.197]\,\)GeV in the MF case to \([0.197,0.185]\,\)GeV with the excitation contributions included. The reason for this is the well-known increase in the separation of \(T_{c}^{\Phi}\) and \(T_{c}^{\chi}\) when the \(T_{0}\) rescaling is used. This is illustrated in Fig. 1. For instance, in the case of \(T_{0}=0.178\,\)GeV the \(T_{c}^{\Phi}\) is \(28\,\)MeV smaller than \(T_{c}^{\chi}\). This leads to an increase of the quark distribution function at lower temperatures than in the case \(T_{0}=0.27\,\)GeV, as can be seen from the \(\Phi(T)\) dependence in Fig. 1. This entails a decrease of the quark effective mass and the subsequent increase of the LD contribution, as it corresponds to the emission (absorption) of pions by the quark medium. For the lowered values of \(T_{0}\) we have evaluated the shifted pseudocritical temperatures \(T_{c,\rm fl}^{\chi}\) as was done in section III.4; the results are collected in Table 1. In the case \(T_{0}=0.178\,\)GeV we have obtained \(T_{c,\rm fl}^{\chi}=[0.187,0.183]\,\)GeV with \(\Lambda^{\rm LD}=[1,2]\,\Lambda\), respectively, which demonstrates a stronger reduction of the pseudocritical temperature than in the case \(T_{0}=0.27\,\)GeV. This demonstrates again the enhancement of the LD contribution in the case of a larger separation between the chiral and deconfinement transition pseudocritical temperatures, and its importance if the goal is to build a quantitative description of the QCD pseudocritical temperature at \(\mu=0\) within a chiral quark model. ## IV Conclusion and Outlook We have studied the 2-flavor PNJL model at finite temperature for a baryon chemical potential \(\mu=0\) in the mean-field approximation, including the contributions to the pressure from dynamical quark current correlations in the pion and sigma channels. 
The calculation of the meson pressure was performed using the generalized Beth-Uhlenbeck expression [41; 17; 46] through the quark-antiquark scattering phase shifts in the pion and sigma channels, which, as opposed to the standard Beth-Uhlenbeck approach, are medium-dependent and encode the Mott dissociation of the mesons. In this work, we paid particular attention to the effect on the thermodynamic quantities of including the full momentum dependence of the pion phase shift as a function of frequency \(\omega\) and momentum \(q\), including the Landau damping (LD) kinematic region with \(s=\omega^{2}-q^{2}<0\). In the LD region, the values of the spectral function and the phase shift are normally rather small and are sometimes excluded from consideration in similar models [25; 36]. In this paper, we have demonstrated that the inclusion of the LD region of the meson spectral strength in the calculation of the thermodynamic functions in fact leads to a significant contribution to the pressure, centered around the chiral pseudocritical temperature \(T_{c}^{\chi}\), because of the thermal enhancement of the low-frequency region by the Bose-Einstein distribution, and can affect the characteristic temperatures of the phase transition in the PNJL model. Using the analytical expression of the 1-loop polarization operator in the pion channel, we have illustrated the thermal enhancement of the low-frequency mesonic excitations with finite momentum on the example of the pion structure factor. Figure 7: Scaled trace anomaly with the pion QP and LD contributions (solid lines, \(\Lambda^{\rm LD}=\Lambda\)) and the MF result (dashed lines) for \(T_{0}=[0.178,0.208,0.27]\,\)GeV labeled on the lines in GeV. Such an increase leads to a significant contribution to the pressure integrand in the Beth-Uhlenbeck formula and a corresponding noticeable contribution to the pressure. Below \(T_{c}^{\chi}\), the pressure originating from the LD region grows at the same rate as the quark contribution, since LD corresponds to the emission and absorption of pions by the quark thermal bath. The analytic expressions for the LD polarization loop for non-relativistic quarks contain a similar temperature exponent \(\exp(-\sqrt{(2m)^{2}+q^{2}}/2\,T)\), with the same dependence on the quark effective mass \(m\) as the quark pressure, but with the large-momentum tail regulated by twice the medium temperature. This is partially responsible for the significant magnitude of the LD contribution. We have varied the threshold in the LD region \(\Lambda^{\rm LD}=[1-2]\,\Lambda\), where \(\Lambda\) is the quark 3-momentum cut-off which regularizes the PNJL vacuum pressure, and have found a sizeable 4-fold enhancement of the result for the LD pressure for \(T\gtrsim T_{c}^{\chi}\). For the case of the NJL model we could obtain a semi-analytic estimate in the non-relativistic region, given in Appendix B, which confirms the numerical results in the corresponding applicability range of temperatures. This contribution might be responsible for the kinks in the pressure in [33], which are not seen in the published version [34], where only the quasipole contribution was left in the mesonic pressure. The LD contribution to the pressure leads to a change in the trace anomaly, which is more sensitive to any additional contribution since at \(\mu=0\) it is proportional to the temperature derivative of the pressure. 
The LD contribution to the trace anomaly changes sign at \(T\simeq T_{c}^{\chi}\), because the pion contribution for \(T\gtrsim T_{c}^{\chi}\) is decreased due to the pion dissociation. We have studied the contribution of the meson gas to the temperature dependence of the chiral condensate melting, calculated perturbatively, without iterating the self-consistency condition, by means of the Hellmann-Feynman theorem. The resulting chiral condensate at low temperatures exhibits a decrease due to the pion pole excitations only. As the temperature approaches \(T_{c}^{\chi}\), the contribution of the LD region becomes important, which for \(T\lesssim T_{c}^{\chi}\) has qualitatively the same temperature dependence as the quark contribution. The inclusion of such a term allows one to lower \(T_{c}^{\chi}\) by several MeV, depending on the choice of \(\Lambda^{\rm LD}\). Finally, we have studied the effect of lowering the \(T_{0}\) parameter of the Polyakov-loop potential, suggested by the renormalization-group arguments [26; 45], on the LD contribution to the thermodynamic quantities. Using the example of the trace anomaly, as the most sensitive quantity, we have shown that decreasing \(T_{0}\) leads to an overall increase of the LD contribution to the thermodynamics. This is a consequence of the well-known separation of \(T_{c}^{\chi}\) and \(T_{c}^{\Phi}\) under a decrease of \(T_{0}\), which leads to a relative increase of the quark distribution function for \(T\lesssim T_{c}^{\chi}\). In turn, this enhances the LD contribution, because it corresponds to the emission/absorption of pions by the quark thermal bath and is therefore sensitive to the quark distribution function. All the effects described above are expected to be present not only in the mean-field-based approaches to the PNJL model but also in those including the \(1/N_{c}\) corrections to the quark propagator, if the mesonic polarization operators are evaluated with quark quasiparticle propagators at the mean-field level, as was done in this work. This work is limited to the \(N_{f}=2\) case and focuses only on the pion contribution, but after generalization to \(N_{f}=2+1\) even more meson degrees of freedom would give rise to a similar contribution to the thermodynamics. Moreover, a self-consistent solution of the gap equation and the equation of motion for the Polyakov loop variable is necessary and will be reported elsewhere. Physically, the Landau damping corresponds to a non-dissipative process of energy transfer from the meson system to the quark subsystem and back, and therefore the inclusion of the quark back-reaction is necessary to provide reliable conclusions. However, the soft quark excitations are not enhanced by the thermal Bose factor, and we expect our conclusions about the mesonic contribution to be qualitatively unchanged. The inclusion of the back-reaction can be done within a self-consistent, e.g. \(\Phi\)-derivable, scheme, which is a subject of our future work. ## Acknowledgements We thank Krzysztof Redlich, Chihiro Sasaki, Pok Man Lo, Evgeny Kolomeitsev and Dmitry Voskresensky for the discussions and their interest in this work. The work benefited a lot from numerous conversations with and help from Oleksii Ivanytskyi. K.A.M. acknowledges the hospitality of the University of Wroclaw, where most of this work was done. The work of K.A.M. was supported by the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS", project No. 
17-15-568-1. This work has been supported in part by the Polish National Science Centre (NCN) under grant No. 2019/33/B/ST9/03059. ## Appendix A Expressions for the meson polarization operator The imaginary part of the polarization operator of the meson \(M=(\pi,\sigma)\) can be expressed for the PNJL model in the following compact form [5; 33; 41]: \[{\rm Im}\,\Pi_{M}(\omega,q)=-\kappa_{M}\frac{N_{c}N_{f}}{8\pi q}\Big(\theta(s-4m^{2})\Big[\theta(\omega)J^{+}_{\rm pair}(\omega,q)+\theta(-\omega)J^{-}_{\rm pair}(\omega,q)\Big]+\theta(-s)J_{\rm LD}(\omega,q)\Big), \tag{10}\] \[J^{\pm}_{\rm pair}(\omega,q)=T\ln\Big(\frac{\phi^{\mp}(E_{-})\phi^{\mp}(-E_{-})}{\phi^{\mp}(E_{+})\phi^{\mp}(-E_{+})}\Big),\ J_{\rm LD}(\omega,q)=T\ln\Big(\frac{\phi^{+}(E_{-})\phi^{-}(E_{-})}{\phi^{+}(-E_{+})\phi^{-}(-E_{+})}\Big), \tag{11}\] where \(m\) is the constituent quark mass, \(s=\omega^{2}-q^{2}\), \(\kappa_{\pi}=s\), \(\kappa_{\sigma}=s-4m^{2}\), \(E_{\pm}=\frac{\omega}{2}\pm\frac{q}{2}\sqrt{1-\frac{4m^{2}}{\omega^{2}-q^{2}}}\), and the distribution function is defined as \[\phi^{\pm}(\varepsilon)=\Big(\frac{1}{y_{\pm}^{3}+3y_{\pm}(\bar{\Phi}+\Phi y_{\pm})+1}\Big)^{1/3}, \tag{12}\] where \(y_{\pm}=e^{(\varepsilon\pm\mu)/T}\). The NJL expressions can be restored by taking the limit \(\Phi,\bar{\Phi}\to 1\). The real part is then calculated as \[{\rm Re}\,\Pi_{M}(\omega,q;T)=-\int\limits_{0}^{\Lambda_{4}^{2}(q)}\frac{ds^{\prime}}{\pi}\frac{{\rm Im}\,\Pi_{M}(\sqrt{s^{\prime}},q;T)}{\omega^{2}-s^{\prime}}, \tag{13}\] where \(q=|\vec{q}|\), and it is necessary to use the mass- and momentum-dependent 4-dimensional cutoff \(\Lambda_{4}^{2}(q)=4(\Lambda^{2}+m^{2})+q^{2}\) in order to keep thermodynamic consistency [5]. For \(\mu=0\) the LD part in the NJL model has the following asymptotic expressions: \[J_{\rm LD}(\omega,q;T)\mathop{\longrightarrow}_{T\to 0}-4T\exp(-\frac{q}{2T}\sqrt{1+\frac{4m^{2}}{q^{2}-\omega^{2}}})\times\sinh\frac{\omega}{2T}\mathop{\longrightarrow}_{\omega\to 0}-2\omega\exp(-\frac{\sqrt{4m^{2}+q^{2}}}{2T}). \tag{14}\] The dependence of the LD part on the constituent quark mass over the temperature therefore contains the same factor \(e^{-m/T}\) as the quark contribution, while the large-momentum tail has an effective temperature of \(2T\). The factor \(\omega\) in the low-frequency limit eliminates the pole of the Bose-Einstein distribution in the integrals in Eq. (13). A more general expression for the polarization operator in the LD limit, which includes the PNJL case and is valid for any \(T\) as long as \(\omega\ll q\), is [35] \[{\rm Im}\,\Pi_{\pi}^{\rm LD}(\omega,q;T)\simeq N_{c}N_{f}\frac{\omega^{2}-q^{2}}{4\pi}\frac{\omega}{q}f_{\Phi}(\frac{q}{2}\xi), \tag{15}\] where \(f_{\Phi}=f_{\Phi}^{+}=f_{\Phi}^{-}\) for \(\mu=0\) is given by Eq. (8). This expression also proves to be rather accurate for nearly all \(0<\omega\lesssim q\), if \(q\lesssim 2T\). In the limit \(\Phi\to 1,\bar{\Phi}\to 1\) the NJL case is restored, as \(f_{\Phi}\) becomes the usual Fermi-Dirac distribution. ## Appendix B Low-temperature analytic estimate of the LD contribution to the pressure The asymptotic expression (15) for \({\rm Im}\,\Pi_{\pi}(\omega,q)\) allows one to estimate the parametric dependence of the LD pressure analytically. 
Concerning the \({\rm Re}\,\Pi_{\pi}(\omega,q)\), in the non-relativistic case, corresponding to the best applicability range of (15), we can use the pole approximation for the real part of the inverse pion propagator \[{\rm Re}(\frac{1}{2G_{s}}+\Pi_{\pi})\simeq-\frac{1}{g_{\pi qq}^{2}}\frac{1}{\omega^{2}-m_{\pi}^{2}(T)-q^{2}},\] \[g_{\pi qq}^{2}=-\Big(\frac{\partial\,{\rm Re}\,\Pi_{\pi}(\omega,q=0,T)}{\partial\omega^{2}}\Big|_{\omega=m_{\pi}}\Big)^{-1}, \tag{16}\] where \(m_{\pi}(T)\) is the pion mass, determined by Eq. (11), and \(g_{\pi qq}^{2}(T)\) is the squared pion-quark coupling [3; 5]. The phase shift (14) in the LD region does not get close to \(\pi\), and we can approximate it by the argument of the arctangent. Then the pion pressure reads \[P_{\pi}\simeq\frac{N_{c}N_{f}d_{\pi}}{8\pi^{4}}\int\limits_{0}^{\Lambda}qdq\int\limits_{0}^{q}\omega d\omega\frac{|s|}{m_{\pi}^{2}+|s|}\frac{1}{e^{\frac{q}{2T}\xi(\omega,q)}+1}\times\frac{1}{\exp(\omega/T)-1},\quad s=\omega^{2}-q^{2},\quad\xi=\sqrt{1+\frac{4m^{2}}{|s|}}. \tag{17}\] In order to factor out the essential temperature dependence, we replace the sharp momentum cut-off by the dipole-type form factor \(L(\alpha,x)=\alpha^{2}/(\alpha^{2}+x^{2})\) and change variables to \(q=Tx\cosh\chi\), \(\omega=Tx\sinh\chi\equiv|s|\cdot y\) to transform the integration limits to the infinite quadrant. After further non-relativistic expansions in terms of the new variables, we arrive at an (underestimating) expression \[P_{\pi,{\rm NR}}^{\rm LD}=\frac{N_{c}N_{f}d_{\pi}g_{\pi qq}^{2}}{8\pi^{4}}\frac{m^{2}}{m_{\pi}^{2}}T^{4}e^{-m/T}\times F(\frac{\Lambda_{\pi}}{\sqrt{mT}},\frac{m_{\pi}}{\sqrt{mT}},\frac{m}{T}), \tag{18}\] where \(d_{\pi}=3\) is the pion isospin degeneracy factor, and the dimensionless function \(F\) is defined as \[F(\alpha,\beta,\gamma)=\int\limits_{0}^{\infty}dx\,x^{3}e^{-x^{2}/8}L(\alpha,x)L(\beta,x)G(\frac{1}{x^{2}}+\frac{1}{8\gamma}),\] \[G(\beta)=\int\limits_{0}^{\infty}dy\,\frac{ye^{-\beta y^{2}/2}}{e^{y}-1}.\]
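As a numerical cross-check of this estimate, the dimensionless functions \(G\) and \(F\) can be evaluated by direct quadrature; a minimal Python sketch (the parameter values at the bottom are illustrative assumptions, not the model's fitted numbers):

```python
import numpy as np
from scipy.integrate import quad

def G(beta):
    # G(beta) = integral_0^inf dy y exp(-beta y^2/2) / (e^y - 1);
    # the integrand is regular at y -> 0 since y/(e^y - 1) -> 1.
    return quad(lambda y: y * np.exp(-beta * y**2 / 2) / np.expm1(y),
                0.0, np.inf)[0]

def L_ff(a, x):
    # dipole-type form factor replacing the sharp momentum cut-off
    return a**2 / (a**2 + x**2)

def F(alpha, beta, gamma):
    integrand = lambda x: (x**3 * np.exp(-x**2 / 8) * L_ff(alpha, x)
                           * L_ff(beta, x) * G(1 / x**2 + 1 / (8 * gamma)))
    return quad(integrand, 0.0, np.inf)[0]

# Illustrative inputs in GeV: m = 0.35, m_pi = 0.14, Lambda_pi = 0.6, T = 0.15.
m, m_pi, Lam_pi, T = 0.35, 0.14, 0.6, 0.15
s = np.sqrt(m * T)
print(F(Lam_pi / s, m_pi / s, m / T))
```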
2309.03233
Scale-Score: Food Label to Support Nutritious and Sustainable Online Grocery Shopping
To empower online grocery shoppers in making nutritionally and environmentally informed decisions, we investigate the efficacy of the Scale-Score, a label combining nutritional and environmental information to highlight a product's benefit to both the consumer's and the planet's health, without obscuring either information. We conducted an experimental study in a mock online grocery environment, and assessed label efficacy. We find that the Scale-Score supports nutritious purchases, yet needs improving regarding sustainability support. Our research shows first insights into design considerations and performance of a combined yet disjoint food label.
Marco Druschba, Gözel Shakeri
2023-09-05T06:57:52Z
http://arxiv.org/abs/2309.03233v1
Scale-Score: Food Label to Support Nutritious and Sustainable Online Grocery Shopping - Extended Abstract ###### Abstract To empower online grocery shoppers in making nutritionally and environmentally informed decisions, we investigate the efficacy of the Scale-Score, a label combining nutritional and environmental information to highlight a product's benefit to both the consumer's and the planet's health, without obscuring either information. We conducted an experimental study in a mock online grocery environment, and assessed label efficacy. We find that the Scale-Score supports nutritious purchases, yet needs improving regarding sustainability support. Our research shows first insights into design considerations and performance of a combined yet disjoint food label. Sustainable HCI, persuasive technology, sustainability communication, personal informatics ## 1 Introduction Labels support consumers in making nutritious and sustainable decisions by transforming complex information about food, e.g., nutritional values, animal welfare standards, or environmental aspects, into simple logos or diagrams [1]. Food-label technology and personal informatics thereby both use similar techniques to motivate users [2], such as providing information, enabling comparison, and giving feedback [3]. Several studies within the HCI discipline investigated labels as a means of providing education tailored to users' own context and choices, addressing health and environmental challenges separately, although they are closely intertwined (e.g. sustainability: Envirofy [4], Nu-Food [5]; nutrition: BetterChoice [6], FLICC [7]). Our research focuses on the design space of labels which comprise both health and environmental information, when it matters most, _while_ online grocery shopping [8, 9]. We describe a study which tested the impact of presenting the _Scale-Score_ (Figure 1, right) on the nutritional quality and environmental impact of the consumers' food choices, compared to the effects of both Nutri-Score and Eco-Score labels, and no persuasive technology. We found the Scale-Score improved the nutritional quality of purchases; however, surprisingly, it performed worse in terms of environmental impact, compared to the Nutri-/Eco-Score and baseline conditions. This paper contributes first evidence in support of using a joint yet disjoint nutritional and ecological label to encourage transitions towards healthier and more sustainable diets when online shopping. ## 2 Experimental Research ### Scale-Score The Scale-Score (Figure 1, right) combines Nutri- and Eco-Scores into a single label represented by a classic beam scale. It also shows an overall rating based on the product's Nutri- and Eco-Scores by computing their mean value; in case of an uneven result, we opted to go in favour of the Nutri-Score, prioritising nutrition over sustainability, in accordance with previously gathered user requirements. ### Methods & Participants We employed a within-subjects design with a single independent variable, visualisation, with three levels: Scale-Score, Nutri-/Eco-Score, and baseline with no visualisation. Dependent variables (i.e., shopping behaviour) were: 1) the average environmental value of chosen products (based on Eco-Score calculations) and 2) the average nutrition value of chosen products (based on Nutri-Score calculations). In each condition, participants shopped according to a shopping list with three items (cereal, milk, and peanut butter). 
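To make the aggregation rule of Section 2.1 concrete, here is a minimal sketch of the combination logic, assuming the usual A-E letter grades encoded as positions 0-4; the function name and encoding are our illustrative assumptions, not the authors' implementation:

```python
def scale_score(nutri: str, eco: str) -> str:
    """Combine Nutri- and Eco-Score letter grades (A-E) into one
    Scale-Score grade: the mean of the two, resolved towards the
    Nutri-Score when the mean falls between two grades."""
    grades = "ABCDE"
    n, e = grades.index(nutri), grades.index(eco)
    total = n + e
    if total % 2 == 0:            # an exact mean grade exists
        return grades[total // 2]
    # uneven result: resolve in favour of the Nutri-Score side
    return grades[total // 2] if n < e else grades[(total + 1) // 2]

print(scale_score("B", "D"))  # -> "C" (exact mean)
print(scale_score("A", "B"))  # -> "A" (rounded towards nutrition)
```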
Participants were entered into a random draw to win their shopping basket, as an incentive to encourage normal purchase decisions. Figure 1: Scale-Score (right) provides high-level information about the nutritional and sustainable value of foods, yet gives additional information to allow for individual prioritisation of health or environment. In a mock-up online supermarket we tested three conditions (left: no labels, middle: Nutri- and Eco-Score, right: Scale-Score), showing that the Scale-Score supports nutritious purchase decisions, yet needs improving regarding sustainability support. The study lasted one hour. We recruited 12 participants (5 female, \(\mu=39\) years, \(\sigma=22.9\) years) through our institution's forums. Regarding demographics, all participants stated having seen the Nutri-Score prior to the study; the Eco-Score had been seen by only 16.7% of participants. ### Results We used a one-way ANOVA with post-hoc Tukey tests via IBM SPSS Statistics (v. 28.0.1.0). For this study an alpha (\(\alpha\)) of 0.05 was used. There was no statistically significant difference between the conditions on 'nutrition' as determined by a one-way ANOVA \((F(2,35)=0.8\), \(p=0.458)\). A Tukey post-hoc test revealed that the Scale- and Nutri-/Eco-Score means did not differ from each other (\(p\)-value \(=0.994\)), while both Nutri-/Eco-Score (\(p\)-value \(=0.557\)) and Scale-Score (\(p\)-value \(=0.496\)) showed non-significant differences compared to no score. Looking at the descriptive statistics, the Scale-Score resulted in lower (i.e., more nutritious) nutrition values (mean \(=2.89\)) compared to Nutri-/Eco-Score (mean \(=3.06\)) and no score (mean \(=4.78\)). There also was no statistically significant difference between the conditions on 'sustainability' as determined by a one-way ANOVA \((F(2,35)=1.301\), \(p=0.286)\). A Tukey post-hoc test revealed non-significant differences for Nutri-/Eco-Score (\(p\)-value \(=0.595\)) and Scale-Score (\(p\)-value \(=0.810\)) compared to no score. Looking at the descriptive statistics, the Scale-Score resulted in the lowest sustainability values (mean \(=53.11\)), Nutri-/Eco-Score in the highest (mean \(=59.78\)), and no score in intermediate values (mean \(=55.69\)). ## 3 Discussion and Conclusions To achieve a global and successful transition to healthy and sustainable diets, systems and tools are needed to support consumers in this. We designed Scale-Score, a label that displays nutritional and environmental information. We did not find significant differences in support provision of either visualisation compared to baseline; however, there is a trend suggesting that the Nutri- and Eco-Score combination may support consumers in sustainable and healthy decision making. The Scale-Score may support nutritious choices compared to no visualisation; however, it worsened the environmental impact of the basket compared to baseline. First, this may be due to the make-up of the Scale-Score: the nutritional aspect weighs more heavily in the final score. Consequently, a product that is marked with a good Scale-Score rating (e.g. B) may well contain an environmental 'D' rating. As a result, the average sustainability score was worse than under the Nutri- and Eco-Score representation, explaining the Scale-Score's poor environmental performance. Second, participants may have ignored the multi-level information provided, given the small sizes of the Nutri- and Eco-Score labels within the Scale-Score, contributing further to the de-valuation of environmental information. 
In future work, we plan to re-design the label taking advantage of the interaction modalities available in web-based interfaces, where meta _and_ multi-level information can effectively support sustainable and nutritious grocery shopping. At ICT4S, we hope to inspire and engage in conversations on user-centred food label designs embedded in personal informatics.
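As a reproducibility note, the one-way ANOVA and Tukey post-hoc analysis reported in the Results can be mirrored with open-source tooling; a minimal sketch with placeholder data (the arrays below are randomly generated, not the study's raw baskets):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
none  = rng.normal(4.8, 1.5, 12)   # baseline, no label
combo = rng.normal(3.1, 1.5, 12)   # Nutri-/Eco-Score
scale = rng.normal(2.9, 1.5, 12)   # Scale-Score

F, p = f_oneway(none, combo, scale)          # one-way ANOVA
print(f"F = {F:.2f}, p = {p:.3f}")

values = np.concatenate([none, combo, scale])
groups = ["none"] * 12 + ["combo"] * 12 + ["scale"] * 12
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey post-hoc
```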
2310.19308
Free from Bellman Completeness: Trajectory Stitching via Model-based Return-conditioned Supervised Learning
Off-policy dynamic programming (DP) techniques such as $Q$-learning have proven to be important in sequential decision-making problems. In the presence of function approximation, however, these techniques often diverge due to the absence of Bellman completeness in the function classes considered, a crucial condition for the success of DP-based methods. In this paper, we show how off-policy learning techniques based on return-conditioned supervised learning (RCSL) are able to circumvent these challenges of Bellman completeness, converging under significantly more relaxed assumptions inherited from supervised learning. We prove there exists a natural environment in which if one uses two-layer multilayer perceptron as the function approximator, the layer width needs to grow linearly with the state space size to satisfy Bellman completeness while a constant layer width is enough for RCSL. These findings take a step towards explaining the superior empirical performance of RCSL methods compared to DP-based methods in environments with near-optimal datasets. Furthermore, in order to learn from sub-optimal datasets, we propose a simple framework called MBRCSL, granting RCSL methods the ability of dynamic programming to stitch together segments from distinct trajectories. MBRCSL leverages learned dynamics models and forward sampling to accomplish trajectory stitching while avoiding the need for Bellman completeness that plagues all dynamic programming algorithms. We propose both theoretical analysis and experimental evaluation to back these claims, outperforming state-of-the-art model-free and model-based offline RL algorithms across several simulated robotics problems.
Zhaoyi Zhou, Chuning Zhu, Runlong Zhou, Qiwen Cui, Abhishek Gupta, Simon Shaolei Du
2023-10-30T07:03:14Z
http://arxiv.org/abs/2310.19308v2
# Free from Bellman Completeness: Trajectory Stitching via Model-based Return-conditioned Supervised Learning ###### Abstract Off-policy dynamic programming (DP) techniques such as \(Q\)-learning have proven to be important in sequential decision-making problems. In the presence of function approximation, however, these techniques often diverge due to the absence of Bellman completeness in the function classes considered, a crucial condition for the success of DP-based methods. In this paper, we show how off-policy learning techniques based on return-conditioned supervised learning (RCSL) are able to circumvent these challenges of Bellman completeness, converging under significantly more relaxed assumptions inherited from supervised learning. We prove there exists a natural environment in which if one uses two-layer multilayer perceptron as the function approximator, the layer width needs to grow _linearly_ with the state space size to satisfy Bellman completeness while a constant layer width is enough for RCSL. These findings take a step towards explaining the superior empirical performance of RCSL methods compared to DP-based methods in environments with near-optimal datasets. Furthermore, in order to learn from sub-optimal datasets, we propose a simple framework called MBRCSL, granting RCSL methods the ability of dynamic programming to stitch together segments from distinct trajectories. MBRCSL leverages learned dynamics models and forward sampling to accomplish trajectory stitching while avoiding the need for Bellman completeness that plagues all dynamic programming algorithms. We propose both theoretical analysis and experimental evaluation to back these claims, outperforming state-of-the-art model-free and model-based offline RL algorithms across several simulated robotics problems.1 Footnote 1: Our code is available at https://github.com/zhaoyizhoul123/mbrcsl. Part of the work was done while Zhaoyi Zhou was visiting the University of Washington. ## 1 Introduction The literature in reinforcement learning (RL) has yielded a promising set of techniques for sequential decision-making, with several results showing the ability to synthesize complex behaviors in a variety of domains (Mnih et al., 2013; Lillicrap et al., 2015; Haarnoja et al., 2018), even in the absence of known models of dynamics and reward in the environment. Of particular interest in this work is a class of reinforcement learning algorithms known as off-policy RL algorithms (Fujimoto et al., 2018). Off-policy RL algorithms are those that are able to learn optimal behavior from an "off-policy" dataset - i.e. a potentially suboptimal dataset from some other policy. Moreover, off-policy RL methods are typically able to "stitch" together sub-trajectories across many different collecting trajectories, thereby showing the ability to synthesize behavior that is better than any trajectories present in the dataset (Degris et al., 2012; Kumar et al., 2019). The predominant class of off-policy RL algorithms builds on the idea of dynamic programming (DP) (Bellman, 1954; 1957) to learn state-action \(Q\)-functions. These techniques iteratively update the _\(Q\)-function_ by applying the _Bellman operator_ to the current \(Q\)-function. The step-wise update of DP enables the advantages listed above - usage of prior suboptimal data and synthesis of optimal policies by stitching together multiple suboptimal trajectories. 
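To make the dynamic-programming update concrete before discussing its pitfalls, here is a minimal sketch of the finite-horizon Bellman backup in the tabular case (the tiny randomly generated MDP is a placeholder, not one of the paper's constructions):

```python
import numpy as np

S, A, H = 4, 2, 3                            # states, actions, horizon
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(S), size=(S, A))   # T[s, a] = P(s' | s, a)
r = rng.uniform(0, 1, size=(S, A))           # reward table r(s, a)

# Backward induction: Q_h = r + E_{s'}[ max_{a'} Q_{h+1}(s', a') ].
Q = np.zeros((H + 2, S, A))                  # Q_{H+1} = 0 by convention
for h in range(H, 0, -1):
    V_next = Q[h + 1].max(axis=1)            # max over a' at step h+1
    Q[h] = r + T @ V_next                    # exact Bellman backup

print(Q[1])                                  # optimal Q at the first step
```

With function approximation, each backup must instead be re-fit by regression onto the function class, which is precisely where the Bellman completeness requirement discussed below enters.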
On the other hand, these off-policy RL algorithms based on dynamic programming can only guarantee convergence under the condition of _Bellman completeness_, a strong and nuanced condition on the function approximation that ensures the projection error during iterative DP can be made arbitrarily low. Unlike supervised learning, designing a function class that satisfies Bellman completeness is particularly challenging, as simply enlarging the function class does not guarantee Bellman completeness. This raises the question - _can we design off-policy reinforcement learning algorithms that are free from the requirements of Bellman completeness, while retaining the advantages of DP-based off-policy RL?_ To this end, we study a class of off-policy RL algorithms referred to as return-conditioned supervised learning (RCSL) (Schmidhuber, 2019; Srivastava et al., 2019). The key idea of RCSL is to learn a return-conditioned distribution of actions in each state by directly applying supervised learning on the trajectories in the dataset, achieving satisfactory performance by simply conditioning the policy on high desired returns. These techniques have seen recent prominence due to their surprisingly good empirical performance, simplicity and stability (Kumar et al., 2019). While RCSL has empirically shown good results, the reason behind these successes in many empirical domains is still not well understood. In this work, we show how RCSL can circumvent the requirement for Bellman completeness. We theoretically analyze how RCSL benefits from not requiring Bellman completeness and thus can provably outperform DP-based algorithms with sufficient data coverage. While RCSL is able to circumvent the requirement for Bellman completeness, it comes with the requirement for data coverage. As prior work has shown, RCSL has a fundamental inability to stitch trajectories (Brandfonbrener et al., 2022), whereas stitching is a natural advantage of using DP-based off-policy RL. As long as the dataset does not cover optimal trajectories, RCSL methods are not guaranteed to output the optimal policy. To remedy this shortcoming, while preserving the benefit of avoiding Bellman completeness, we propose a novel algorithm - model-based RCSL (**MBRCSL**, cf. Figure 1). The key insight in **MBRCSL** is that RCSL style methods simply need data coverage, and can take care of optimizing for optimal behavior when the data provided has good (albeit skewed) coverage. Under the Markovian assumption that is made in RL, we show data coverage can be achieved by simply sampling from the behavior policy under a learned model of dynamics. We show how a conjunction of two supervised learning problems - learning a model of the dynamics and of the behavior policy - can provide the data coverage needed for the optimality of RCSL. This leads to a method that both avoids Bellman completeness and is able to perform stitching and obtain the benefits of DP. Concretely, the contributions of this work are: 1. We theoretically show how RCSL can outperform DP when Bellman completeness is not satisfied. 2. We theoretically show the shortcomings of RCSL when the off-policy dataset does not cover optimal trajectories. 3. We propose a novel technique - MBRCSL - that can achieve trajectory stitching without suffering from the challenge of Bellman completeness. 4. We empirically show how MBRCSL can outperform both RCSL and DP-based off-policy RL algorithms in several domains in simulation. Figure 1: **Model-based return-conditioned supervised learning** (MBRCSL). 
Model-based rollouts augment the dataset with potentially optimal trajectories, enabling trajectory stitching for RCSL while avoiding Bellman completeness requirements. ### Related Work **Function Approximation and Bellman Completeness.** Function approximation is required when the state-action space is large. A line of works has established that even if the optimal \(Q\)-function is realizable, the agent still needs an exponential number of samples (Du et al., 2019; Wang et al., 2020, 2021; Weisz et al., 2021). Bellman completeness is a stronger condition than realizability, but it permits sample-efficient algorithms and has been widely adopted in theoretical studies (Munos and Szepesvari, 2008; Chen and Jiang, 2019; Jin et al., 2020; Zanette et al., 2020; Xie et al., 2021). 
Note that for any set \(\mathcal{X}\), we use \(\Delta(\mathcal{X})\) to denote the probability simplex over \(\mathcal{X}\). In case of deterministic environments, we abuse the notation and write \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\), \(r:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\). **Trajectories.** A trajectory is a tuple of states, actions and rewards from step 1 to \(H\): \(\tau=(s_{1},a_{1},r_{1},\cdots,s_{H},a_{H},r_{H})\). Define the return-to-go (RTG) of a trajectory \(\tau\) at timestep \(h\) as \(g_{h}=\sum_{h=h}^{H}r_{h^{\prime}}\). The alternative representation of a trajectory is \(\tau=(s_{1},g_{1},a_{1};\cdots;s_{H},g_{H},a_{H})\). We abuse the notation and call the sub-trajectory \(\tau[i:j]=(s_{i},g_{i},a_{i};\cdots;s_{j},g_{j},a_{j})\) as a trajectory. Finally, we denote \(\mathrm{T}\) as the set of all possible trajectories. **Policies.** A policy \(\pi:\mathrm{T}\rightarrow\Delta(\mathcal{A})\) is the distribution of actions conditioned on the trajectory as input. If the trajectory contains RTG \(g_{h}\), then the policy is a _return-conditioned policy_. The Markovian property of MDPs ensures that the optimal policy falls in the class of _Markovian policies_, \(\Pi^{\mathsf{M}}=\{\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\}\). A policy class with our special interest is _Markovian return-conditioned policies_, \(\Pi^{\mathsf{M}\mathrm{RC}}=\{\pi:\mathcal{S}\times\mathbb{R}\rightarrow\Delta( \mathcal{A})\}\), i.e., the class of one-step return-conditioned policies. **Value and \(Q\)-functions.** We define value and \(Q\)-functions for policies in \(\Pi^{\mathsf{M}\mathrm{D}}\). Let \(\mathbb{E}_{\pi}[\cdot]\) denote the expectation with respect to the random trajectory induced by \(\pi\in\Pi^{\mathsf{M}\mathrm{D}}\) in the MDP \(\mathcal{M}\), that is, \(\tau=(s_{1},a_{1},r_{1},\ldots,s_{H},a_{H},r_{H})\), where \(s_{1}\sim\mu(\cdot)\), \(a_{h}\sim\pi(\cdot|s_{h})\), \(r_{h}=r^{u}(s_{h},a_{h})\), \(s_{h+1}\sim\mathcal{T}(\cdot|s_{h},a_{h})\) for any \(h\in[H]\). We define the value and \(Q\)-functions using the RTG \(g_{h}\): \[V_{h}^{\pi}(s)=\mathbb{E}_{\pi}\left[g_{h}|s_{h}=s\right],\ Q_{h}^{\pi}(s,a)= \mathbb{E}_{\pi}\left[g_{h}|s_{h}=s,a_{h}=a\right].\] For convenience, we also define \(V_{H+1}^{T}(s)=0\), \(Q_{H+1}^{T}(s,a)=0\), for any policy \(\pi\) and \(s,a\). **Expected RTG.** Return-conditioned policies needs an initial desired RTG \(\tilde{g}_{1}\) to execute. The _return-conditioned evaluation process_ for return-conditioned policies is formulated in Algorithm 1. We know that \(\tilde{g}_{1}\) is not always equal to \(g\), the real RTG. Hence to evaluate the performance of \(\pi\in\Pi^{\text{MRC}}\), we denote by \(J(\pi,\tilde{g}_{1})\) the expected return \(g\) under \(\pi\) with initial desired RTG \(\tilde{g}_{1}\). **Offline RL.** In offline RL, we only have access to some fixed dataset \(\mathcal{D}=\left\{\tau^{1},\tau^{2},\cdots,\tau^{K}\right\}\), where \(\tau^{i}=(s_{1}^{i},a_{1}^{i},r_{1}^{i},\cdots,s_{H}^{i},a_{H}^{i},r_{H}^{i}) \left(i\in[K]\right)\) is a trajectory sampled from the environment. We abuse the notation and define \((s,a)\in\mathcal{D}\) if and only if there exists some \(i\in[K]\) and \(h\in[H]\) such that \((s_{h}^{i},a_{h}^{i})=(s,a)\). We say dataset \(\mathcal{D}\)_uniformly covers_\(\mathcal{M}\) if \((s,a)\in\mathcal{D}\) for any \(s\in\mathcal{S}\), \(a\in\mathcal{A}\). 
For any deterministic optimal policy \(\pi^{*}:\mathcal{S}\rightarrow\mathcal{A}\) of \(\mathcal{M}\), we say dataset \(\mathcal{D}\)_covers_\(\pi^{*}\) if \((s,\pi^{*}(s))\in\mathcal{D}\) for any \(s\in\mathcal{S}\). The goal of offline RL is to derive a policy using \(\mathcal{D}\) which maximizes the expected total reward. For Markovian policy \(\pi\in\Pi^{\text{Mc}}\), the goal is \(g^{*}=\max_{\pi}\sum_{s}\mu(s)V_{1}^{\pi}(s)\). For Markovian return-conditioned policy \(\pi\in\Pi^{\text{MRC}}\), the goal is \(\max_{\pi}J(\pi,g^{*})\). ``` 1:Input: Return-conditioned policy \(\pi:\mathcal{S}\times\mathbb{R}\rightarrow\Delta(\mathcal{A})\), desired RTG \(\tilde{g}_{1}\). 2:Observe \(s_{1}\). 3:for\(h=1,2,\cdots,H\)do 4: Sample action \(a_{h}\sim\pi(\cdot|s_{h},\tilde{g}_{h})\). 5: Observe reward \(r_{h}\) and next state \(s_{h+1}\). 6: Update RTG \(\tilde{g}_{h+1}=\tilde{g}_{h}-r_{h}\). 7:Return: Actual RTG \(g=\sum_{h=1}^{H}r_{h}\). ``` **Algorithm 1** Return-conditioned evaluation process ### Realizability and Bellman Completeness of Q-Learning Many exisiting RL algorithms, such as \(Q\)-learning and actor-critic, apply dynamic-programming (DP) technique (Bertsekas & Tsitsiklis, 1996) to estimate value functions or \(Q\)-functions. For instance, \(Q\)-learning updates estimated (optimal) \(Q\)-function \(\widehat{Q}_{h}(s,a)\) by \[\widehat{Q}_{h}\leftarrow\mathcal{B}\widehat{Q}_{h+1}=\{r(s,a)+\mathbb{E}_{s ^{\prime}\sim\mathcal{T}(\cdot|s,a)}\sup_{a^{\prime}}\widehat{Q}_{h+1}(s^{ \prime},a^{\prime})\}_{(s,a)\in\mathcal{S}\times\mathcal{A}}. \tag{1}\] where \(\mathcal{B}\) is called the _Bellman operator_(Sutton & Barto, 2018). In practice, \(\widehat{Q}\) is represented using some function approximation class such as multilayer perceptron (MLP). Two crucial conditions for DP-based methods with function approximation to succeed are realizability (Definition 1) and to be Bellman complete (Definition 2) (Munos, 2005; Zanette, 2023). **Definition 1** (Realizability).: _The function class \(\mathcal{F}\) contains the optimal \(Q\)-function: \(Q^{*}\in\mathcal{F}\)._ **Definition 2** (Bellman complete).: _A function approximation class \(\mathcal{F}\) is Bellman complete under Bellman operator \(\mathcal{B}\) if \(\max_{Q\in\mathcal{F}}\min_{Q^{\prime}\in\mathcal{F}}\|Q^{\prime}-BQ\|=0\)._ **Remark 1**.: _It has been shown that realizability alone is not enough, as Weisz et al. (2021); Wang et al. (2020) show that it requires an exponential number of samples to find a near-optimal policy._ ### Return-Conditioned Supervised Learning RCSL methods learn a return-conditioned policy \(\pi^{\text{cx}}(a_{h}|\tau[h-\text{cx}+1:h-1],s_{h},g_{h})\) from \(\mathcal{D}\) via supervised learning, where \(\text{cx}\) is the policy context. We focus on return-conditioned policies with context \(1\), i.e., policy of the form \(\pi^{1}(a_{h}|s_{h},g_{h})\), because the optimal policy can be represented by a return-conditioned policy of context \(1\). Next, we introduce the _RTG dataset_, summarizes the information that RCSL algorithm relies on. **Definition 3** (Rtg dataset).: _Given a collection of trajectories \(\mathcal{D}=\left\{\tau^{1},\tau^{2},\cdots,\tau^{K}\right\}\), an RTG dataset for \(\mathcal{D}\) is defined by \(\mathcal{D}^{1}:=\left\{(s_{t}^{k},g_{t}^{k},a_{t}^{k})\mid 1\leq k\leq K,1\leq t \leq H\right\}.\)_ We are now ready to define the RCSL framework. 
**Definition 4** (RCSL framework).: _An offline RL algorithm belongs to the RCSL framework if it takes an RTG dataset as input and outputs a return-conditioned policy._ ## 3 Understanding the Strengths and Weaknesses of Return-Conditioned Supervised Learning Methods In this section, we first provide a theoretical discussion of why RCSL methods can outperform DP methods in deterministic environments, when there is sufficient data coverage. We then give rigorous theoretical treatments to explain why RCSL-style methods cannot do "trajectory stitching". ### Insight 1: RCSL can outperform DP in deterministic environments In this section, we show that RCSL can achieve better performance than DP-based algorithms, as long as the dataset contains an expert (optimal) trajectory. This stems from the fact that DP algorithms need to satisfy Bellman completeness when function approximation is used, while RCSL algorithms only need to represent the optimal policy, and hence only require realizability. To explain this point, we choose one representative implementation from each of the RCSL and DP-based methods to make a comparison. For the former, we design "MLP-RCSL", a simple RCSL algorithm that learns a deterministic policy. Suppose that the action space \(\mathcal{A}\) is a subset of \(\mathbb{R}^{d}\). MLP-RCSL trains a policy network \(\widehat{\pi}_{\theta}(s,g)\in\mathbb{R}^{d}\) by minimizing the mean squared error (MSE) of the action (or, more generally, maximizing the log-likelihood): \[L(\theta)=\mathbb{E}_{(s,g,a)\sim\mathcal{D}^{1}}\left\|\widehat{\pi}_{\theta}(s,g)-a\right\|^{2}. \tag{2}\] We add a projection \(f(\widehat{\pi}_{\theta}(s,g))=\arg\min_{a\in\mathcal{A}}\|a-\widehat{\pi}_{\theta}(s,g)\|\) at the output of the policy network, so that we are ensured to get a valid action. We choose \(Q\)-learning as a representative of DP-based methods, where the \(Q\)-function is learned by a \(Q\)-network, denoted by \(\widehat{Q}_{\phi}(s,a)\). To make the comparison fair, both \(\widehat{\pi}_{\theta}(s,g)\) and \(\widehat{Q}_{\phi}(s,a)\) are implemented with a _two-layer_ MLP network with ReLU activation. **Remark 2**.: _Often DP-based offline RL methods aim to penalize out-of-distribution actions (in the presence of incomplete coverage) via techniques such as conservatism (Kumar et al., 2020) or constrained optimization (Wu et al., 2019). However, we do not need to consider these techniques in our constructed environment, as conservatism is not needed when the dataset has full coverage._ We first analyze when MLP-RCSL and \(Q\)-learning satisfy their own necessary requirements to succeed, in terms of the hidden layer size needed. To be specific, we set the requirements as: * MLP-RCSL: The policy network must be able to represent the optimal policy. * \(Q\)-learning: The \(Q\)-network must be able to represent the optimal \(Q\)-function and satisfy Bellman completeness. **Theorem 1**.: _There exists a series of MDPs and associated datasets \(\{\mathcal{M}^{u}=(H^{u},\mathcal{S}^{u},\mathcal{A},\mu,\mathcal{T}^{u},r^{u}),\mathcal{D}^{u}\}_{u\geq 1}\), such that the following properties hold:_ 1. \(|\mathcal{S}^{u}|=\Theta(u)\)_;_ 2. \(\mathcal{D}^{u}\) _uniformly covers_ \(\mathcal{M}^{u}\)_;_ 3. _There exists a series of MLP-RCSL policy networks (two-layer MLPs)_ \(\{\pi^{u}_{\text{RC}}\}_{u\geq 1}\) _such that_ \(\pi^{u}_{\text{RC}}\) _can represent the optimal policy for_ \(\mathcal{M}^{u}\) _with_ \(O(1)\) _neurons in the hidden layer;_ 4. 
_For_ \(u\geq 1\)_, any two-layer MLP_ \(Q\)_-network representing the optimal_ \(Q\)_-function and satisfying Bellman completeness simultaneously must have_ \(\Omega(u)\) _neurons in the hidden layer._ We also provide simulations to verify the theorem (cf. Appendix A.1). **Remark 3**.: _From Theorem 1, we see that the model size of \(Q\)-learning must grow linearly with the state space size. On the other hand, RCSL can scale to large state spaces. Besides, we also point out that the success of RCSL is based on the existence of an expert trajectory._ **Proof idea.** The detailed proof of Theorem 1 is deferred to Appendix A. We construct a class of MDPs called LinearQ (Figure 2), such that the optimal policy as well as the optimal \(Q\)-function can be represented by a linear combination of a constant number of ReLU neurons, while the reward function must be represented with \(\Omega(|\mathcal{S}^{u}|)\) ReLU neurons, because it has that many distinct linear pieces. We denote the all-zero MLP as \(\widehat{Q}_{\phi(0)}\), which satisfies \(\widehat{Q}_{\phi(0)}(s,a)=0\) for all \(s\in\mathcal{S}^{u}\) and \(a\in\mathcal{A}\). We see that applying the Bellman operation (cf. Equation 1) to the all-zero MLP gives \(\mathcal{B}\widehat{Q}_{\phi(0)}(s,a)=r^{u}(s,a),\forall s\in\mathcal{S}^{u},a\in\mathcal{A}\). Thus \(\Omega(|\mathcal{S}^{u}|)\) hidden neurons are needed to satisfy Bellman completeness. ### Insight 2: RCSL cannot perform trajectory-stitching It is generally believed that RCSL cannot perform trajectory stitching (Brandfonbrener et al., 2022; Wu et al., 2023). We provide two theorems to rigorously show the sub-optimality of RCSL algorithms, even when the environment has _deterministic transitions_ and the dataset satisfies _uniform coverage_. These two theorems present different rationales for the failure of trajectory stitching. Theorem 2 shows that Markovian RCSL (context length equal to \(1\)) outputs a policy with a constant sub-optimality gap. Theorem 3 states that any decision transformer (using cross-entropy loss in RCSL) with any context length fails to stitch trajectories with non-zero probability. **Theorem 2**.: _For any Markovian RCSL algorithm, there always exists an MDP \(\mathcal{M}\) and a (sub-optimal) offline dataset \(\mathcal{D}\) such that_ 1. \(\mathcal{M}\) _has deterministic transitions;_ 2. \(\mathcal{D}\) _uniformly covers_ \(\mathcal{M}\)_;_ 3. _The output return-conditioned policy_ \(\pi\) _satisfies_ \(|g^{*}-J(\pi,g^{*})|\geq 1/2\) _almost surely._ The key idea is that the information of the reward function cannot be recovered from the RTG dataset \(\mathcal{D}^{1}\). We are able to construct two MDPs with different reward functions but the same RTG dataset. Therefore, RCSL will fail on at least one of them. The proof is deferred to Appendix B.1. **Theorem 3**.: _Under the assumption that a decision transformer outputs a mixture of policies for trajectories in the dataset for an unseen trajectory (details in Assumption 1), there exists an MDP \(\mathcal{M}\) and a (sub-optimal) offline dataset \(\mathcal{D}\) such that_ 1. \(\mathcal{M}\) _has deterministic transitions;_ 2. \(\mathcal{D}\) _uniformly covers_ \(\mathcal{M}\)_;_ 3. _Any decision transformer_ \(\pi\) _satisfies_ \(J(\pi,g^{*})<g^{*}\)_._ The key idea is that the cross-entropy loss of decision transformers forces a non-zero probability of imitating the trajectories in the dataset. We construct a dataset containing two sub-optimal trajectories with the same total reward, so that their first-step information \((s_{1},g_{1})\) is the same. 
We make their first-step actions \(a_{1}\) and \(a^{\prime}_{1}\) different, so the RTG dataset contains two trajectories \((s_{1},g_{1},a_{1})\) and \((s_{1},g_{1},a^{\prime}_{1})\). Under the assumption on generalization ability to unseen trajectories (Assumption 1), the decision transformer will randomly pick actions at the first step \((s_{1},g^{*}>g_{1})\). Figure 2: Left: Illustration of the LinearQ transition dynamics. This figure is valid for \(u\) being an even number. The transitions are plotted in red and blue arrows, where red arrows stand for \(a=0\) and blue ones for \(a=1\). The rewards are omitted. Upper right: Illustration of the optimal \(Q\)-function of LinearQ with respect to states, where the size parameter is \(u=4\). Realizability can be satisfied with a constant number of hidden neurons. Lower right: Illustration of the reward function of LinearQ with respect to states, where the size parameter is \(u=4\). \(\Omega(|\mathcal{S}^{u}|)\) hidden neurons are needed to represent the reward. ## 4 Model-Based Return-Conditioned Supervised Learning In this section, we ask whether we can avoid Bellman-completeness requirements, while still being able to stitch together sub-optimal data. To this end, we propose model-based return-conditioned supervised learning (MBRCSL) (cf. Algorithm 2). This technique brings the benefits of dynamic programming to RCSL without ever having to do dynamic programming backups, just forward simulation. As demonstrated in Section 3.1, when there is sufficient data coverage, RCSL can outperform DP, but typically this data coverage assumption does not hold. The key idea behind MBRCSL is to augment the offline dataset with trajectories that themselves combine sub-components of many different trajectories (i.e., "stitching"), on which we can then apply RCSL to acquire performance that is better than the original dataset. Since we are in the off-policy setting and want to avoid the requirement for additional environment access, we can instead learn an approximate dynamics model from the off-policy dataset, as well as an expressive multimodal behavior policy (Line 2), both using standard maximum likelihood estimation (MLE). Then, by simply rolling out (sampling i.i.d. and executing sequentially) the behavior policy on the approximate dynamics model, we can generate "stitched" trajectories (Lines 4-10). In many cases, this generated data provides coverage over potentially optimal trajectory and return data, thereby enabling RCSL to accomplish better-than-dataset performance. We aggregate optimal trajectories into a new rollout dataset by picking out trajectories with returns larger than the maximum return in the dataset. Finally, applying RCSL on the augmented rollout dataset enables trajectory stitching without the need to satisfy Bellman completeness (Line 11); a minimal sketch of this loop is given below. Another advantage of the MBRCSL framework is its modular architecture, i.e., the dynamics model, behavior policy, and output policy architectures can be designed independently. This allows us to leverage the most performant model classes for each individual sub-component. The precise details of the architectures and optimization algorithms that empirically work best in each domain are left to the experiments (Section 5). ## 5 Experiments In our experimental evaluation, we are primarily focused on evaluating: (a) How does MBRCSL perform on offline RL tasks with sub-optimal datasets, in comparison with model-free and model-based offline RL baselines? Moreover, we hope to understand: (b) How does each component in MBRCSL affect overall performance? 
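To make the loop of Section 4 concrete before turning to the evaluation, the following is a minimal, self-contained tabular sketch of MBRCSL. The toy MDP, the tabular estimators, and all helper names (`fit_mle`, `rollout`) are our own illustrative choices, not from the paper (which uses neural dynamics models and a diffusion behavior policy); in the tabular case, sampling uniformly from the stored transition lists is exactly sampling from the MLE categorical estimates.

```python
import random
from collections import defaultdict

def fit_mle(dataset):
    """Fit dynamics/reward and behavior policy by MLE from (s, a, r, s') steps."""
    trans = defaultdict(list)   # (s, a) -> observed (reward, next_state) pairs
    behav = defaultdict(list)   # s -> observed actions
    for traj in dataset:
        for s, a, r, s2 in traj:
            trans[(s, a)].append((r, s2))
            behav[s].append(a)
    return trans, behav

def rollout(trans, behav, s0, horizon, rng):
    """Execute the behavior policy inside the learned model (forward simulation only)."""
    traj, s = [], s0
    for _ in range(horizon):
        if s not in behav:
            break                          # the model knows nothing past this state
        a = rng.choice(behav[s])
        r, s2 = rng.choice(trans[(s, a)])
        traj.append((s, a, r, s2))
        s = s2
    return traj

# Two sub-optimal trajectories; stitching them (b at s0, then a at s1) earns return 2.
dataset = [[("s0", "a", 0.0, "s1"), ("s1", "a", 1.0, "s2")],
           [("s0", "b", 1.0, "s1"), ("s1", "b", 0.0, "s3")]]
best_in_data = max(sum(r for _, _, r, _ in t) for t in dataset)    # = 1.0

rng = random.Random(0)
trans, behav = fit_mle(dataset)
rollouts = [rollout(trans, behav, "s0", horizon=2, rng=rng) for _ in range(200)]
good = [t for t in rollouts if sum(r for _, _, r, _ in t) > best_in_data]

# RCSL on the filtered rollouts: a tabular return-conditioned policy.
rcsl = defaultdict(list)                   # (state, return-to-go) -> actions
for traj in good:
    g = sum(r for _, _, r, _ in traj)
    for s, a, r, _ in traj:
        rcsl[(s, g)].append(a)
        g -= r

print(rcsl[("s0", 2.0)][0], rcsl[("s1", 1.0)][0])   # -> b a (the stitched policy)
```

Note that no Bellman backup is ever performed: the rollout step only needs forward samples from the learned model, and the final conditioning step is pure supervised learning on the filtered rollouts.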
Question (b) is investigated in Appendix C with a complete ablation study. ### Evaluation on Point Maze Environments We first evaluate the methods on a custom Point Maze environment built on top of D4RL Maze (Fu et al., 2020). As illustrated in Fig. 3, this task requires the agent to navigate a ball from the initial location (denoted \(S\)) to a designated goal (denoted \(G\)). To investigate if the methods can do stitching, we construct an offline dataset consisting of two kinds of suboptimal trajectories in equal number: * A detour trajectory \(S\to A\to B\to G\) that reaches the goal in a suboptimal manner. * A trajectory for stitching: \(S\to M\). This trajectory has very low return, but is essential for getting the optimal policy. The optimal trajectory should be a stitching of the two trajectories in the dataset (\(S\to M\to G\)). The trajectories are generated by a scripted policy similar to that in the D4RL dataset (Fu et al., 2020). The resulting dataset has average return 40.7 and highest return 71.8. To answer question (a), we compare MBRCSL with several offline RL baselines. (1) **CQL** (Kumar et al., 2020) is a model-free offline RL method that learns a conservative Q function penalizing high values for out-of-distribution actions. (2) **MOPO** (Yu et al., 2020) is a model-based offline RL method that learns an ensemble dynamics model and performs actor-critic learning using short-horizon model rollouts. (3) **COMBO** (Yu et al., 2021) combines CQL and MOPO by jointly learning a conservative Q function and a dynamics model. (4) **MOReL** (Kidambi et al., 2020) first learns a pessimistic MDP from the dataset and then learns a near-optimal policy in the pessimistic MDP. (5) **Decision Transformer (DT)** (Chen et al., 2021) is an RCSL method that leverages a transformer model to generate return-conditioned trajectories. (6) **%BC** performs behavior cloning on trajectories from the dataset with top 10% return. We implement MBRCSL with the following architectures: * Dynamics model learns a Gaussian distribution over the next state and reward: \(\widehat{T}_{\theta}(s_{t+1},r_{t}|s_{t},a_{t})=\mathcal{N}\left((s_{t+1},r_{ t});\mu_{\theta}(s_{t},a_{t}),\Sigma_{\theta}(s_{t},a_{t})\right)\). We learn an ensemble of dynamics models, in which each model is trained independently via maximum likelihood. This is the same as many existing model-based approaches such as MOPO (Yu et al., 2020) and COMBO (Yu et al., 2021). * We learn an approximate behavior policy with a diffusion policy architecture (Chi et al., 2023), which is able to capture multimodal action distributions. We fit the noise model by minimizing mean squared error (MSE). * The return-conditioned policy is represented by an MLP network and trained by minimizing MSE. As shown in Table 1, MBRCSL outperforms all baselines on the Point Maze environment, successfully stitching together suboptimal trajectories. COMBO, MOPO, MOReL and CQL struggle to recover the optimal trajectory, potentially due to challenges with Bellman completeness. They also have large variance in performance. In addition, we show in Appendix D that the performance of DP-based offline RL methods cannot be improved by simply increasing model size. DT and %BC have low variance, but they cannot achieve higher returns than that of the dataset due to their failure to perform stitching. \begin{table} \begin{tabular}{|l||c|c|c|c|c|c|c|} \hline Task & MBRCSL (ours) & COMBO & MOPO & MOReL & CQL & DT & \%BC \\ \hline Pointmaze & **91.5\(\pm\)7.1** & 56.6\(\pm\)50.1 & 77.9\(\pm\)41.9 & 26.4\(\pm\)28.1 & 34.8\(\pm\)24.9 & 57.2\(\pm\)4.1 & 54.0\(\pm\)9.2 \\ \hline \end{tabular} \end{table} Table 1: Results on Point Maze. We record the mean and standard deviation (STD) over 4 seeds. The dataset has average return 40.7 and highest return 71.8. Figure 3: Point Maze illustration. Left and middle: Simulation images at the initial and final timesteps. The green point represents the ball to control, and the red point represents the target position. Right: Maze and dataset description. The initial point is \(S\) and the goal is \(G\). The dataset consists of two trajectories: \(S\to A\to B\to G\) and \(S\to M\). ### Evaluation on Simulated Robotics Environments We also evaluate the methods on three simulated robotic tasks from Singh et al. (2020). 
In each task, a robot arm is required to complete some goal by accomplishing two phases of actions, specified as follows: * **(PickPlace)** (1) Pick an object from the table; (2) Place the object into a tray. * **(ClosedDrawer)** (1) Open a drawer; (2) Grasp the object in the drawer. * **(BlockedDrawer)** (1) Close a drawer on top of the target drawer, and then open the target drawer; (2) Grasp the object in the target drawer. We take images from Singh et al. (2020) for environment illustration (cf. Figure 4). For each task, the associated dataset consists of two kinds of trajectories: (i) Prior trajectories that only perform phase 1; (ii) Task trajectories which only perform phase 2, starting from the condition that phase 1 is completed. For PickPlace, ClosedDrawer and BlockedDrawer, the prior dataset completes phase 1 with probability about 40%, 70% and 90%, respectively. The task dataset has a success rate of about 90% for PickPlace and 60% for both ClosedDrawer and BlockedDrawer. The agent must stitch the prior and task trajectory together to complete the task. All three tasks have sparse reward, with return 1 for completing phase 2, and 0 otherwise. Compared to Point Maze, the simulated robotic tasks require more precise dynamics estimates. We choose **CQL** (Kumar et al., 2020), **COMBO** (Yu et al., 2021) and **Decision Transformer (DT)** (Chen et al., 2021) as baselines of model-free, model-based and RCSL methods, respectively. To demonstrate the performance improvement of the output policy over the behavior policy in MBRCSL, we also tested behavior cloning implemented with a diffusion policy (denoted by **BC**), which has the same architecture as the behavior policy in MBRCSL. To capture the more complicated dynamics, we implemented the dynamics model and output return-conditioned policy with the following architectures: * Dynamics model is represented with a transformer model (Vaswani et al., 2017) \(\widehat{T}_{\theta}(s^{\prime},r|s,a)\), which is trained to minimize cross-entropy loss. * The output return-conditioned policy is represented by an autoregressive model (Korenkevych et al., 2019) trained by MLE. \begin{table} \begin{tabular}{|l||c|c|c|c|c|} \hline Task & MBRCSL (ours) & CQL & COMBO & DT & BC \\ \hline PickPlace & **0.48\(\pm\)0.04** & 0.22\(\pm\)0.35 & 0\(\pm\)0 & 0\(\pm\)0 & 0.07\(\pm\)0.03 \\ ClosedDrawer & **0.51\(\pm\)0.12** & 0.11\(\pm\)0.08 & 0\(\pm\)0 & 0\(\pm\)0 & 0.38\(\pm\)0.02 \\ BlockedDrawer & **0.68\(\pm\)0.09** & 0.34\(\pm\)0.23 & 0\(\pm\)0 & 0\(\pm\)0 & 0.61\(\pm\)0.02 \\ \hline \end{tabular} \end{table} Table 2: Results on simulated robotic tasks. We record the mean and STD of the success rate over 4 seeds. Figure 4: Simulated robotic task illustrations from CoG (Singh et al., 2020). The rows represent PickPlace, ClosedDrawer and BlockedDrawer, respectively, from top to bottom. The dataset for each task consists of two kinds of trajectories: some perform the actions in red arrows and others perform the actions in green arrows. A task is completed if and only if the actions in both red and green arrows are performed successfully. As shown in Table 2, MBRCSL outperforms all baseline methods in all three tasks. CQL achieves nonzero but low success rates and suffers from high variance in two of the tasks. This is possibly because Bellman completeness and sparse reward impede correct \(Q\)-function estimation. COMBO fails to complete the task at all, potentially due to inaccurate model rollouts incurring a higher \(Q\) estimation error. DT fails to extract meaningful behavior because of a lack of expert trajectories. 
We also found that, compared to the behavior policy (BC), which tries to capture all possible behaviors, the output policy (MBRCSL) is able to extract successful trajectories from rollouts, resulting in a higher success rate. ## 6 Conclusion In this work, we conduct a thorough analysis of offline reinforcement learning, theoretically showing how return-conditioned supervised learning algorithms can provide a benefit over typical dynamic programming-based methods for reinforcement learning under function approximation, through the viewpoint of Bellman completeness. We show that with data coverage, RCSL-style algorithms do not require Bellman completeness, simply the easier realizability condition. We then characterize the limitations of RCSL in its inability to accomplish better-than-dataset behavior through trajectory stitching. To remedy this, while still retaining the benefit of avoiding Bellman completeness, we propose a novel algorithm, MBRCSL, that is able to accomplish data coverage through i.i.d. forward sampling using a learned dynamics model, and show its benefits. Notable open problems here concern how to make MBRCSL work in stochastic environments and how to scale these algorithms up to more complex and large-scale environments. ## Acknowledgements SSD acknowledges the support of NSF IIS 2110170, NSF DMS 2134106, NSF CCF 2212261, NSF IIS 2143493, NSF CCF 2019844, NSF IIS 2229881. CZ is supported by the UW-Amazon Fellowship.
2310.01547
On the near-optimality of betting confidence sets for bounded means
Constructing nonasymptotic confidence intervals (CIs) for the mean of a univariate distribution from independent and identically distributed (i.i.d.) observations is a fundamental task in statistics. For bounded observations, a classical nonparametric approach proceeds by inverting standard concentration bounds, such as Hoeffding's or Bernstein's inequalities. Recently, an alternative betting-based approach for defining CIs and their time-uniform variants, called confidence sequences (CSs), has been shown to be empirically superior to the classical methods. In this paper, we provide theoretical justification for this improved empirical performance of betting CIs and CSs. Our main contributions are as follows: (i) We first compare CIs using the values of their first-order asymptotic widths (scaled by $\sqrt{n}$), and show that the betting CI of Waudby-Smith and Ramdas (2023) has a smaller limiting width than existing empirical Bernstein (EB)-CIs. (ii) Next, we establish two lower bounds that characterize the minimum width achievable by any method for constructing CIs/CSs in terms of certain inverse information projections. (iii) Finally, we show that the betting CI and CS match the fundamental limits, modulo an additive logarithmic term and a multiplicative constant. Overall these results imply that the betting CI (and CS) admit stronger theoretical guarantees than the existing state-of-the-art EB-CI (and CS), both in the asymptotic and finite-sample regimes.
Shubhanshu Shekhar, Aaditya Ramdas
2023-10-02T18:42:23Z
http://arxiv.org/abs/2310.01547v2
# On the near-optimality of betting confidence sets for bounded means ###### Abstract Constructing nonasymptotic confidence intervals (CIs) for the mean of a univariate distribution from independent and identically distributed (i.i.d.) observations is a fundamental task in statistics. For bounded observations, a classical nonparametric approach proceeds by inverting standard concentration bounds, such as Hoeffding's or Bernstein's inequalities. Recently, an alternative betting-based approach for defining CIs and their time-uniform variants, called confidence sequences (CSs), has been shown to be empirically superior to the classical methods. In this paper, we provide theoretical justification for this improved empirical performance of betting CIs and CSs. Our main contributions are as follows: **(i)** We first compare CIs using the values of their first-order asymptotic widths (scaled by \(\sqrt{n}\)), and show that the betting CI of Waudby-Smith and Ramdas [2023] has a smaller limiting width than existing empirical Bernstein (EB)-CIs. **(ii)** Next, we establish two lower bounds that characterize the minimum width achievable by any method for constructing CIs/CSs in terms of certain inverse information projections. **(iii)** Finally, we show that the betting CI and CS match the fundamental limits, modulo an additive logarithmic term and a multiplicative constant. Overall these results imply that the betting CI (and CS) admit stronger theoretical guarantees than the existing state-of-the-art EB-CI (and CS), both in the asymptotic and finite-sample regimes. ## 1 Introduction This paper studies the fundamental limits of the width of nonasymptotic confidence intervals (CIs) and confidence sequences (CSs) for the mean \(\mu\) of a distribution \(P^{*}\) on \(\mathcal{X}=[0,1]\) from an i.i.d. sample \(X_{1},X_{2},\ldots,X_{n}\) drawn according to \(P^{*}\). We define these below, but note first that all the methods analyzed in this paper are fully nonparametric, and make no assumptions whatsoever about knowledge of any aspect of the distribution \(P^{*}\) except that it is bounded on \([0,1]\). Throughout this paper, we use \(P^{*}\) to denote the distribution generating the observations \((X_{t})_{t\geq 1}\), and use \(P\) when referring to an arbitrary probability distribution. Formally, a level-\((1-\alpha)\) CI for \(\mu\) is a \(\sigma(X_{1},\ldots,X_{n})\)-measurable subset \(C_{n}\) of the domain \(\mathcal{X}\) that satisfies the coverage guarantee \(\mathbb{P}\left(\mu\in C_{n}\right)\geq 1-\alpha\). CIs can be constructed for datasets of fixed (and non-random) size, and cannot be used for tasks that involve processing streams of observations of possibly data-dependent random size. In such cases, time-uniform variants of CIs, called confidence sequences (CSs), are the appropriate tool for inference. Given \(X_{1},X_{2},\ldots\stackrel{{\text{i.i.d.}}}{{\sim}}P^{*}\), a level-\((1-\alpha)\) CS for the mean \(\mu\) is a collection of sets \(\{C_{t}:t\geq 1\}\) such that \(C_{t}\) is \(\sigma(X_{1},\ldots,X_{t})\) measurable for all \(t\geq 1\), and these sets satisfy the following uniform coverage guarantee: \(\mathbb{P}\left(\forall t\in\mathbb{N}:\mu\in C_{t}\right)\geq 1-\alpha\). The size of a CI, assuming it satisfies the \((1-\alpha)\) coverage guarantee, is a natural metric for evaluating its quality. Since we are concerned with observations on the unit interval, a good measure of the 'size' is the _width_ of the CI, which is the length of the smallest interval containing the CI. 
Formally, the width of \(C_{n}\) is \[|C_{n}|=\inf\{b-a:C_{n}\subset[a,b]\}.\] Note that the width of a CI is a possibly random quantity, and most sensible strategies for constructing CIs usually ensure that it converges to \(0\) almost surely. In fact, for our setting of bounded univariate observations, the width typically converges to \(0\) at a \(\mathcal{O}\left(1/\sqrt{n}\right)\) rate. Another desirable property of CIs, in addition to the order-optimal convergence rate, is _variance adaptivity_. That is, we want the leading constant (in the width of the CI) to be proportional to the standard deviation of the distribution. This means, for example, that the CI for observations drawn from a Bernoulli(0.99) distribution will be significantly tighter than that for a Bernoulli(0.5) distribution (for the same value of \(n\)) for variance-adaptive CIs. A standard approach for constructing non-asymptotic CIs proceeds by inverting finite-sample concentration inequalities. For bounded random variables, the earliest such result was derived by Hoeffding (1963), who constructed a CI with the order-optimal width (i.e., decaying at a \(1/\sqrt{n}\) rate). However, the resulting CI, referred to as Hoeffding's CI, is not variance adaptive, as it uses the worst-case variance of \(1/4\) for random variables supported on the unit interval. This was addressed by CIs based on the inequalities of Bennett (1962) and Bernstein (1927), but these methods require knowledge of the variance (or at least a good upper bound), which limits their applicability. This has led to the construction of so-called empirical Bernstein (EB) inequalities, which manage to replace the true (unknown) variance with their empirical estimates (Audibert et al., 2009; Maurer and Pontil, 2009). Additionally, time-uniform analogs (i.e., confidence sequences) of all these CIs have been recently derived (Howard et al., 2021). In a recent discussion paper, Waudby-Smith and Ramdas (2023) developed a new approach for constructing CIs and CSs for the mean of bounded random variables (both with and without replacement, the latter being a setting we briefly return to at the end of this paper). They consider a family of hypothesis tests, each testing whether the true mean \(\mu\) equals \(m\), for all values of \(m\in[0,1]\). The CI is then defined by inversion (as is standard): it is the set of values of \(m\) for which the corresponding null hypothesis has not yet been rejected. The key novelty is the type of test that is employed, both for the CI and the CS, which involves "testing by betting". In short, one converts the hypothesis test into a game, such that if the null is true, no bettor can make money in that game, but if the null is false, a smart bettor can grow their wealth exponentially. We introduce the details of this strategy in Section 2.2. Using this strategy, the authors constructed two new CIs and CSs: a closed-form "predictable plug-in" empirical Bernstein CI/CS (that they refer to as PrPl-EB CI/CS), as well as non-closed-form (but easy to numerically solve) betting CIs/CSs. In a thorough empirical evaluation spanning a variety of competing methods derived over the last 60 years, these new CIs and CSs were shown to _significantly_ outperform all existing methods. However, while some intuition was provided by the authors, the theoretical results of Waudby-Smith and Ramdas (2023) do not explain this large improvement. 
We were also not able to find any information-theoretic lower bounds on the best possible width achievable by any CI/CS. _The main motivation of our current paper is to close the gap between theory and practice for this basic and well-studied problem._ Overview of results. Our results provide theoretical justification of the benefits of the betting CI over competing methods. Despite the CIs being nonasymptotically valid, it turns out to be quite insightful to first compare their asymptotic behavior. To do this, we use the notion of first-order limiting widths of CIs, denoted by \(\gamma_{1}\). In particular, for any level-\((1-\alpha)\) CI, whose width is a possibly random quantity \(w_{n}\), we define its first-order limiting (half-)width as \[\gamma_{1}=\limsup_{n\to\infty}\sqrt{n}\,w_{n}. \tag{1}\] For all the CIs we consider in this paper, \(\gamma_{1}\) is either a constant, or bounded above by a constant almost surely. We refer to \(\gamma_{1}\) as the 'first-order' limiting (half-)width, because it is solely determined by the dominant \(\mathcal{O}(1/\sqrt{n})\) term in the width. This already leads to a separation amongst the CIs: they do not all have the same first-order limiting width. For those that do, we can also similarly analyze the 'higher-order' limiting widths: the \(\mathcal{O}(1/n)\), \(\mathcal{O}(1/n^{3/2})\), etc. terms; this is not the goal of the paper but we return to this briefly in Section 6.4. Using \(\gamma_{1}\), we can define a total order over the space of CIs: for any two level-\((1-\alpha)\) CIs, denoted by \(\mathrm{CI}^{(a)}\) and \(\mathrm{CI}^{(b)}\), we have \[\mathrm{CI}^{(a)}\leq_{\gamma}\mathrm{CI}^{(b)}\quad\Longleftrightarrow\quad \gamma_{1}^{(a)}\stackrel{{ a.s.}}{{\leq}}\gamma_{1}^{(b)}.\] In words, \(\mathrm{CI}^{(a)}\) is smaller (hence better) than \(\mathrm{CI}^{(b)}\) if the first-order limiting width \(\gamma_{1}^{(a)}\) is smaller than \(\gamma_{1}^{(b)}\). We use a strict inequality \(<_{\gamma}\), and equality \(=_{\gamma}\) in the natural way. In our first main result, Theorem 3.2 in Section 3, we use this ordering to compare betting CIs with some popular existing CIs (namely Hoeffding, Bernstein, MP-EB, and PrPl-EB CIs, formally defined in Section 2). More specifically, we show that the betting CI, with a suitable choice of bets, is a strict improvement over the Hoeffding and MP-EB CIs, and its limiting width is at least as good as that of the Bernstein CI, without the knowledge of the variance of the distribution (needed to construct the Bernstein CI). To summarize, this result implies that \[\text{betting CI}\ \leq_{\gamma}\text{PrPl-EB CI}=_{\gamma}\text{Bernstein CI}<_{\gamma}\text{MP-EB CI}<_{\gamma}^{*}\text{ Hoeffding CI}.\] The asterisk in the last inequality is to indicate that the strict inequality is valid when \(\sigma<(1/2)\sqrt{\log(2/\alpha)/\log(4/\alpha)}\). For example, with \(\alpha=0.05\), this condition reduces to \(\sigma<0.458\) (recall that \(\sigma\) is upper bounded by \(0.5\)). The previous result establishes the benefits of the betting CI in an asymptotic sense, by showing that \(\gamma_{1}^{\text{(bet)}}\) compares favorably with the limiting first-order widths of competing CIs. Our next set of results, presented in Section 4, demonstrates the near-optimality of the betting CI in a non-asymptotic regime. We first establish a fundamental, method-agnostic, lower bound on the width achievable by any valid level-\((1-\alpha)\) CI in Section 4.1. This lower bound is in terms of certain inverse information projection terms. 
Such nonasymptotic width (or, more generally, volume) lower bounds appear to be quite rare in the nonparametric _estimation_ (but not testing) literature, and we have not been able to find analogous references for other problem classes. We then show that the betting CI width nearly matches the lower bound in Section 4.2. To elaborate, for \(P^{*}\in\mathcal{P}([0,1])\) and an \(m\in[0,1]\), define the information projection \(\text{KL}_{\inf}^{+}(P^{*},m)\) (resp. \(\text{KL}_{\inf}^{-}(P^{*},m)\)) as the smallest value of \(d_{\text{KL}}(P^{*},P)\) over all distributions \(P\in\mathcal{P}([0,1])\) with mean at least \(m\) (resp. at most \(m\)). Accordingly, the inverse information projection, \(\text{KL}_{\inf}^{+}(P^{*},\cdot)^{-1}(x)\), is the smallest \(m\) such that \(\text{KL}_{\inf}^{+}(P^{*},m)\) is at least \(x\) (the inverse of \(\text{KL}_{\inf}^{-}\) is defined similarly). The information projection terms \(\text{KL}_{\inf}^{+}\) and \(\text{KL}_{\inf}^{-}\), formally introduced in Definition 4.1, have been used in the multi-armed bandits literature to characterize the optimal instance-dependent cumulative regret (Lai and Robbins, 1985; Honda and Takemura, 2010). Our results provide an analogous instance-dependent characterization of the width of CIs using their inverse. Figure 1: Comparison of the widths of different level-\((1-\alpha)\) CIs of the mean of i.i.d. Bernoulli observations with means \(0.50\) (left) and \(0.95\) (right), with \(\alpha=0.005\). In both cases, the betting CI of Waudby-Smith and Ramdas (2023) dominates other methods, while the plot on the right further differentiates the variance-adaptive CIs from the Hoeffding CI, which fails to exploit the low-variance observations. In the right plot, the CIs can be clipped to lie in \([0,1]\) — we have not done that for higher visual clarity. In particular, if \(w_{n}\) denotes the width of any level-\((1-\alpha)\) CI, and \(w_{n}^{\text{(bet)}}\) denotes the width of the betting CI, then we have the following: \[w_{n}\geq\max\left\{\mathrm{KL}_{\mathrm{inf}}^{+}(P^{*},\cdot)^ {-1}\left(a(n,\alpha)\right)-\mu,\;\mu-\mathrm{KL}_{\mathrm{inf}}^{-}(P^{*}, \cdot)^{-1}\left(a(n,\alpha)\right)\right\},\] \[\text{and}\quad w_{n}^{(\mathrm{bet})}\leq 2\max\left\{ \mathrm{KL}_{\mathrm{inf}}^{+}(P^{*},\cdot)^{-1}\left(b(n,\alpha)\right)-\mu, \;\mu-\mathrm{KL}_{\mathrm{inf}}^{-}(P^{*},\cdot)^{-1}\left(b(n,\alpha)\right) \right\},\] \[\text{with}\quad a(n,\alpha)=\frac{\log\left((1-\alpha)^{1- \alpha}\alpha^{2\alpha-1}\right)}{n},\quad\text{and}\quad b(n,\alpha)=\frac{ \log(1/\alpha)+\mathcal{O}(\log n)}{n}.\] Since \(a(n,\alpha)\approx\log(1/\alpha)/n\) as \(\alpha\to 0\), this proves that the betting CI nearly matches the fundamental performance limit, but for two differences: an extra leading multiplicative factor of \(2\), and an additional \(\mathcal{O}(\log(n)/n)\) term in the argument of the inverse information projection. The main takeaway from these results is that (i) the optimal instance-dependent width of any CI is precisely characterized by an instance-dependent 'complexity parameter' based on the inverse of \(\mathrm{KL}_{\mathrm{inf}}^{+}\) and \(\mathrm{KL}_{\mathrm{inf}}^{-}\), and (ii) the width of the betting CI also depends on nearly the same complexity term. To the best of our knowledge, none of the existing CIs admit such upper bounds on their widths. 
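As a concrete illustration, the following sketch evaluates the lower bound above for the Bernoulli instances of Figure 1. It relies on the standard fact (a simplifying assumption on our part here; the paper treats the Bernoulli case in Section 6.1) that for \(P^{*}=\) Bernoulli(\(\mu\)), the projection \(\text{KL}_{\inf}^{+}(P^{*},m)\) reduces to the binary relative entropy \(\text{kl}(\mu,m)\) for \(m\geq\mu\), and symmetrically for \(\text{KL}_{\inf}^{-}\); all function names are our own.

```python
import math

def kl(p, q, eps=1e-12):
    """Binary relative entropy kl(p, q) = d_KL(Bern(p), Bern(q))."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_inf_inverse_plus(mu, x, tol=1e-10):
    """Smallest m >= mu with kl(mu, m) >= x, by bisection (kl(mu, .) increases there)."""
    lo, hi = mu, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl(mu, mid) >= x:
            hi = mid
        else:
            lo = mid
    return hi

def width_lower_bound(mu, n, alpha):
    """The lower bound w*(n, P, alpha), specialized to P = Bernoulli(mu)."""
    a = math.log((1 - alpha) ** (1 - alpha) * alpha ** (2 * alpha - 1)) / n
    m_plus = kl_inf_inverse_plus(mu, a)
    # inverse of KL_inf^- via the symmetry kl(mu, m) = kl(1 - mu, 1 - m)
    m_minus = 1.0 - kl_inf_inverse_plus(1.0 - mu, a)
    return max(m_plus - mu, mu - m_minus)

for mu in (0.5, 0.95):   # the two instances of Figure 1
    print(mu, width_lower_bound(mu, n=1000, alpha=0.005))
```

Running this for a range of \(n\) traces out a curve with the same shape as the optimal widths in Figure 1: roughly \(\sqrt{2\sigma^{2}\log(1/\alpha)/n}\) for moderate \(n\), and visibly asymmetric and tighter for the low-variance Bernoulli(0.95) instance.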
Having analyzed the performance of the betting CI, we then establish similar guarantees for the betting CS in Section 5. Due to their time-uniform coverage guarantees, it is not reasonable to order CSs by comparing their widths at some specific value of \(n\) (for any typical CS, one can always find another CS that is tighter than it for some \(n\), while being looser at some other \(n\)). To address this, we introduce a notion of _effective width_ that is more suitable for comparing CSs (Definition 5.2). More specifically, we define the effective width of a CS after \(n\) observations as the smallest \(w>0\) for which the expected number of observations needed for the width of the CS to go below \(w\) is at most \(n\). Using this definition, we first establish a general lower bound on the effective width of any valid level-\((1-\alpha)\) CS in Proposition 5.4 in Section 5.1. This lower bound also depends on an appropriate inverse information projection term, similar to the corresponding bound for CIs discussed above. We then obtain a nearly matching upper bound (modulo the same constant multiplicative factor, and an additive logarithmic term in the argument of the inverse information projection) on the effective width of the betting CS in Theorem 5.6. Overall, our results provide rigorous justification for the empirically observed supremacy of the betting CIs/CSs of Waudby-Smith and Ramdas (2023) over previously state-of-the-art methods, and also open up several interesting directions for future work. Outline of the paper. We present the background information about some popular existing CIs in Section 2.1, and then formally describe the betting approach to constructing CIs in Section 2.2. The next three sections contain the details of the main technical results. In particular, in Section 3 we show that the betting CI of Waudby-Smith and Ramdas (2023) is at least as good as the best existing EB-CI in terms of the first-order limiting width defined in (1). Next, in Section 4, we show that the betting CI has nearly-optimal width even in the finite-sample setting, by establishing a method-agnostic lower bound, and then showing that the width of the betting CI is close to the lower bound. Finally, in Section 5, we show that the betting CS also satisfies a similar notion of near-optimality. We discuss some extensions of our results in Section 6, and conclude the paper with some interesting questions for future work in Section 7. ## 2 Preliminaries We begin by describing some popular existing CIs in Section 2.1, and then present the betting-based approach for CI/CS construction in Section 2.2. Finally, we end with a discussion on a class of powerful betting strategies used for constructing CIs/CSs in Section 2.3. ### CIs/CSs based on concentration inequalities A standard approach for constructing CIs is by inverting high-probability, non-asymptotic concentration inequalities for the empirical mean based on i.i.d. observations. Here, we recall three of the most popular CIs constructed using exponential concentration inequalities for bounded observations. Hoeffding CI. Hoeffding (1963) derived one of the earliest concentration inequalities for the mean of bounded random variables. The main idea is to use the general Cramer-Chernoff method, with an appropriate exponential upper bound on the moment generating function of bounded random variables. 
On inverting the resulting concentration inequality, we get the following closed-form CI: \[C_{n}^{(H)}=\left[\widehat{\mu}_{n}\pm\frac{w_{n}^{(H)}}{2}\right]\coloneqq \left[\widehat{\mu}_{n}-\frac{w_{n}^{(H)}}{2},\,\widehat{\mu}_{n}+\frac{w_{n} ^{(H)}}{2}\right],\quad\text{with }\widehat{\mu}_{n}=\frac{1}{n}\sum_{i=1}^{n}X_{i}, \quad\text{and }w_{n}^{(H)}=2\sqrt{\frac{\log(2/\alpha)}{2n}}.\] Note that the width of the Hoeffding CI converges to zero at an order-optimal \(1/\sqrt{n}\) rate. Despite this order-optimality, it can be quite loose for random variables with low variance; in fact, the Hoeffding CI can be interpreted as assuming a worst-case variance of \(1/4\). Bernstein CI. The aforementioned weakness can be addressed by constructing a CI based on a variance-aware concentration inequality, such as Bernstein's exponential concentration inequality (Boucheron et al., 2013, Chapter 2). In particular, inverting this concentration inequality results in the following CI: \[C_{n}^{(B)}=\left[\widehat{\mu}_{n}\pm\frac{w_{n}^{(B)}}{2}\right],\quad \text{where }w_{n}^{(B)}=2\sigma\sqrt{\frac{2\log(2/\alpha)}{n}}+\frac{4\log(2/\alpha)} {3n}.\] Since the maximum value of \(\sigma\) of a random variable supported on \([0,1]\) is \(1/2\), the dominant \(\mathcal{O}(1/\sqrt{n})\) term reduces to the Hoeffding CI width for this value of \(\sigma\). However, for problem instances with \(\sigma\ll 1/2\), the above CI is a significant improvement over Hoeffding's. One drawback of the Bernstein CI is that constructing it requires the additional information of the standard deviation \(\sigma\), or at least a good upper bound on it. In our setup, we do not assume that such additional side information is available. Maurer & Pontil's Empirical Bernstein (MP-EB) CI. Maurer and Pontil (2009) addressed the above issue by deriving a fully data-driven empirical Bernstein CI: \[C_{n}^{(\text{MP-EB})} =\left[\widehat{\mu}_{n}\pm\frac{w_{n}^{(\text{MP-EB})}}{2} \right],\] \[\text{where}\quad w_{n}^{(\text{MP-EB})} =2\widehat{\sigma}_{n}\sqrt{\frac{2\log(4/\alpha)}{n}}+\frac{14 \log(4/\alpha)}{3(n-1)},\text{ and }\widehat{\sigma}_{n}^{2}=\frac{1}{n-1}\sum_{i=1}^{n}(X_{i}- \widehat{\mu}_{n})^{2}.\] The above CI has many qualities of a good CI: it is variance-adaptive, has order-optimal dependence on \(n\), and it is fully data-driven, not requiring any prior information about the variance. However, it can still be significantly improved in theory and practice. Predictable Plug-in EB (PrPl-EB) CI. Recently, Waudby-Smith and Ramdas (2023) obtained an improved empirical Bernstein CI using a martingale technique that avoids a union bound, thus achieving a better dependence on \(\alpha\). Define \(\widehat{\mu}_{0}:=0\), \(V_{0}:=1/4\), and \[\psi_{E}(\lambda)\coloneqq\frac{-\log(1-\lambda)-\lambda}{4},\quad\widehat{ \mu}_{t}=\frac{1}{t}\sum_{i=1}^{t}X_{i},\quad tV_{t}=\frac{1}{4}+\sum_{i=1}^{t} (X_{i}-\widehat{\mu}_{i-1})^{2},\quad\lambda_{t,n}=\sqrt{\frac{2\log(2/\alpha )}{nV_{t-1}}}. 
\tag{2}\] Then, the level-\((1-\alpha)\) PrPl-EB CI after \(n\) observations is defined as \[C_{n}^{(\text{PrPl-EB})}=\left[\widetilde{\mu}_{n}\pm\frac{w_{n}^{(\text{PrPl-EB})}}{2}\right], \tag{3}\] \[\text{where}\quad\widetilde{\mu}_{n}\coloneqq\frac{\sum_{t=1}^{n} \lambda_{t,n}X_{t}}{\sum_{t=1}^{n}\lambda_{t,n}},\quad\text{and}\quad w_{n}^{( \text{PrPl-EB})}=2\frac{\log(2/\alpha)+4\sum_{t=1}^{n}\psi_{E}(\lambda_{t,n})( X_{t}-\widehat{\mu}_{t-1})^{2}}{\sum_{t=1}^{n}\lambda_{t,n}}.\] The first-order limiting width, introduced in (1), for the three classical CIs introduced above (Hoeffding, Bernstein, MP-EB) can be easily calculated: \[\gamma_{1}^{(H)}=2\sqrt{\frac{\log(2/\alpha)}{2}},\quad\gamma_{1}^{(B)}=2\sigma \sqrt{2\log(2/\alpha)},\quad\text{and}\quad\gamma_{1}^{(\text{MP-EB})}\stackrel{{ a.s.}}{{=}}2\sigma\sqrt{2\log(4/\alpha)}.\] These values succinctly characterize the relative strengths of these three CIs: * Hoeffding CI achieves the order-optimal \(1/\sqrt{n}\) dependence, but is not variance adaptive. * Bernstein CI obtains a strictly smaller limiting width than Hoeffding CI for all values of \(\sigma\) (since \(\sigma\) can be at most \(1/2\)), but requires the additional information about the variance \(\sigma^{2}\). * MP-EB CI relaxes the requirement of knowing the variance, at the price of a larger \(\alpha\)-dependent term, which results from the EB-CI spending some of the \(\alpha\) budget on estimating the variance. Waudby-Smith and Ramdas (2023) showed that the PrPl-EB CI has a limiting width equal to \(\gamma_{1}^{(B)}\), thus closing the gap that exists between the MP-EB and Bernstein CIs. This improvement is also witnessed in practice, with PrPl-EB being significantly tighter than MP-EB in simulations across a wide range of examples. In Theorem 3.2 of this paper, we prove that the limiting width of the betting CI is no larger than \(\gamma_{1}^{(B)}\), suggesting the possibility that the betting CI might admit stronger performance guarantees than existing CIs. This motivates our subsequent investigation of the width of the betting CI (and CS) in the non-asymptotic regime, in which we characterize the fundamental limits of performance of any CI/CS, and then show that the betting CI/CS nearly matches this limit. We end this section by noting that there also exist time-uniform analogs of the above confidence intervals (i.e., CSs), and while we refer the reader to Howard et al. (2021) and Waudby-Smith and Ramdas (2023) for the details, we will analyze the CS widths as well later in this paper. ### The betting approach for CI construction Waudby-Smith and Ramdas (2023) proposed a general method for constructing CIs/CSs for the mean of bounded random variables, by inverting the following continuum of hypothesis testing problems: \[H_{0,m}:\mu=m,\quad\text{versus}\quad H_{1,m}:\mu\neq m,\quad\text{for all }m\in[0,1].\] Then the CI (resp. CS) after \(n\) observations is defined as those values of \(m\) for which \(H_{0,m}\) has not yet been rejected by the corresponding hypothesis test (resp. sequential hypothesis test). For simplicity, we discuss the construction of betting CIs here, and present the details of the betting CS at the beginning of Section 5. To design each test required for the betting CI, Waudby-Smith and Ramdas (2023) build upon the principle of _testing-by-betting_, recently popularized by Shafer (2021). 
This principle states that the evidence against \(H_{0,m}\) can be precisely quantified by the wealth of a fictitious gambler who sequentially bets against the null in a fair (under the null) repeated game of chance, starting with an initial wealth of \(1\) and never betting more money than they have. Denoting the wealth after \(n\) rounds as \(W_{n}(m)\) (with \(W_{0}(m)=1\)), the fairness of the game under the null implies that \(\mathbb{E}[W_{n}(\mu)]\leq 1\). Markov's inequality then implies that one can reject \(H_{0,m}\) at level \(\alpha\) if \(W_{n}(m)\) exceeds \(1/\alpha\). Inverting these tests defines the CI at time \(n\) as \[C_{n}=\{m\in[0,1]:W_{n}(m)<1/\alpha\},\quad\text{with}\quad W_{n}(m)\coloneqq \prod_{i=1}^{n}f_{i}(X_{i},\lambda_{i}(m),m), \tag{4}\] where \(f_{i}\) is a predictable (i.e., \(\mathcal{F}_{i-1}\)-measurable), nonnegative 'payoff' function satisfying \(\mathbb{E}[f_{i}(X,\lambda,\mu)\mid\mathcal{F}_{i-1}]\leq 1\) for all feasible values of \(\lambda\), and \(\lambda_{i}(m)\) is the predictable 'bet', denoting the fraction of the wealth, \(W_{i-1}(m)\), that is put at stake by the bettor in round \(i\). This approach thus transforms the task of constructing a CI/CS into that of designing wealth processes (or equivalently, payoff functions and betting strategies) that grow quickly for \(H_{0,m}\) with \(m\neq\mu\). **Remark 2.1**.: Note that the Hoeffding and Bernstein CIs, as well as PrPl-EB, can be written as special cases of the general betting framework introduced above. For example, the Hoeffding CI can be defined as the intersection of two level-\((1-\alpha/2)\) betting CIs, \(C_{n}^{0}\) and \(C_{n}^{1}\), that use the same bets (i.e., \(\lambda_{i}(m)\) for all \(i,m\)), but different payoff functions \(f_{i}^{0}\) and \(f_{i}^{1}\) for \(i\in[n]\), defined as \[f_{i}^{j}(x,\lambda,m)=\exp((-1)^{j}\lambda(x-m)-\lambda^{2}/8),\;\text{for} \;j\in\{0,1\},\quad\text{and}\quad\lambda_{i}(m)=\sqrt{\frac{8\log(2/\alpha)} {n}},\] for all \(i\in[n]\) and \(m\in[0,1]\). Similarly, we can recover the PrPl-EB CI using \[f_{i}^{j}(x,\lambda,m)=\exp\left((-1)^{j}\lambda(x-m)-\psi_{E}(\lambda)(x- \widehat{\mu}_{i-1})^{2}\right),\quad\text{and}\quad\lambda_{i}(m)=\lambda_{ i,n}\coloneqq\sqrt{\frac{2\log(2/\alpha)}{nV_{i-1}}},\] for all \(m\in[0,1]\). The bets \(\{\lambda_{i}(m):i\in[n],\;m\in[0,1]\}\) in both examples are optimized for a fixed value of \(n\), and hence we refer to them as CIs. We briefly discuss the construction of betting CSs in Definition 5.1, at the beginning of Section 5. While the PrPl-EB CI has the benefit of a closed-form expression, it does not utilize the full power of the general framework, as it can be interpreted as playing a sub-fair betting game at the (unknown) true mean \(\mu\). Mathematically, this means that the wealth process \(\{W_{n}(\mu):n\geq 1\}\) is a nonnegative super-martingale. Waudby-Smith and Ramdas (2023) also developed an alternative, and more powerful, approach that relies on nonnegative martingales, which results in a less conservative CI in practice. However, the drawback of the resulting CI, which we refer to as the betting CI, is that the intervals do not have a closed-form expression, and have to be obtained numerically. We recall this construction next. **Definition 2.2** (Betting CI).: Given \(X_{1},X_{2},\ldots,X_{n}\) drawn i.i.d. 
from a distribution \(P^{*}\) supported on \([0,1]\) with mean \(\mu\), consider an instance of the general CI of (4), with \[f_{i}(x,\lambda,m)=1+\lambda(x-m),\quad\text{and}\quad\lambda_{t}(m)\in\left[ \frac{-1}{1-m},\frac{1}{m}\right],\quad\text{for all}\;m\in[0,1].\] Then, we define the betting CI after \(n\) observations as \(C_{n}^{(bet)}=\{m\in[0,1]:W_{n}(m)<1/\alpha\}\). Recall that for all \(m\in[0,1]\) and \(t\in[n]\), the bet \(\lambda_{t}(m)\) is \(\mathcal{F}_{t-1}\) measurable, but can use knowledge of the horizon \(n\). The key component influencing the practical and theoretical performance of the betting CI is the strategy used for choosing the bets or the betting fractions, \(\{\lambda_{t}(m):t\geq 1,\;m\in[0,1]\}\). Waudby-Smith and Ramdas (2023) proposed and empirically compared various betting strategies that aim to optimize, either exactly or approximately, the growth rate of the wealth process for \(m\neq\mu\). We formally describe this optimality criterion, and recall a powerful betting strategy next. ### Log-optimal betting and regret The width of the CI (or, as we return to later, CS) constructed by the betting method depends strongly on the choice of the betting strategy. Good betting strategies ensure that the wealth for \(m\neq\mu\) grows rapidly, which in turn leads to CIs or CSs whose width shrinks fast. We begin by introducing the log-optimal betting strategy (Kelly, 1956) that selects \(\lambda_{t}(m)=\lambda^{*}(m)\) for all \(t\geq 1\), and \(m\in[0,1]\), where \(\lambda^{*}(m)\) is defined as \[\lambda^{*}(m)\in\operatorname*{arg\,max}_{\lambda\in\left[-\frac{1}{1-m}, \frac{1}{m}\right]}\mathbb{E}_{X\sim P^{*}}\left[\log\left(1+\lambda(X-m) \right)\right].\] The above constraints on \(\lambda\) ensure that the quantity inside the logarithm is nonnegative. Clearly, this strategy cannot be employed in practice, as the distribution \(P^{*}\) is unknown. Instead, practical betting CIs/CSs proceed by choosing a sequence of predictable bets, \(\{\lambda_{t}(m):t\geq 1,m\in[0,1]\}\), which incrementally approximate the log-optimal bets in a data-driven manner. The quality of a betting strategy can be measured by its regret; that is, the cumulative suboptimality with respect to the optimal betting strategy: \[\mathcal{R}_{n}(m)=\sum_{t=1}^{n}\log\left(1+\lambda^{*}(m)(X_{t}-m)\right)- \sum_{t=1}^{n}\log\left(1+\lambda_{t}(m)(X_{t}-m)\right);\quad m\in[0,1].\] While the term \(\mathcal{R}_{n}(m)\) is a random quantity (as a function of \(X_{1}^{n}\)), there exist strategies (called _no-regret_ strategies) that can guarantee uniformly decaying average regret over all possible realizations of \(X_{1}^{n}\). Formally, we say a betting strategy is _no-regret_ if for every \(n\geq 1\) and \(m\in[0,1]\), there exists a deterministic sequence, \(\{r_{n}(m):n\geq 1\}\) with \(r_{n}(m)/n\to 0\) for all \(m\in[0,1]\), satisfying \[\mathcal{R}_{n}(m)\leq r_{n}(m)\quad\text{for all }n\in\mathbb{N}.\] Many strategies, such as the mixture method or the online Newton step strategy (Hazan et al., 2007; Cutkosky and Orabona, 2018), admit a logarithmic regret bound for all \(n\) and \(m\). That is, for these strategies, we have \(r_{n}(m)=c_{m}\log(n)\) for some constant \(c_{m}\) depending on \(m\). For our nonasymptotic results in Section 4 and Section 5, we will consider an instance of the general betting CS of Waudby-Smith and Ramdas (2023) in which the bets are chosen via the following version of the mixture method (Hazan, 2016, § 4.3). 
**Definition 2.3** (Mixture method).: This betting strategy sets \(\lambda_{1}(m)=0\), for all \(m\in[0,1]\), and sets \(\lambda_{t}(m)\) to be the weighted average of all possible \(\lambda\) values, with the weight assigned to a candidate value \(\lambda\) being proportional to the hypothetical wealth (denoted by \(W_{t-1}^{\lambda}(m)\)) if the bettor chose \(\lambda_{i}(m)=\lambda\) for all \(1\leq i\leq t-1\). More formally, we have \[\lambda_{t}(m)=\frac{\int_{-1/(1-m)}^{1/m}\lambda\,W_{t-1}^{\lambda}(m)d\lambda }{\int_{-1/(1-m)}^{1/m}W_{t-1}^{\lambda}(m)d\lambda},\quad\text{with}\quad W_ {t-1}^{\lambda}(m)\coloneqq\prod_{i=1}^{t-1}\left(1+\lambda(X_{i}-m)\right).\] This betting scheme is known to have a regret bound \(\mathcal{R}_{n}(m)\leq\log(n)+2\) for all \(m\in[0,1]\), and this can be further upper bounded by \(c\log(n)\), for a constant \(c\approx 1.8\), for all values of \(n\geq 13\). Using (discrete or continuous) mixtures to design appropriate (super-)martingales is a standard technique in sequential inference (Robbins, 1970; Howard et al., 2021; Waudby-Smith and Ramdas, 2023). Indeed, in the present context of betting CSs, Waudby-Smith and Ramdas (2023) proposed a parallelizable discrete mixture strategy, while Orabona and Jun (2024+) developed a method of employing continuous mixtures for CS construction by leveraging their regret bounds. More specifically, Orabona and Jun (2024+) proposed the following CS using the regret (denoted by \(r_{n}\)) of Cover's universal portfolio (Cover, 1991; Cover and Ordentlich, 1996) scheme: \[C_{n}=\left\{m\in[0,1]:\log\widetilde{W}_{n}(m)<\log(1/\alpha)+r _{n}\right\},\] \[\text{where}\quad\log\widetilde{W}_{n}(m)\coloneqq\sup_{\lambda \in\left[\frac{-1}{1-m},\frac{1}{m}\right]}\sum_{t=1}^{n}\log(1+\lambda(X_{t}- m)).\] Our nonasymptotic results about betting CIs and CSs stated in Theorem 4.6 and Theorem 5.6, respectively, are also applicable to the above CS with some minor modifications. ## 3 Limiting width of the betting CI We now present the first main result of this paper, which says that for the same choice of bets, the limiting width of the betting CI is upper bounded by that of the PrPl-EB CI. The limiting width of the PrPl-EB CI was shown by Waudby-Smith and Ramdas (2023) to be equal to that of the Bernstein CI (we recall this in Fact 3.1). Thus, our result shows that the betting CI is also at least as good as the Bernstein CI in terms of the order induced by the first-order limiting width. **Fact 3.1**.: _The PrPl-EB CI of (3) is a valid level-\((1-\alpha)\) CI for the mean \(\mu\) of \(P^{*}\), and it has the following first-order limiting width:_ \[\gamma_{1}^{(\text{PrPl-EB})}\stackrel{{ a.s.}}{{=}}2\sigma\sqrt{2 \log(2/\alpha)}.\] Waudby-Smith and Ramdas (2023) proved that for certain betting strategies, the width of the betting CI converges to \(0\) at the order-optimal \(1/\sqrt{n}\) rate. However, they did not establish whether or not the width of the betting CI is variance-adaptive, and how it compares with their PrPl-EB CI. Nevertheless, through extensive empirical evaluation, they showed that the betting CI is variance-adaptive, and is often much tighter than all the competitors in practice. Our next result takes the first step to address this theory-practice gap, by showing that the limiting width of the betting CI is upper bounded by that of the PrPl-EB CI (and hence, also the usual Bernstein CI). 
This establishes that the width of the betting CI is variance-adaptive, and furthermore, it is also asymptotically tighter than (or at least, no worse than) the improved PrPl-EB CI. This result also motivates our further exploration of the non-asymptotic behavior of betting CIs/CSs in the subsequent sections of this paper. The betting CI we analyze in this section is constructed with the same bets used to define the PrPl-EB CI. More specifically, we have the following, with \((\lambda_{t,n})\) as defined in (2): \[C_{n}^{\text{(bet)}}=\{m\in[0,1]:W_{n}(m)<1/\alpha\},\] \[\text{where}\quad W_{n}(m)=\frac{1}{2}\left(W_{n}^{+}(m)+W_{n}^{- }(m)\right),\quad\text{and}\quad W_{n}^{\pm}(m)=\prod_{t=1}^{n}\left(1\pm \lambda_{t,n}(X_{t}-m)\right). \tag{5}\] We now state the main result of this section. **Theorem 3.2**.: _Suppose the width of the betting CI defined in (5) after \(n\) observations is denoted by \(|C_{n}^{\text{(bet)}}|=w_{n}^{\text{(bet)}}\). Then, we have_ \[\limsup_{n\to\infty}\sqrt{n}\times w_{n}^{\text{(bet)}}\ \stackrel{{ a.s.}}{{\leq}}\ \lim_{n\to\infty}\sqrt{n}\times w_{n}^{\text{(PrPl-EB)}}\ \stackrel{{ a.s.}}{{=}}\ 2\sigma\sqrt{2\log(2/\alpha)}.\] _In other words, with the same choice of bets as used by the PrPl-EB CI defined in (3), the limiting width achieved by the betting CI (Definition 2.2) is always no worse than that of the PrPl-EB CI._ Proof outline of Theorem 3.2.: The first step in proving this result is to show that \(C_{n}^{\text{(bet)}}\), the betting CI, is always contained inside a larger CI denoted by \(C_{n}^{\dagger}\), and furthermore, this larger CI has a finite limiting width \(c_{1}<\infty\). Using the inclusion \(C_{n}^{\text{(bet)}}\subset C_{n}^{\dagger}\), we can rewrite the betting CI as \[C_{n}^{\text{(bet)}}=\{m\in C_{n}^{\dagger}:W_{n}(m)<1/\alpha\}.\] This simple modification, replacing "\(m\in[0,1]\)" with "\(m\in C_{n}^{\dagger}\)", plays an important role in our analysis. This is because for large values of \(n\), we know that \(|C_{n}^{\dagger}|\leq 2c_{1}/\sqrt{n}\), which allows us to restrict our attention to values of \(m\) in a narrow band, instead of the whole unit interval. We exploit this to obtain the required approximation and concentration results, to bound the width of the betting CI by that of the PrPl-EB CI. In particular, we show that \(w_{n}^{\text{(bet)}}\leq w_{n}^{\text{(PrPl-EB)}}+o(1/\sqrt{n})\), which leads to the required conclusion about the limiting widths. The details of the argument are presented in Appendix B. Theorem 3.2 characterizes the performance of a betting CI in the asymptotic regime. In the next section, we analyze its behavior in the non-asymptotic regime. ## 4 Non-asymptotic analysis of betting CI To benchmark the performance of the betting CI in the non-asymptotic regime, we first establish a fundamental, method-agnostic, lower bound on the width achievable by any CI, in terms of certain inverse information projections (Definition 4.1), in Proposition 4.3. This result gives us a precise, problem-dependent measure of the complexity (or hardness) of estimating the mean \(\mu\) of a distribution \(P^{*}\). Then, in Theorem 4.6, we show that the width of the betting CI nearly matches that lower bound, modulo a leading multiplicative constant and a logarithmic term in the argument of the information projection. ### Lower bound on any CI width Let \(\mathcal{C}\) denote a method of constructing CIs for the mean of a univariate distribution; that is, given observations \(X_{1},\ldots,X_{n}\) drawn i.i.d. 
from a distribution \(P^{*}\in\mathcal{P}_{0}\subset\mathcal{P}(\mathbb{R})\), and a confidence level \(\alpha\in(0,1)\), the method \(\mathcal{C}\) gives us a CI, denoted by \(C_{n}=\mathcal{C}(X^{n},\alpha)\), which satisfies \(\mathbb{P}\left(\mu\in C_{n}\right)\geq 1-\alpha\). Here \(\mathcal{P}_{0}\) denotes some pre-specified class of probability distributions, such as all distributions supported on \([0,1]\). Furthermore, assume that the method \(\mathcal{C}\) returns CIs whose width can be upper bounded by a deterministic quantity (as a function of \(n\), \(\alpha\), and the distribution \(P^{*}\)). Formally, assume there exists a function \(w(n,P^{*},\alpha)\) such that \[|\mathcal{C}(X^{n},\alpha)|=|C_{n}|\coloneqq\inf\{b-a:C_{n}\subset[a,b]\} \overset{a.s.}{\leq}w(n,P^{*},\alpha).\] For some CIs, such as the Hoeffding and Bernstein CIs, the actual widths are deterministic: \[w_{H}(n,P^{*},\alpha)=2\sqrt{\frac{\log(2/\alpha)}{2n}},\quad\text{and}\quad w _{B}(n,P^{*},\alpha)=2\sigma\sqrt{\frac{2\log(2/\alpha)}{n}}+\frac{4\log(2/ \alpha)}{3n},\] for distributions \(P^{*}\in\mathcal{P}_{0}=\mathcal{P}\left([0,1]\right)\). Our main result of this section characterizes the achievable limit (i.e., minimum width) of CIs in terms of the minimum KL divergence (or relative entropy) between the distribution \(P^{*}\) and a class of distributions whose means are either larger or smaller than some level \(m\). We refer to these terms as information projections, and recall their formal definition below. **Definition 4.1** (Information projection).: Let \(P^{*}\) denote any distribution with mean \(\mu\), lying in a class of distributions, \(\mathcal{P}_{0}\), supported on \(\mathbb{R}\). For any \(m\in\mathbb{R}\), let \(\mathcal{P}_{0,m}^{+}\) (resp. \(\mathcal{P}_{0,m}^{-}\)) denote a subset of \(\mathcal{P}_{0}\), containing distributions whose means are at least (resp. at most) \(m\in\mathbb{R}\); that is, \(\mathcal{P}_{0,m}^{+}=\{Q\in\mathcal{P}_{0}:\mu_{Q}\geq m\}\), and \(\mathcal{P}_{0,m}^{-}=\{Q\in\mathcal{P}_{0}:\mu_{Q}\leq m\}\). Using these, we define the following two information projection terms: \[\mathrm{KL}_{\inf}^{+}(P^{*},m,\mathcal{P}_{0})\coloneqq\inf\{d_{\mathrm{KL}} (P^{*},Q):Q\in\mathcal{P}_{0,m}^{+}\},\quad\text{and}\quad\mathrm{KL}_{\inf}^ {-}(P^{*},m,\mathcal{P}_{0})\coloneqq\inf\{d_{\mathrm{KL}}(P^{*},Q):Q\in \mathcal{P}_{0,m}^{-}\}.\] For any \(x\geq 0\), we define the inverse information projections as \[\mathrm{KL}_{\inf}^{+}(P^{*},\cdot,\mathcal{P}_{0})^{-1}(x) \coloneqq\inf\left\{m\in\mathbb{R}:\mathrm{KL}_{\inf}^{+}(P^{*},m, \mathcal{P}_{0})\geq x\right\}, \tag{6}\] \[\text{and}\quad\mathrm{KL}_{\inf}^{-}(P^{*},\cdot,\mathcal{P}_{0 })^{-1}(x) \coloneqq\sup\left\{m\in\mathbb{R}:\mathrm{KL}_{\inf}^{-}(P^{*},m, \mathcal{P}_{0})\geq x\right\}.\] **Remark 4.2**.: Unlike the rest of this paper, the results in this section (as well as in Section 5.1) are valid for arbitrary classes of distributions on the real line, and not necessarily restricted to distributions supported on \([0,1]\). When dealing with arbitrary distribution classes, we will include \(\mathcal{P}_{0}\) in the expression \(\mathrm{KL}_{\inf}^{+}(P,m,\mathcal{P}_{0})\). However, for distributions supported on \([0,1]\), we will drop the \(\mathcal{P}_{0}\) dependence to simplify the notation. We now show a lower bound on the width that can be achieved by any method (\(\mathcal{C}\)) of constructing level-\((1-\alpha)\) CIs for the mean of real-valued random variables. 
**Proposition 4.3**.: _Suppose \(\mathcal{C}\) denotes any method for constructing confidence intervals whose width satisfies \(|\mathcal{C}(X^{n},\alpha)|\leq w(n,P,\alpha)\) almost surely, for all \(P\) belonging to some class of distributions \(\mathcal{P}_{0}\subset\mathcal{P}(\mathbb{R})\). Introduce the term \(a(n,\alpha)=\log\left((1-\alpha)^{1-\alpha}\alpha^{2\alpha-1}\right)/n\), and define_ \[w^{*}(n,P,\alpha)=\max\left\{\mathrm{KL}_{\inf}^{+}(P,\cdot,\mathcal{P}_{0})^{- 1}\left(a(n,\alpha)\right)-\mu,\;\mu-\mathrm{KL}_{\inf}^{-}(P,\cdot,\mathcal{P} _{0})^{-1}\left(a(n,\alpha)\right)\right\}.\] _Recall that \(\mathrm{KL}_{\inf}^{+}(P,\cdot,\mathcal{P}_{0})^{-1}(x)\) and \(\mathrm{KL}_{\inf}^{-}(P,\cdot,\mathcal{P}_{0})^{-1}\) were introduced in Definition 4.1. Then, the width of the CI must satisfy \(w(n,P,\alpha)\geq w^{*}(n,P,\alpha)\)._ **Remark 4.4**.: The key conclusion of this result is that the best achievable width (modulo a leading multiplicative constant) of any CI is characterized by the distribution-dependent information projection terms. To the best of our knowledge, this is the first result providing an explicit characterization of the smallest achievable width of a CI in terms of a distribution-dependent complexity term. As we see later in Section 6.1, for some common distributions (namely, Bernoulli and Gaussian), the evolution of the above lower bound with the sample size \(n\) exactly matches that of the widths of the optimal CIs (again, modulo a constant multiplicative term). **Remark 4.5**.: Proposition 4.3, as stated, is not directly applicable to CIs whose width after \(n\) observations is a random quantity, such as the MP-EB CI of Maurer and Pontil (2009), or the PrPl-EB and betting CIs of Waudby-Smith and Ramdas (2023). However, this result can be used to obtain a lower bound on the width of the best deterministic envelope containing these CIs. Such an envelope can be constructed by using a part of the available \(\alpha\) on replacing the random terms in the CIs with their population counterparts, plus a deviation term. We obtain such a deterministic envelope for the betting CI in Section 4.2. Proof outline of Proposition 4.3.: The proof of this statement relies on the duality between confidence intervals (CIs) and (fixed sample-size) hypothesis tests. In other words, if we can construct tight level-\((1-\alpha)\) CIs around the mean of a distribution \(P^{*}\), it means that we can also accurately test for the mean, and the detection boundary of such a test is determined by the width of the CI. In particular, we proceed in the following steps: * Given \(X_{1},\ldots,X_{n}\stackrel{{\text{i.i.d.}}}{{\sim}}P^{*}\), we first consider a hypothesis testing problem with \(H_{0}:P^{*}=P\) versus \(H_{1}:P^{*}=Q\), where we assume that \(\mu_{Q}\) (the mean of \(Q\)) is at least \(\mu_{P}+w(n,P,\alpha)\) (here, we use \(\mu_{P}\) and \(\mu_{Q}\) to denote the means of \(P\) and \(Q\) respectively). For this problem, we define a test \(\Psi\) that rejects the null if \(\mu_{Q}\in\mathcal{C}(X^{n},\alpha)\), and show that this test controls the type-I and type-II errors at level \(\alpha\) for this hypothesis testing problem. * Next, using the properties of the test \(\Psi\), the chain rule of KL divergence, and the data processing inequality, we show that the KL divergence between \(P\) and \(Q\) must satisfy \(d_{\text{KL}}(P,Q)\geq a(n,\alpha)\). 
Since the right side in this inequality is independent of \(Q\), we conclude that \(\text{KL}^{+}_{\inf}\big{(}P,\mu_{P}+w(n,P,\alpha)\big{)}\geq a(n,\alpha) \coloneqq\log\left((1-\alpha)^{1-\alpha}\alpha^{2\alpha-1}\right)/n\), by taking an infimum over all possible distributions \(Q\) in the set \(\mathcal{P}^{+}_{0,\mu_{P}+w(n,P,\alpha)}\). Recall that \(\mathcal{P}^{+}_{0,\mu_{P}+w(n,P,\alpha)}\) denotes the class of all distributions in \(\mathcal{P}_{0}\), with mean at least \(\mu_{P}+w(n,P,\alpha)\). * Finally, we use the definition of \(w^{*}\equiv w^{*}(n,P,\alpha)\), and the continuity of \(\text{KL}^{+}_{\inf}(P,\cdot)\), to observe that \(\text{KL}^{+}_{\inf}\big{(}P,\mu_{P}+w(n,P,\alpha)\big{)}\geq\text{KL}^{+}_{ \inf}\big{(}P,\mu_{P}+w^{*}(n,P,\alpha)\big{)}\). This implies one part of the required result due to the monotonicity of \(\text{KL}^{+}_{\inf}(P,\cdot)\). Repeating the same argument, but with \(Q\) such that \(\mu_{Q}\) is smaller than \(\mu_{P}-w(n,P,\alpha)\), gives us the other term depending on \(\text{KL}^{-}_{\inf}\). The details of this proof are in Appendix C.1. ### Width achieved by betting CI Having established an unimprovable limit on the width of any level-\((1-\alpha)\) CI in Proposition 4.3, we now analyze how close the betting CI comes to achieving this optimal performance. Our main result shows that the width of the betting CI nearly matches the lower bound derived in Proposition 4.3, but with two minor differences: there is an extra leading multiplicative factor of \(2\) (see Remark 4.4), and there is an extra \(\mathcal{O}(\log n/n)\) term in the argument of the inverse information projection terms. Despite these two sources of suboptimality, the main takeaway of this section is that the width of the betting CI has the 'right' functional form, depending on the same inverse-information-projection-based complexity measure that arises in the method-agnostic lower bound. To the best of our knowledge, none of the existing CIs are known to admit similar upper bounds on their widths. Before proceeding to the statement of the result, we need to introduce some additional notation. In particular, recall from Honda and Takemura (2010) that for distributions supported on \([0,1]\), the two information projection terms (Definition 4.1) have the following dual representation for \(a\in\{+,-\}\), that makes their connection to the betting CI/CS explicit: \[\text{KL}^{a}_{\inf}(P^{*},m)=\sup_{\lambda\in S_{a}}\mathbb{E}[\log(1+ \lambda(X-m))],\quad\text{where}\quad S_{+}=\left[\frac{-1}{1-m},0\right], \quad\text{and}\quad S_{-}=\left[0,\frac{1}{m}\right].\] For any \(m\in[0,1]\), we also introduce their corresponding optimal betting fractions \[\lambda_{a}^{*}(m)\in\operatorname*{arg\,max}_{\lambda\in S_{a}}\;\mathbb{E}_ {X\sim P^{*}}\left[\log\left(1+\lambda(X-m)\right)\right],\quad\text{for}\;a \in\{+,-\}. \tag{7}\] We can now state the result characterizing the width of the betting CI. **Theorem 4.6**.: _Consider a betting CI constructed using the mixture betting strategy of Definition 2.3. Then, the width of such a level-\((1-\alpha/3)\) CI, denoted by \(w^{(\mathrm{bet})}(n,P^{*},\alpha/3)\), satisfies:_ \[w^{(\mathrm{bet})}(n,P^{*},\alpha/3)\leq 2\max\left\{\mathrm{KL}_{\inf}^{+}( \widehat{P}_{n},\cdot)^{-1}\left(\frac{\log(3n^{2}/\alpha)}{n}\right)-\mu,\;\mu-\mathrm{ KL}_{\inf}^{-}(\widehat{P}_{n},\cdot)^{-1}\left(\frac{\log(3n^{2}/\alpha)}{n}\right) \right\}, \tag{8}\] _where \(\widehat{P}_{n}\) denotes the empirical distribution of \(\{X_{1},\ldots,X_{n}\}\).
Since the above upper bound on \(w^{(\mathrm{bet})}(n,P^{*},\alpha/3)\) is random, we next derive a deterministic bound on it that holds with probability at least \(1-\alpha\), for \(n\) large enough (precise conditions in Appendix C.2.4):_ \[w^{(\mathrm{bet})}(n,P^{*},\alpha/3)\leq\max\left\{\mathrm{KL}_{ \inf}^{+}(P^{*},\cdot)^{-1}\left(b(n,\alpha)\right)-\mu,\;\mu-\mathrm{KL}_{ \inf}^{-}(P^{*},\cdot)^{-1}\left(b(n,\alpha)\right)\right\}+\frac{1}{n^{2}},\] \[\text{where}\quad b(n,\alpha)=\frac{\log(3n^{2}/\alpha)}{n}+ \frac{9\log(3n^{2}/\alpha)}{n\sigma^{2}}=\mathcal{O}\left(\frac{\log(n/\alpha )}{n}\right). \tag{9}\] **Remark 4.7**.: Since the regret guarantee of the mixture method of Definition 2.3 is valid for all \(n\), the first part of Theorem 4.6, that is, the bound in (8), is simultaneously true for all values of \(n\). Thus, (8) is in fact a random upper bound on the width of the betting CS constructed using the mixture method. Interestingly, the optimal complexity term characterizing the (random) width of the CS again turns out to depend on information projections, now computed for the empirical probability distribution, \(\widehat{P}_{n}\), instead of the true distribution \(P^{*}\). The second part of Theorem 4.6, however, is valid only for a fixed value of \(n\). **Remark 4.8**.: As mentioned in Remark 4.5, since the bound on the width of the betting CI obtained in (8) is a random quantity, it is not subject to the fundamental limit obtained in Proposition 4.3. To address this, we use one-third of the available \(\alpha\) on constructing the CI, and the remaining two-thirds on obtaining a deterministic upper bound on the random width. The size of the resulting level-\((1-\alpha)\) deterministic CI is then subject to the result of Proposition 4.3, and it matches the dependence of the lower bound on the information projection term, with two differences. It has an extra leading multiplicative factor of \(2\), and it has an additional \(\mathcal{O}(\log n/n)\) additive term in the argument of \(\mathrm{KL}_{\inf}^{+}(P^{*},\cdot)^{-1}\). This term arises due to the regret incurred by the betting strategy, as well as a union bound over a uniform grid consisting of \(m=\mathcal{O}(n^{2})\) points, that we used to obtain the deterministic upper bound on the inverse empirical information projection. **Remark 4.9**.: It is easy to verify that the term \(a(n,\alpha)\) used in the statement of the lower bound (Proposition 4.3) is approximately equal to \(\log(1/\alpha)/n\) in the limit of \(\alpha\to 0\). Thus, the main conclusion of Theorem 4.6 is that we can construct a deterministic envelope containing the betting CI, whose width has very nearly the same functional dependence on \(n\), \(P\), and \(\alpha\), as the method-independent lower bound. To the best of our knowledge, none of the other commonly used CIs (such as Hoeffding, Bernstein, MP-EB, or PrPl-EB) are known to exhibit this near-optimal behavior. Proof outline of Theorem 4.6.: We begin by appealing to the logarithmic regret of the betting strategy, to show that for any \(m\neq\mu\), the wealth after \(n\) steps, \(W_{n}(m)\), grows at an exponential rate (with \(n\)). Furthermore, the growth rate of the wealth is approximately equal to the maximum of two empirical information projection terms. Inverting this lower bound on the wealth gives us the bound on the width stated in (8).
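To make this inversion step concrete, here is a minimal numerical sketch (Python with SciPy; the Beta-distributed sample, the grid resolution, and the slight shrinking of the betting range are illustrative assumptions) that computes the empirical information projection through the dual representation above, and reads off the grid points retained by a threshold of the form \(\log(3n^{2}/\alpha)/n\), mimicking (8):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kl_inf_emp(x, m, side):
    """Empirical KL_inf^+/-(P_hat_n, m) via the dual: sup_lam mean(log(1 + lam*(x - m)))."""
    # Shrink the endpoints slightly so the log stays finite at the boundary.
    lo, hi = (-0.999 / (1.0 - m), 0.0) if side == '+' else (0.0, 0.999 / m)
    res = minimize_scalar(lambda lam: -np.mean(np.log1p(lam * (x - m))),
                          bounds=(lo, hi), method='bounded')
    return -res.fun

rng = np.random.default_rng(0)
n, alpha = 2000, 0.05
x = rng.beta(8, 2, size=n)                 # toy [0,1]-valued sample
thr = np.log(3 * n**2 / alpha) / n         # threshold of the form in (8)
grid = np.linspace(0.001, 0.999, 999)
kept = [m for m in grid
        if max(kl_inf_emp(x, m, '+'), kl_inf_emp(x, m, '-')) < thr]
print(min(kept), max(kept))                # endpoints of the grid-discretized CI
```

As the threshold decreases with \(n\), the retained set shrinks, which is precisely the mechanism behind the width bound in (8).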
To obtain the deterministic upper bound on the width, we need a high probability bound for the concentration of the empirical information projection about its population value for (almost) all \(m\in[0,1]\). Existing concentration inequalities, such as those derived by Honda and Takemura (2015, § 7), when applied directly are not sufficiently tight for our purposes. Instead, we take an alternative approach, by first obtaining a wider CI that contains \(C_{n}^{(\mathrm{bet})}\), and has a width of \(2b(n,\alpha)\). By focusing on this narrow band, we show that the information projection can be well approximated by the first two terms of its Taylor expansion. Hence, by controlling the deviations of the first two empirical moments from their population terms, we get a sufficiently tight concentration result for the empirical information projection about its population value. The final step is to take a union bound for \(m\) values over a grid \(\mathcal{M}_{n}\) of size \(n^{2}+1\), consisting of equally spaced points inside the larger band of size \(2b(n,\alpha)\). This gives us a lower bound on the wealth, \(W_{n}(m_{i})\), for points \(m_{i}\in\mathcal{M}_{n}\). We obtain the final bound (9) by taking the inverse of the information projection (i.e., \(\mathrm{KL}_{\inf}^{+}(P^{*},\cdot)^{-1}\)) over the points in the grid \(\mathcal{M}_{n}\), and then adding the grid spacing term, \(1/n^{2}\), to account for the possible discretization error. The details of these steps are in Appendix C.2.

## 5 Non-asymptotic analysis of betting CS

We now turn our attention to the betting confidence sequences (CS) constructed by Waudby-Smith and Ramdas (2023), and show that they also achieve a near-optimal performance, similar to the betting CI. The construction of betting CSs follows the same general steps introduced in Section 2.2, as we recall below. **Definition 5.1** (Betting CS).: Consider a stream of observations, \(X_{1},X_{2},\ldots\), drawn i.i.d. from a distribution \(P^{*}\) supported on \([0,1]\) with mean \(\mu\). For all \(m\in[0,1]\), set \(W_{0}(m)=1\), and define the wealth process as \[W_{n}(m)=W_{n-1}(m)\times\left(1+\lambda_{n}(m)(X_{n}-m)\right),\quad\text{ for all }m\in[0,1],\] where \(\left\{\lambda_{n}(m):n\geq 1,\,m\in[0,1]\right\}\) is any sequence of predictable bets. Then, the level-\((1-\alpha)\) betting CS is a collection of sets \(\left\{C_{n}\subset[0,1]:n\geq 1\right\}\), defined as \[C_{n}=\{m\in[0,1]:W_{n}(m)<1/\alpha\},\] satisfying the uniform coverage guarantee \(\mathbb{P}\left(\forall n\geq 1:\mu\in C_{n}\right)\geq 1-\alpha\). As a consequence of the uniform coverage guarantee, we can also assume that the sets in the CS are nested; that is, \(C_{n}\subset C_{n^{\prime}}\) for all \(n^{\prime}\leq n\). In other words, for any \(n\geq 1\), we can let \(C_{n}\) be the running intersection of all \(C_{n^{\prime}}\) for \(n^{\prime}\leq n\) due to the time-uniform nature of CS. The main difference between the betting CI of Definition 2.2, and the betting CS defined above, is that the bets \(\left\{\lambda_{t}(m):t\geq 1,\,m\in[0,1]\right\}\) are not optimized with a pre-specified horizon \(n\). The uniform coverage guarantee of the betting CS is a simple consequence of Ville's inequality, a time-uniform variant of Markov's inequality, and we refer the reader to Waudby-Smith and Ramdas (2023) for further details. To analyze the behavior of betting CSs, with possibly random widths, we first introduce the notion of "effective width" in Definition 5.2; a small simulation sketch of the construction in Definition 5.1 is given first, below.
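The following minimal simulation of Definition 5.1 (Python/NumPy; the uniform data stream, the grid of candidate means, and the simple truncated plug-in bet are illustrative assumptions, not the mixture strategy analyzed in our results) constructs the wealth processes over a grid of \(m\) values, thresholds them at \(1/\alpha\), and maintains the running intersection:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, T = 0.05, 500
xs = rng.uniform(0.2, 0.8, size=T)      # toy bounded data stream, mean 0.5
grid = np.linspace(0.01, 0.99, 197)     # candidate means m
wealth = np.ones_like(grid)
lo, hi = 0.0, 1.0                       # running intersection of the sets C_n
s = 0.5                                 # pseudo-sum giving a predictable mean guess
for t, x in enumerate(xs):
    mu_hat = s / (t + 1)                # depends only on X_1, ..., X_{t-1}
    lam = (mu_hat - grid) / 0.25        # illustrative plug-in bet
    lam = np.clip(lam, -0.5 / (1 - grid), 0.5 / grid)  # keeps wealth positive
    wealth *= 1.0 + lam * (x - grid)
    keep = grid[wealth < 1.0 / alpha]   # C_t on the grid (interval hull below)
    if keep.size:
        lo, hi = max(lo, keep.min()), min(hi, keep.max())
    s += x
print(f"betting CS after {T} rounds: [{lo:.3f}, {hi:.3f}]")
```

Because the threshold \(1/\alpha\) is fixed while the wealth at every \(m\neq\mu\) tends to grow, the running intersection shrinks over time, in line with the time-uniform coverage guarantee.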
Then, in Proposition 5.4 we obtain a lower bound on the effective width of any CS, and show that the betting CS instantiated with the mixture method stated in Definition 2.3 nearly matches this fundamental limit in Theorem 5.6. ### Lower Bound on any CS effective width In this section, we establish fundamental limits on the performance of any valid level-\((1-\alpha)\) CS, with possibly random widths. We begin by introducing a notion of _effective width_, that we will use to characterize the performance of CSs. **Definition 5.2** (Effective width).: Let \(\mathcal{C}\) denote a method for constructing confidence sequences. Given a stream of observations \(X_{1},X_{2},\ldots\stackrel{{\text{i.i.d.}}}{{\sim}}P^{*}\), define the random stopping times \[T_{w}\equiv T_{w}(P^{*},\alpha)\coloneqq\inf\left\{n\geq 1:|\mathcal{C}(X^{n}, \alpha)|\leq w\right\},\quad\text{for }w\in(0,\infty).\] Then, the effective width of the CS after \(n\) observations is defined as: \[w_{e}(n,P^{*},\alpha)\coloneqq\inf\left\{w>0:\mathbb{E}[T_{w}]\leq n\right\}.\] In words, the effective width of a CS after \(n\) observations is the minimum \(w>0\) for which the expected number of observations needed by the CS to reduce its width below \(w\) is no larger than \(n\). **Remark 5.3**.: While this stopping-time-based definition of effective width turns out to be particularly well suited to analyzing confidence sequences, we note that the time-uniform coverage guarantee is not necessary for this definition to be valid. In particular, the same definition can also be applied to CIs with random or even deterministic widths. In fact, the definition of effective width reduces to that of the usual width for the case of CIs with deterministic widths. To see why this is true, suppose \(\mathcal{C}\) constructs CIs with deterministic widths \(w(n,P^{*},\alpha)\), that are non-increasing in \(n\). Then, the term \(T_{w}\) in this case is simply a constant. Hence, \(w_{e}(n,P^{*},\alpha)\) is equal to \(\inf\{w\geq 0:T_{w}(P^{*},\alpha)\leq n\}\). Now, note that for all \(w\in[w(n,P^{*},\alpha),w(n-1,P^{*},\alpha))\), we have \(T_{w}=n\). This implies that \[w_{e}(n,P^{*},\alpha)=\inf\{w:w\in[w(n,P^{*},\alpha),w(n-1,P^{*},\alpha))\}=w( n,P^{*},\alpha),\] as claimed. For example, in the case of the Hoeffding CI, we have \(T_{w}=\left\lceil 2\log(2/\alpha)/w^{2}\right\rceil\), for all \(w>0\), which implies that \[w_{e}^{H}(n,P^{*},\alpha)=\inf\left\{w\geq 0:\left\lceil\frac{2\log(2/\alpha)}{w^{2}} \right\rceil\leq n\right\}=2\sqrt{\frac{\log(2/\alpha)}{2n}}=w_{H}(n,P^{*}, \alpha).\] We now present the main result of this section, that characterizes the minimum achievable effective width of a confidence sequence (CS). **Proposition 5.4**.: _Suppose \(\mathcal{C}\) is a method for constructing confidence sequences that satisfies \(\mathbb{E}[T_{w}(P,\alpha)]<\infty\) for all \(w>0\), and for all \(P\in\mathcal{P}_{0}\subset\mathcal{P}(\mathbb{R})\). With \(a(n,\alpha)\coloneqq\log\left((1-\alpha)^{1-\alpha}\alpha^{2\alpha-1}\right)/n\), introduce_ \[w_{e}^{*}(n,P,\alpha)=\max\left\{\operatorname{KL}_{\inf}^{+}(P,\cdot, \mathcal{P}_{0})^{-1}\left(a(n,\alpha)\right)-\mu,\;\mu-\operatorname{KL}_{ \inf}^{-}(P,\cdot,\mathcal{P}_{0})^{-1}\left(a(n,\alpha)\right)\right\},\] _where the inverse information projection terms, \(\operatorname{KL}_{\inf}^{+}(P,\cdot,\mathcal{P}_{0})^{-1}(\cdot)\) and \(\operatorname{KL}_{\inf}^{-}(P,\cdot,\mathcal{P}_{0})^{-1}(\cdot)\), were defined in (6).
Then the effective width of the CS constructed by method \(\mathcal{C}\) using i.i.d. observations from \(P\in\mathcal{P}_{0}\) must satisfy \(w_{e}(n,P,\alpha)\geq w_{e}^{*}(n,P,\alpha)\)._ _Proof outline of Proposition 5.4._ The proof of this proposition uses a similar high-level strategy to the one outlined for proving Proposition 4.3. The key difference is that we now set up a sequential testing problem, for which we design a test using the given method \(\mathcal{C}\) for constructing CSs. We present the details in Appendix D.1. ### Effective width of betting CS In the previous subsection, we obtained a lower bound on the effective width of any method, \(\mathcal{C}\), for constructing confidence sequences. We now show that the betting CS proposed by Waudby-Smith and Ramdas [2023] nearly matches this lower bound. We first present a simplifying assumption about the data generating distribution \(P^{*}\). **Assumption 5.5**.: _The distribution \(P^{*}\) is supported over a closed interval that is a strict subset of \([0,1]\). That is, \(\text{supp}(P^{*})=[A,B]\) with \(A,B\in(0,1)\)._ This assumption implies that for all values of \(m\), the logarithmic increment of the wealth process, \(\log(1+\lambda_{t}(m)(X_{t}-m))\), is a bounded random variable. While this assumption is not strictly necessary for obtaining the next result, it does simplify the final expression of the upper bound. **Theorem 5.6**.: _Suppose Assumption 5.5 is true, and consider a betting CS constructed using the mixture betting strategy (Definition 2.3) that incurs a logarithmic regret. Then, the effective width of this CS satisfies:_ \[w_{e}^{(bet)}(n,P^{*},\alpha)\leq 2\max\left\{\operatorname{KL}_{ \inf}^{+}(P^{*},\cdot)^{-1}(b(n,\alpha))-\mu,\;\mu-\operatorname{KL}_{\inf}^{ -}(P^{*},\cdot)^{-1}(b(n,\alpha))\right\},\] \[\text{where }b(n,\alpha)=\frac{2\log(n^{2}/\alpha)+2C}{n},\] _for a constant \(C\) that depends on the distribution \(P^{*}\) (Assumption 5.5)._ **Remark 5.7**.: As in the case of Theorem 4.6, this result implies that the effective width is of the same order as the method-agnostic lower bound established in Proposition 5.4, modulo a constant multiplicative factor and an additive \(2\log(n^{2})+2C\) term in the argument to the inverse information projection. The term \(C\) arises due to the possible overshoot of the wealth process beyond the threshold \(1/\alpha\), and it is generally unavoidable without further assumptions. Interestingly, the upper bound has a much smaller logarithmic penalty term, as compared to the upper bound on the betting CI width of Theorem 4.6. This is because of the expectation in the definition of effective width, which causes the population information projection (instead of the empirical information projection) to naturally appear in the upper-bound expression, so we do not need to apply a concentration result over a uniform grid to get the required form matching the lower bound. Proof outline of Theorem 5.6.: By appealing to the regret bound of the mixture betting strategy, we can reduce the problem to that of analyzing the 'oracle' wealth processes \(\{W_{t}^{*,a}(m):t\geq 1,\ m\in[0,1]\}\) for \(a\in\{+,-\}\), with \(W_{t}^{*,a}(m)=\prod_{i=1}^{t}\left(1+\lambda_{a}^{*}(m)(X_{i}-m)\right)\). Due to the i.i.d. nature of the multiplicative factors, this process is significantly more tractable to analyze than the true wealth process.
In particular, we proceed as follows: * In the first step, we show that for any \(w\), we can upper bound the stopping time \(T_{w}\) in terms of the maximum of two stopping times, \(\tau_{w}^{+}\) and \(\tau_{w}^{-}\), associated with the oracle wealth process. In particular, \(\tau_{w}^{+}\) (resp. \(\tau_{w}^{-}\)) is the first time the log of the oracle wealth process at \(m=\mu+w/2\) (resp. \(m=\mu-w/2\)) exceeds \(\log(n^{2}/\alpha)\). This implies the bound \(\mathbb{E}[T_{w}]\leq\max\{\mathbb{E}[\tau_{w}^{+}],\,\mathbb{E}[\tau_{w}^{-}]\}\). * The key benefit of working with the oracle process is that \(\log(W_{n}^{*,\pm}(m))\) are sums of i.i.d. terms, as the oracle bets defined in (7) are fixed, non-random quantities. This is unlike the actual wealth process with predictable bets. Hence, by applying appropriate optional stopping arguments, we show that \(\mathbb{E}[\tau_{w}^{+}]\) (resp. \(\mathbb{E}[\tau_{w}^{-}]\)) can be upper bounded in terms of \(\mathrm{KL}_{\inf}^{+}(P^{*},\mu+w)\) (resp. \(\mathrm{KL}_{\inf}^{-}(P^{*},\mu-w)\)). * The final step in concluding the proof involves inverting the two information projection terms that bound \(\mathbb{E}[T_{w}]\), and some simplification to get the required bound. The details are in Appendix D.2.

## 6 Discussion, extensions, and open questions

In Section 6.1, we instantiate our general lower bound for CIs (Proposition 4.3) for some univariate distributions, and compare them with the optimal CI width for those distributions. Next, in Section 6.2, we show that the techniques we developed for obtaining the lower bounds on the widths of CIs and CSs can be easily generalized to the case of multivariate observations. In Section 6.3, we extend our asymptotic analysis to the practically interesting case of estimating the mean of \(M\) unknown values in the unit interval, by uniformly sampling them without replacement. We end with some preliminary discussion about appropriate notions of second-order limiting widths of CIs in Section 6.4. ### Instantiations of the lower bound Since the non-asymptotic lower bounds in Proposition 4.3 and Proposition 5.4 are stated in terms of the abstract information projections, they might be difficult to interpret. For many distributions, such as members of a one-parameter exponential family, the lower bounds admit closed-form expressions. In this section, we study the tightness of the CI lower bound (since the expressions for the CS lower bound are almost identical), by comparing its instantiations for two specific distributions with the width of an oracle CI constructed using the quantiles of the true distributions. **Bernoulli distribution.** We begin with the simplest case, where the distribution \(P^{*}\) is a Bernoulli with mean \(\mu\). In this case, \(\mathrm{KL}_{\inf}^{+}\) and \(\mathrm{KL}_{\inf}^{-}\) can be explicitly calculated. In particular, both \(\mathrm{KL}_{\inf}^{+}(P^{*},m)\) and \(\mathrm{KL}_{\inf}^{-}(P^{*},m)\) are equal to \(\mu\log(\mu/m)+(1-\mu)\log((1-\mu)/(1-m))\) for all \(m\geq\mu\) and \(m\leq\mu\) respectively. We can numerically invert these expressions to obtain a good approximation of the lower bounds derived in Proposition 4.3. To evaluate the tightness of the lower bound, we compare it to the width of an oracle CI constructed with the knowledge of the distribution of the sample mean of \(X_{1},\ldots,X_{n}\). In particular, \(\sum_{i=1}^{n}X_{i}\) is distributed according to the Binomial distribution with parameters \(n\) and \(\mu\); a numeric sketch of this comparison is given below.
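The following minimal sketch (Python with SciPy; the values of \(\mu\), \(\alpha\), and \(n\) are illustrative) inverts the Bernoulli \(\mathrm{KL}_{\inf}\) expression above by bisection to evaluate the lower bound of Proposition 4.3, and computes the oracle width from Binomial quantiles:

```python
import math
from scipy.stats import binom

def kl_ber(mu, m):
    """d_KL(Ber(mu), Ber(m)) for mu, m in (0, 1)."""
    return mu * math.log(mu / m) + (1 - mu) * math.log((1 - mu) / (1 - m))

def inv_kl(mu, x, upper=True):
    """Inverse information projections of eq. (6), via bisection."""
    lo, hi = (mu, 1 - 1e-12) if upper else (1e-12, mu)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        big = kl_ber(mu, mid) >= x
        if upper:
            lo, hi = (lo, mid) if big else (mid, hi)
        else:
            lo, hi = (mid, hi) if big else (lo, mid)
    return 0.5 * (lo + hi)

mu, alpha, n = 0.9, 0.01, 5000
a = math.log((1 - alpha)**(1 - alpha) * alpha**(2 * alpha - 1)) / n
w_lower = max(inv_kl(mu, a, True) - mu, mu - inv_kl(mu, a, False))
z_lo, z_hi = binom.ppf(alpha / 2, n, mu), binom.ppf(1 - alpha / 2, n, mu)
w_oracle = (z_hi - z_lo) / n
print(w_oracle, w_lower)   # oracle width vs. lower bound (cf. Figure 2)
```

Running this for a range of \(n\) traces out the two curves compared in Figure 2, with the ratio of the two widths stabilizing to a constant as \(n\) grows.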
Hence, we can define an oracle CI as \(C_{n}^{*}=[\mu\pm w^{*}/2]\), with \(w^{*}=(z_{1-\alpha/2}-z_{\alpha/2})/n\), where \(z_{\beta}\) denotes the \(\beta\)-quantile of the Binomial distribution with parameters \(n\) and \(\mu\). We plot the variation of the lower bound, and the width of the oracle CI, with the sample size \(n\) for \(\mu=0.9\) in Figure 2. **Gaussian distribution.** Similarly, for the class of Gaussian distributions with variance \(\sigma^{2}\), \(\mathrm{KL}^{+}_{\inf}(P^{*},m)\) and \(\mathrm{KL}^{-}_{\inf}(P^{*},m)\) are both equal to \((\mu-m)^{2}/2\sigma^{2}\), for \(m\geq\mu\) and \(m\leq\mu\) respectively. Hence, the result of Proposition 4.3 implies that the width of any CI for this class must satisfy \(w(n,P^{*},\alpha)\geq 2\sigma\sqrt{a(n,\alpha)/2}\), where \(a(n,\alpha)\) is as defined in Proposition 4.3. As with the Bernoulli case, we will compare the lower bound with the width of the oracle CI, defined as \(C^{*}_{n}=[\mu\pm w^{*}/2]\) with \(w^{*}=\sigma(z_{1-\alpha/2}-z_{\alpha/2})/\sqrt{n}\), where \(z_{\beta}\) denotes the \(\beta\)-quantile of the standard normal distribution. The variation of the predicted lower bound and the oracle upper bound with the sample size \(n\), for \(P^{*}=N(0,4)\), is shown in Figure 2. ### Extending lower bounds to the multivariate case While our focus in this paper is on the case of univariate observations, we note that the techniques used to obtain the lower bounds (Proposition 4.3 and Proposition 5.4) can be easily extended to more general observation spaces. To illustrate this, we state the generalizations of these two lower bounds for the case of observations lying in \(\mathcal{X}=\mathbb{R}^{d}\), for some integer \(d\geq 1\). In particular, suppose we are given observations \(X_{1},X_{2},\ldots\), drawn i.i.d. from a distribution \(P^{*}\in\mathcal{P}_{0}\subset\mathcal{P}(\mathcal{X})\), with mean vector \(\mu\). Our first result considers a method (\(\mathcal{C}\)) for constructing confidence sets for the mean of \(P^{*}\) based on observations \(X_{1},\ldots,X_{n}\), which satisfies the condition that \[\sup_{x,x^{\prime}\in\mathcal{C}(X_{1}^{n},\alpha)}\ \|x-x^{\prime}\|_{2}\leq w (n,P^{*},\alpha), \tag{10}\] where \(\|\cdot\|_{2}\) denotes the usual \(\ell_{2}\)-norm on \(\mathbb{R}^{d}\). In other words, the confidence set constructed by the method \(\mathcal{C}\) based on \(n\) observations is contained in a ball of width no larger than a deterministic value \(w(n,P^{*},\alpha)\). To state the lower bound, we need to introduce a notion of information projection for the multivariate case: \[\mathrm{KL}_{\inf}(P,r,\mathcal{P}_{0})=\inf_{Q\in\mathcal{P}_{0 }:\|\mu_{Q}-\mu_{P}\|_{2}\geq r}d_{\mathrm{KL}}(P,Q),\] \[\mathrm{and}\quad\mathrm{KL}_{\inf}(P,\cdot,\mathcal{P}_{0})^{-1} (x)=\inf\{r\geq 0:\mathrm{KL}_{\inf}(P,r,\mathcal{P}_{0})\geq x\}. \tag{11}\] **Proposition 6.1**.: _Suppose \(\mathcal{C}\) denotes any strategy for constructing confidence sets for the mean vector \(\mu\) satisfying (10) for all \(P\in\mathcal{P}_{0}\subset\mathcal{P}(\mathcal{X})\) for \(\mathcal{X}=\mathbb{R}^{d}\). Introduce \(a(n,\alpha)=\frac{\log\left((1-\alpha)^{1-\alpha}\alpha^{2\alpha-1}\right)}{n}\), and_ \[w^{*}(n,P,\alpha)=\mathrm{KL}_{\inf}(P,\cdot,\mathcal{P}_{0})^{-1}\left(a(n, \alpha)\right),\] _where \(\mathrm{KL}_{\inf}(P,\cdot,\mathcal{P}_{0})^{-1}\) was defined in (11).
Then, the width of the confidence set must satisfy \(w(n,P,\alpha)\geq w^{*}(n,P,\alpha)\)._

Figure 2: The figures plot the variation (with the sample size \(n\)) of the predicted lower bound (Proposition 4.3) on the width of any CI (denoted by ‘lower’), and the width of the oracle CI (denoted by ‘upper’) for the mean of Bernoulli (left) and Gaussian (right) distributions (with \(\alpha\) set to \(0.01\)). In both cases, the ratio of the upper and lower bounds is approximately \(3.43\) for all values of \(n\geq 1000\), indicating that the lower bound captures the ‘right’ behavior of the optimal width with \(n\), modulo a constant multiplicative factor.

The proof of this statement follows by adapting the arguments developed for proving Proposition 4.3, and the details are in Appendix E.1. Next, we introduce the notion of effective width for the multivariate case, that is a straightforward generalization of the analogous univariate term stated in Definition 5.2. Let \(\mathcal{C}\) denote a method for constructing confidence sequences for the mean \(\mu\) of a distribution \(P^{*}\in\mathcal{P}_{0}\subset\mathcal{P}(\mathcal{X})\). Given a stream of observations \(X_{1},X_{2},\ldots\stackrel{{\text{i.i.d.}}}{{\sim}}P^{*}\), define the random stopping time \[T_{w}\equiv T_{w}(P^{*},\alpha)\coloneqq\inf\left\{n\geq 1:\sup_{x,x^{\prime} \in\mathcal{C}(X^{n},\alpha)}\|x-x^{\prime}\|_{2}\leq w\right\},\quad\text{for }w\in(0,\infty).\] Then, the effective width of the CS after \(n\) observations is defined as: \[w_{e}(n,P^{*},\alpha)\coloneqq\inf\left\{w\geq 0:\mathbb{E}[T_{w}(P^{*}, \alpha)]\leq n\right\}.\] Our next result obtains a lower bound on the effective width of any scheme for constructing a CS for the mean vector based on i.i.d. observations. **Proposition 6.2**.: _Suppose \(\mathcal{C}\) is a method for constructing confidence sequences that satisfies \(\mathbb{E}[T_{w}(P,\alpha)]<\infty\) for all \(w>0\), and for all \(P\in\mathcal{P}_{0}\subset\mathcal{P}(\mathcal{X})\) for \(\mathcal{X}=\mathbb{R}^{d}\). With \(a(n,\alpha)\coloneqq\log\left((1-\alpha)^{1-\alpha}\alpha^{2\alpha-1}\right)/n\), introduce the following term:_ \[w^{*}(n,P,\alpha)=\operatorname{KL}_{\inf}(P,\cdot,\mathcal{P}_{0})^{-1} \left(a(n,\alpha)\right).\] _Then the effective width of the CS constructed by strategy \(\mathcal{C}\) using i.i.d. observations from \(P\in\mathcal{P}_{0}\) must satisfy \(w_{e}(n,P,\alpha)\geq w^{*}(n,P,\alpha)\)._ The proof of this result is also a simple generalization of the corresponding univariate result (Proposition 5.4), and we present the details in Appendix E.2. ### Sampling without replacement Suppose \(\mathcal{X}_{M}\) denotes a collection of \(M\) numbers, \(\{x_{1},\ldots,x_{M}\}\), each lying in the unit interval \([0,1]\). Define the mean and variance terms as \[\mu\equiv\mu_{M}=\frac{1}{M}\sum_{i=1}^{M}x_{i},\quad\text{and} \quad\sigma^{2}\equiv\sigma_{M}^{2}=\frac{1}{M}\sum_{i=1}^{M}(x_{i}-\mu)^{2}.\] Suppose \(X_{1},X_{2},\ldots\) are drawn uniformly from this set without replacement (WoR); that is, \(X_{1}\sim\operatorname{Uniform}(\mathcal{X}_{M})\), \(X_{2}\sim\operatorname{Uniform}\left(\mathcal{X}_{M}\setminus\{X_{1}\}\right)\), and so on. Our goal is to construct a high-probability (i.e., with probability at least \(1-\alpha\), for a given \(\alpha\)) estimate of the mean \(\mu\), based on these observations drawn WoR. We begin by recalling some existing results on this topic that are most relevant to us.
As in the with-replacement case, we can construct CIs based on standard exponential concentration inequalities. However, one key feature of the WoR setting is that the uncertainty about the mean \(\mu\) rapidly decays to zero (exactly) as the sample size \(n\) approaches \(M\). This fact was captured by Serfling (1974), who obtained an improved version of an earlier inequality by Hoeffding (1963), with the term \(n\) replaced by \(\frac{n}{1-(n-1)/M}\). Bardenet and Maillard (2015) obtained a slight improvement of Serfling's result when \(n\geq M/2\), and then used it to derive variants of the Hoeffding-Serfling, Bernstein-Serfling, and Empirical-Bernstein-Serfling inequalities. Finally, Waudby-Smith and Ramdas (2023) showed that their techniques also extend easily to the WoR case, and in particular, they proposed WoR variants of their PrPl-EB CI and betting CI. We discuss the details about the construction of these CIs in Appendix F. In this section, we introduce an analog of the first-order limiting width defined in (1) for the WoR case. First, we state our main assumption. **Assumption 6.3**.: _Considering a sequence of problems \((\mathcal{X}_{M})_{M\geq 1}\), with \(\mu_{M}\coloneqq\frac{1}{M}\sum_{i=1}^{M}x_{i}\), assume that:_ \[\lim_{M\to\infty}\frac{1}{M}\sum_{i=1}^{M}(x_{i}-\mu_{M})^{2}=\lim_{M\to\infty }\sigma_{M}^{2}=\sigma^{2}>0.\] _A simpler version of the above condition is assuming that \(\sigma_{M}^{2}=\sigma^{2}\) and \(\mu_{M}=\mu\) for all \(M\in\mathbb{N}\)._ To define the limiting width of the CI in the WoR case, we fix a \(\rho\in(0,1]\), and consider the limiting value of the scaled width of the CI based on \(n=\lceil\rho M\rceil\) observations, as \(M\to\infty\). That is, \[\gamma_{1}(\rho)\coloneqq\limsup_{M\to\infty,n=\lceil\rho M\rceil}\sqrt{n}w_{n}.\] The main result of this section is a WoR analog of Theorem 3.2. In particular, it says that for the same choice of the bets, the limiting width of the WoR betting CI is never larger than that of the WoR PrPl-EB CI. Furthermore, both of these CIs capture the improvement in the width of the CI when \(n\) approaches \(M\), or equivalently when \(\rho\to 1\). **Proposition 6.4**.: _Suppose Assumption 6.3 is true. Then, with \(n=\lfloor\rho M\rfloor\) for some \(\rho>0\), we have_ \[\gamma_{1}^{(bet)}(\rho)\leq\gamma_{1}^{(\text{PrPl-EB})}(\rho)=2\sigma\sqrt{2 \log(2/\alpha)}\left(\frac{\rho}{-\log\left(1-\rho\right)}\right).\] The proof of this statement is in Appendix F.2. **Remark 6.5**.: Consider the case when \(\rho=1/2\). Then, we have \(\lim_{M\to\infty}\sqrt{n}w_{n}=\frac{2\sigma\sqrt{2\log(2/\alpha)}}{2\,\log 2 }\approx 1.44\sigma\sqrt{2\log(2/\alpha)}\). In comparison, the Bernstein CI derived by Bardenet and Maillard (2015) with the knowledge of the variance has a limiting value of \(2\sigma\sqrt{2\log(2/\alpha)(1-1/2)}\approx 1.414\sigma\sqrt{2\log(2/\alpha)}\). While close, the Bernstein CI of Bardenet and Maillard (2015) has a better limiting width. Investigating whether the limiting widths of the betting and PrPl-EB CIs can be improved, as well as establishing the fundamental performance limits in the WoR case (i.e., analogs of Proposition 4.3 and Proposition 5.4), are interesting directions for future work. ### Second-order limiting width In (1), we defined the notion of the first-order limiting width (denoted by \(\gamma_{1}\)) to compare different CIs.
Since the width of most practical CIs converges to zero at an \(\mathcal{O}(1/\sqrt{n})\) rate, their first-order limiting width is either a constant (for Hoeffding, Bernstein, MP-EB, PrPl-EB), or upper bounded by a constant (betting CI) almost surely. In this section, we explore the notion of second-order limiting widths for CIs. **Definition 6.6**.: Let \(\mathcal{C}\) denote a method of constructing CIs: that is, for any \(n\geq 1\), \(\alpha\in(0,1)\), and \((X_{1},\ldots,X_{n})\stackrel{{\text{i.i.d.}}}{{\sim}}P^{*}\), the set \(C_{n}=\mathcal{C}(X^{n},\alpha)\) is a level-\((1-\alpha)\) CI for the mean \(\mu\). Denote the width of \(C_{n}\) with \(2w_{n}\), and assume that its first-order limiting width, \(\gamma_{1}\), is a constant almost surely. Then, we define the second-order limiting width as \[\gamma_{2}=\lim_{n\to\infty}n\times\left(w_{n}-\frac{\gamma_{1}}{\sqrt{n}} \right).\] For CIs with deterministic widths, such as Hoeffding or Bernstein, \(\gamma_{2}\) is a constant. For example, \(\gamma_{2}^{(H)}=0\), and \(\gamma_{2}^{(B)}=4\log(2/\alpha)/3\). For CIs with random widths, such as MP-EB, the fluctuations of the estimated parameter are inflated by a factor of \(n\), which rules out characterizing \(\gamma_{2}\) in an almost sure sense. Instead, we need to explore weaker forms of convergence for the second-order limiting widths of such CIs. For example, we can prove that \(\gamma_{2}^{(\text{MP-EB})}\) converges in distribution to a Gaussian random variable with mean \(14\log(4/\alpha)/3\), as we show in our next result. In the statement of this result, we use \(\mu_{4}\) to denote the fourth central moment of \(X\); that is, \(\mu_{4}=\mathbb{E}[(X-\mu)^{4}]\). **Proposition 6.7**.: _With \(w_{n}^{(\text{MP-EB})}\) denoting the width of the MP-EB CI, we have_ \[n\left(w_{n}^{(\text{MP-EB})}-\frac{\gamma_{1}^{(\text{MP-EB})}}{\sqrt{n}} \right)=\frac{14\log(4/\alpha)}{3}+T_{n}^{(\text{MP-EB})}+o_{P}(1),\] _where \(o_{P}(1)\) is a term converging to \(0\) in probability, and \(T_{n}^{(\text{MP-EB})}\) denotes a zero-mean statistic, that converges in distribution to \(N(0,2\log(4/\alpha)(\mu_{4}-\sigma^{4}))\). Similarly, the PrPl-EB CI satisfies_ \[n\left(w_{n}^{(\text{PrPl-EB})}-\frac{\gamma_{1}^{(\text{PrPl-EB})}}{\sqrt{n} }\right)=\frac{4\log(2/\alpha)}{3}+T_{n}^{(\text{PrPl-EB})}+o_{P}(1),\] _where \(T_{n}^{(\text{PrPl-EB})}\) is again a zero-mean random variable._ The proof of this result is in Appendix G. Unlike MP-EB, the leading constant \(4\log(2/\alpha)/3\) in the above expression for the second-order width statistic of the PrPl-EB CI matches the second-order term of the Bernstein CI. However, we have not managed to prove the convergence of \(T_{n}^{(\text{PrPl-EB})}\) in any suitable sense. It is an interesting question for future work whether we can modify the PrPl-EB CI (or consider an alternative definition of the second-order limiting width that may be better suited to CIs with random widths), to guarantee that it dominates \(T_{n}^{(\text{MP-EB})}\) in an appropriate sense.

## 7 Conclusion

In this paper, we provided theoretical justification for the superior empirical performance of the betting confidence intervals (CIs) and confidence sequences (CSs) reported by Waudby-Smith and Ramdas (2023).
We first showed that the first-order limiting width of the betting CI (with a specific choice of bets) is strictly smaller than that of the empirical Bernstein CIs of Maurer and Pontil (2009), and is at least as small as the limiting width of a Bernstein CI constructed with the knowledge of the true variance. Then, we established the near-optimality of the betting CI (constructed using a mixture strategy) in the finite-\(n\) regime, by showing that the width of the betting CI is of the same order as the fundamental, method-agnostic, limit. Finally, we showed that a similar near-optimality guarantee also holds for betting CSs. Together these results provide strong theoretical arguments supporting the empirically observed superiority of betting CI/CS in both the asymptotic, and finite-sample regimes. Our results open up several interesting directions for future work. For instance, our non-asymptotic analysis leaves a gap (inflation by a constant factor, and a \(\mathcal{O}\left(\log(n)/n\right)\) additive term in the argument of inverse information projection) between the upper bound achieved by the betting CI/CS, and the corresponding method-agnostic lower bounds. Investigating whether this gap can be closed by using more refined analysis techniques is an important question for future work. Another important direction is to extend our results to the without-replacement (WoR) sampling setting. We presented a preliminary result on this topic in Section 6.3, but developing the analogs of our non-asymptotic results for the WoR case will require fundamentally new ideas. Finally, as we showed in Section 6.2, our non-asymptotic lower bounds on the size of CIs can be easily extended to more general observation spaces, beyond the univariate case that was the main focus of this paper. Designing CIs/CSs that match these lower bounds for these general observation spaces using the betting framework is another interesting question for future work. ### Acknowledgement The authors acknowledge support from NSF grants IIS-2229881 and DMS-2310718.
2308.04675
Maximizing Network Connectivity for UAV Communications via Reconfigurable Intelligent Surfaces
It is anticipated that integrating unmanned aerial vehicles (UAVs) with reconfigurable intelligent surfaces (RISs), resulting in RIS-assisted UAV networks, will offer improved network connectivity against node failures for the beyond 5G networks. In this context, we utilize a RIS to provide path diversity and alternative connectivity options for information flow from user equipment (UE) to UAVs by adding more links to the network, thereby maximizing its connectivity. This paper employs the algebraic connectivity metric, which is adjusted by the reflected links of the RIS, to formulate the problem of maximizing the network connectivity in two cases. First, we consider formulating the problem for one UE, which is solved optimally using a linear search. Then, we consider the problem of a more general case of multiple UEs, which has high computational complexity. To tackle this problem, we formulate the problem of maximizing the network connectivity as a semi-definite programming (SDP) optimization problem that can be solved efficiently in polynomial time. In both cases, our proposed solutions find the best combination between UE(s) and UAVs through the RIS. As a result, it tunes the phase shifts of the RIS to direct the signals of the UEs to the appropriate UAVs, thus maximizing the network connectivity. Simulation results are conducted to assess the performance of the proposed solutions compared to the existing solutions.
Mohammed S. Al-Abiad, Mohammad Javad-Kalbasi, Shahrokh Valaee
2023-08-09T03:06:46Z
http://arxiv.org/abs/2308.04675v1
# Maximizing Network Connectivity for UAV Communications via Reconfigurable Intelligent Surfaces

###### Abstract

It is anticipated that integrating unmanned aerial vehicles (UAVs) with reconfigurable intelligent surfaces (RISs), resulting in RIS-assisted UAV networks, will offer improved network connectivity against node failures for the beyond 5G networks. In this context, we utilize a RIS to provide path diversity and alternative connectivity options for information flow from user equipment (UE) to UAVs by adding more links to the network, thereby maximizing its connectivity. This paper employs the algebraic connectivity metric, which is adjusted by the reflected links of the RIS, to formulate the problem of maximizing the network connectivity in two cases. First, we consider formulating the problem for one UE, which is solved optimally using a linear search. Then, we consider the problem of a more general case of multiple UEs, which has high computational complexity. To tackle this problem, we formulate the problem of maximizing the network connectivity as a semi-definite programming (SDP) optimization problem that can be solved efficiently in polynomial time. In both cases, our proposed solutions find the best combination between UE(s) and UAVs through the RIS. As a result, they tune the phase shifts of the RIS to direct the signals of the UEs to the appropriate UAVs, thus maximizing the network connectivity. Simulation results are conducted to assess the performance of the proposed solutions compared to the existing solutions.

Network connectivity, algebraic connectivity, RIS-assisted UAV communications, graph theory.

## I Introduction

UAVs are expected to have a remarkable impact on the economy by 2026, with a global market value of US$59.2 billion, making the incorporation of UAVs critical in beyond 5G networks [1]. One of the unique features of UAV-assisted communication is improved network connectivity by establishing line-of-sight (LoS) connections with UEs [2]. Meanwhile, the RIS is a promising technique that can be integrated with UAVs to further improve network connectivity [3], particularly in networks that experience deep fades. In this context, RISs can be leveraged to provide path diversity and alternative connectivity solutions for information flow from UEs to UAVs in RIS-assisted UAV networks. The prime concern of UAV communications is that UAV nodes are prone to failure due to several reasons, such as limited energy, hardware failure, or targeted failure in the case of battlefield surveillance systems. Such UAV failures cause network disintegration, and consequently, information flow from UEs to a fusion center through UAVs can be severely impacted. Hence, it is crucial to always keep the network connected, which has been addressed in the literature by adding more backhaul links to the network, e.g., [4]. In spite of recent advances in wireless sensor networks, most of the existing studies consider routing solutions with a focus on extending the battery lifetime of sensor nodes. These works define network connectivity in terms of the network lifetime, i.e., the time until the first node or all of the nodes have failed [5, 6]. However, none of the aforementioned works has explicitly considered the exploitation of RISs to add more reflected links for improving network connectivity.
Different from works [5, 6] that focused on routing solutions, this paper focuses on designing a more connected RIS-assisted UAV network that enables information flow from the UEs to the UAVs even if some of the UAVs have failed. The algebraic connectivity [7], also called the Fiedler metric or the second smallest eigenvalue of the Laplacian matrix representing a graph, is a metric that measures how well a graph is connected. In the literature, such a metric is usually associated with network connectivity [8, 9, 10]. In [8], the authors maximized the algebraic connectivity by positioning the UAV so as to maximize the connectivity of a small-cell backhaul network. A more general study in [9] proposed different network maintenance algorithms to maximize the connectivity of wireless sensor networks. Since the algebraic connectivity is a good measure of how connected the graph is, the more edges that exist between the UEs and the UAVs, the more resilient the designed network is against disconnection due to node failures [9, 10]. To this end, this paper aims to utilize the RIS to add link redundancy to the network and tune the RIS phase shift configurations to direct UEs' signals to appropriate UAVs, so that the connectivity of RIS-assisted UAV networks is maximized. To the best of our knowledge, the problem of maximizing the network connectivity in RIS-assisted UAV networks has not been studied before in the literature. In this paper, we address this problem by employing the concept of the algebraic connectivity [7] of a graph in network connectivity; we then consider two problem cases. First, we formulate the problem for one UE and one RIS and solve it optimally via a linear search. Then, we formulate the problem for a more general case of multiple UEs and one RIS. It is shown that solving this general problem optimally is computationally prohibitive since it requires computing the algebraic connectivity of the resulting network for each possible edge that connects the UEs to the UAVs through the RIS. To tackle this problem, we adjust the algebraic connectivity metric of the original graph network by the candidate edges between the UEs and the UAVs via the RIS. Then, we reformulate the problem of maximizing the network connectivity as a semi-definite programming (SDP) optimization problem that can be solved efficiently in polynomial time. In both cases, our proposed solutions find the best combination between the UE(s) and the UAVs through the RIS by tuning its phase shifts to direct the UEs' signals to the appropriate UAVs, thus maximizing the network connectivity. Simulation results are conducted to assess the performance of the proposed solutions compared to the existing solutions. ## II System Model and Network Connectivity ### _System Model_ We consider a RIS-assisted UAV network with a set of UAVs, one RIS, and multiple UEs that represent ground users, sensors, etc. An example of the considered network is shown in Fig. 1. The sets of UAVs and UEs are denoted as \(\mathcal{A}=\{1,2,\ldots,A\}\) and \(\mathcal{U}=\{1,2,\ldots,U\}\), respectively, where \(A\) is the cardinality of the set \(\mathcal{A}\). All UEs and UAVs are equipped with single antennas. The \(A\) UAVs fly and hover over assigned locations at a fixed flying altitude and connect \(U\) UEs with the fusion center. The locations of the UAVs, UEs, and the RIS are assumed to be fixed. We assume that all channels follow a quasi-static flat-fading model and thus remain constant over one time slot.
The RIS is installed at a certain altitude \(z_{R}\). Let \((x_{R},y_{R})\) be the 2D location of the RIS, \((x_{a},y_{a},z_{a})\) be the 3D location of the \(a\)-th UAV, and \((x_{u},y_{u})\) be the 2D location of the \(u\)-th UE, respectively. The distances between the \(u\)-th UE and the RIS and between the RIS and the \(a\)-th UAV are denoted by \(d_{u}^{\text{UR}}\) and \(d_{a}^{\text{RA}}\), respectively. Due to their altitude, UAVs can have good connectivity to UEs. However, UEs may occasionally experience deep fades. To overcome this problem and further improve network connectivity, we propose to utilize a RIS to impose link redundancy to the RIS-assisted UAV network. As such, the network becomes more resilient against node failures by providing path diversity and alternative connectivity options between UEs and UAVs. The RIS is equipped with a controller and \(M_{r}\times M_{c}\) passive reflecting units (PRUs) to form a uniform passive array (UPA). Each column of the UPA has \(M_{r}\) PRUs with an equal spacing of \(d_{r}\) meters (m) and each row of the UPA consists of \(M_{c}\) PRUs with an equal spacing of \(d_{c}\) m. These PRUs can add indirect links between UEs and UAVs with adjustable phase shifts. The phase-shift matrix of the RIS is modeled as the diagonal matrix \(\mathbf{\Theta}=diag(e^{j\theta_{1}},\ldots,e^{j\theta_{m}},\ldots,e^{j\theta _{M}})\), where \(\theta_{m}\in[0,2\pi)\), for \(m=\{1,\ldots,M\}\) and \(M=M_{r}\times M_{c}\). The successful communications between the UEs and the RIS are measured using the distance threshold \(D_{o}\), i.e., the \(u\)-th UE is connected to the RIS with distance \(d_{u}^{(R)}\) if \(d_{u}^{(R)}\leq D_{o}\). The communications between the UEs and UAVs/RIS are assumed to occur over different time slots (i.e., time-division multiple access) to avoid interference among the scheduled UEs. Therefore, we assume that only one UE is transmitting in each time slot to reduce interference. Considering the interference among the different scheduled UEs to the RIS and the UAVs is left for future work. Since this paper focuses on the network connectivity from a data link-layer viewpoint, we abstract the physical-layer factors and consider a model that relies only on the distance between the nodes. Therefore, we model only the large-scale fading and ignore the small-scale fading. To quantify the UEs' transmissions to the UAVs and the RIS, we use the signal-to-noise ratio (SNR). For the \(u\)-th UE, the SNR is defined as follows [8] \[\gamma_{u,a}^{(U)}=\frac{d_{u,a}^{-\alpha}p}{N_{0}}, \tag{1}\] where \(d_{u,a}\) is the distance between the \(u\)-th UE and the \(a\)-th UAV, \(p\) is the transmit power of the \(u\)-th UE, which is maintained fixed for all the UEs, \(N_{0}\) is the additive white Gaussian noise (AWGN) variance, and \(\alpha\) is the path loss exponent that depends on the transmission environment. UAVs hover at high altitudes, thus we reasonably assume that they maintain a LoS channel between each other. The path loss between the \(a\)-th and the \(a^{\prime}\)-th UAVs can be expressed as \[\Gamma_{a,a^{\prime}}=20\log\left(\frac{4\pi f_{c}d_{a,a^{\prime}}}{c}\right), \tag{2}\] where \(d_{a,a^{\prime}}\) is the distance between the \(a\)-th UAV and the \(a^{\prime}\)-th UAV, \(f_{c}\) is the carrier frequency, and \(c\) is the speed of light.
The SNR in dB between the \(a\)-th UAV and the \(a^{\prime}\)-th UAV is \(\gamma_{a,a^{\prime}}^{(A)}=10\log P-\Gamma_{a,a^{\prime}}-10\log N_{0}\), where \(P\) is the transmit power of the \(a\)-th UAV, which is maintained fixed for all the UAVs. Note that the SNR of the \(u\)-th UE determines whether it has a successful connection to the corresponding UAV \(a\). In other words, the \(a\)-th UAV is assumed to be within the transmission range of the \(u\)-th UE if \(\gamma_{u,a}^{(U)}\geq\gamma_{0}^{\text{UE}}\), where \(\gamma_{0}^{\text{UE}}\) is the minimum SNR threshold for the communication links between the UEs and the UAVs. Similarly, we assume that UAV \(a\) and UAV \(a^{\prime}\) have a successful connection provided that \(\gamma_{a,a^{\prime}}^{(A)}\geq\gamma_{0}^{\text{UAV}}\), where \(\gamma_{0}^{\text{UAV}}\) is the minimum SNR threshold for the communication links between the UAVs.

Fig. 1: A typical RIS-assisted UAV network with one RIS, 2 UEs, and 4 UAVs.

Since the RIS is deployed at a high altitude, the signal propagation of the UE-to-RIS link is adopted to be a simple yet reasonably accurate LoS channel model [11]. The LoS channel vector between the \(u\)-th UE and the RIS is given by [11] \[\mathbf{h}_{u}^{\text{UR}}=\sqrt{\frac{\beta_{0}}{(d_{u}^{\text{UR}})^{2}}} \tilde{\mathbf{h}}_{u}^{\text{UR}}, \tag{3}\] where \(d_{u}^{\text{UR}}\) is the distance between the \(u\)-th UE and the RIS, \(\beta_{0}\) denotes the path loss at the reference distance \(d_{\text{ref}}=1\) m, and \(\tilde{\mathbf{h}}_{u}^{\text{UR}}\) represents the array response component, which can be denoted by \[\tilde{\mathbf{h}}_{u}^{\text{UR}} = [1,e^{-j\frac{2\pi d_{r}}{\lambda}\phi_{u}^{\text{UR}}\psi_{u}^{ \text{UR}}},\ldots,e^{-j\frac{2\pi d_{r}}{\lambda}(M_{r}-1)\phi_{u}^{\text{ UR}}\psi_{u}^{\text{UR}}}]^{T}\] \[\otimes[1,e^{-j\frac{2\pi d_{c}}{\lambda}\varphi_{u}^{\text{UR}} \psi_{u}^{\text{UR}}},\ldots,e^{-j\frac{2\pi d_{c}}{\lambda}(M_{c}-1)\varphi_{ u}^{\text{UR}}\psi_{u}^{\text{UR}}}]^{T},\] where \(\phi_{u}^{\text{UR}},\varphi_{u}^{\text{UR}}\), and \(\psi_{u}^{\text{UR}}\) are related to the sine and cosine terms of the vertical and horizontal angles-of-arrival (AoAs) at the RIS [11], and given by \(\phi_{u}^{\text{UR}}=\frac{y_{u}-y_{R}}{\sqrt{(x_{u}-x_{R})^{2}+(y_{u}-y_{R}) ^{2}}}\), \(\varphi_{u}^{\text{UR}}=\frac{x_{R}-x_{u}}{\sqrt{(x_{u}-x_{R})^{2}+(y_{u}-y_{R} )^{2}}}\), \(\psi_{u}^{\text{UR}}=\frac{-z_{R}}{d_{u}^{\text{UR}}}\), \(\lambda\) is the wavelength, and \(T\) denotes transpose. On the other hand, the RIS and UAVs are deployed at higher altitudes, thus the reflected signal propagation of the RIS-to-UAV link typically occurs in clear airspace where the obstruction or reflection effects diminish.
The LoS channel vector between the RIS and the \(a\)-th UAV is given by \[\mathbf{h}_{a}^{\text{RA}}=\sqrt{\frac{\beta_{0}}{(d_{a}^{\text{RA}})^{2}}} \tilde{\mathbf{h}}_{a}^{\text{RA}}, \tag{4}\] where \(d_{a}^{\text{RA}}\) is the distance between the RIS and the \(a\)-th UAV, and \(\tilde{\mathbf{h}}_{a}^{\text{RA}}\) represents the array response component, which can be denoted by \[\tilde{\mathbf{h}}_{a}^{\text{RA}} = [1,e^{-j\frac{2\pi d_{r}}{\lambda}\phi_{a}^{\text{RA}}\psi_{a}^{ \text{RA}}},\ldots,e^{-j\frac{2\pi d_{r}}{\lambda}(M_{r}-1)\phi_{a}^{\text{ RA}}\psi_{a}^{\text{RA}}}]^{T}\] \[\otimes[1,e^{-j\frac{2\pi d_{c}}{\lambda}\varphi_{a}^{\text{RA}} \psi_{a}^{\text{RA}}},\ldots,e^{-j\frac{2\pi d_{c}}{\lambda}(M_{c}-1)\varphi_{ a}^{\text{RA}}\psi_{a}^{\text{RA}}}]^{T},\] where \(\phi_{a}^{\text{RA}},\varphi_{a}^{\text{RA}}\), and \(\psi_{a}^{\text{RA}}\) are related to the sine and cosine terms of the vertical and horizontal angles-of-departure (AoDs) from the RIS to the \(a\)-th UAV [11], and respectively given by \(\phi_{a}^{\text{RA}}=\frac{y_{a}-y_{R}}{\sqrt{(x_{R}-x_{a})^{2}+(y_{R}-y_{a})^{2}}}\), \(\varphi_{a}^{\text{RA}}=\frac{x_{R}-x_{a}}{\sqrt{(x_{R}-x_{a})^{2}+(y_{R}-y_{a})^{2}}}\), and \(\psi_{a}^{\text{RA}}=\frac{z_{a}-z_{R}}{d_{a}^{\text{RA}}}\). Given the aforementioned channel models, the concatenated channel for the UE-RIS-UAV link between the \(u\)-th UE and the \(a\)-th UAV through the RIS is given by \(h_{u,a}^{\text{URA}}=(\mathbf{h}_{a}^{\text{RA}})^{H}\mathbf{\Theta}\mathbf{h}_{u}^{\text{UR}}\) [11]. Accordingly, the SNR of the reflected link between the \(u\)-th UE and the \(a\)-th UAV through the RIS can be written as \(\gamma_{u,a}^{(R)}=\frac{p|h_{u,a}^{\text{URA}}|^{2}}{N_{0}}\) [12]. For a successful connection between UE \(u\) and UAV \(a\) via the RIS, \(\gamma_{u,a}^{(R)}\geq\gamma_{0}^{\text{RIS}}\), where \(\gamma_{0}^{\text{RIS}}\) is the minimum SNR threshold for the communication links between the UEs and the UAVs via the RIS. We model the considered RIS-assisted UAV network as an undirected graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},v_{2},\cdots,v_{V}\}\) is the set of nodes (i.e., UAVs and UEs) in the network, and \(\mathcal{E}=\{e_{1},e_{2},\cdots,e_{E}\}\) is the set of all edges. \(V=|\mathcal{U}\cup\mathcal{A}|=|\mathcal{V}|\) and \(|\mathcal{E}|=E\) are the numbers of vertices and edges in the graph, respectively. The graph \(\mathcal{G}\) implies that all the links in the network are bidirectional, i.e., a node \(v\) is able to reach node \(v^{\prime}\), and vice versa. The edge between any two nodes is created based on a typical SNR threshold. ### _Network Connectivity_ For an edge \(e_{k}\), \(1\leq k\leq E\), that connects two nodes \(\{v_{n},v_{m}\}\in\mathcal{V}\), let \(\mathbf{a}_{k}\) be a vector, where the \(n\)-th and \(m\)-th elements in \(\mathbf{a}_{k}\) are given by \(a_{k,n}=1\) and \(a_{k,m}=-1\), respectively, and zero otherwise. The incidence matrix \(\mathbf{A}\in\mathbf{R}^{V\times E}\) of a graph \(\mathcal{G}\) is the matrix with the \(k\)-th column given by \(\mathbf{a}_{k}\).
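The following minimal sketch (Python/NumPy) illustrates how the edge set and the incidence vectors defined above might be assembled from the SNR tests of (1) and (2); all numeric values (node positions, powers, thresholds, and carrier frequency) are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Illustrative parameters (assumed values)
p, P, N0, alpha_pl = 0.1, 1.0, 1e-10, 2.5     # UE/UAV powers, noise, path loss exp.
g_ue_db, g_uav_db = 10.0, 10.0                # SNR thresholds in dB
fc, c = 2.4e9, 3e8                            # carrier frequency, speed of light

ue_pos  = np.array([[0.0, 0.0, 0.0], [50.0, 10.0, 0.0]])
uav_pos = np.array([[20.0, 0.0, 100.0], [60.0, 30.0, 100.0], [90.0, 60.0, 100.0]])
U, A = len(ue_pos), len(uav_pos)

edges = []
for u in range(U):                            # UE-UAV links via eq. (1)
    for a in range(A):
        d = np.linalg.norm(ue_pos[u] - uav_pos[a])
        snr_db = 10 * np.log10(d ** (-alpha_pl) * p / N0)
        if snr_db >= g_ue_db:
            edges.append((u, U + a))
for a in range(A):                            # UAV-UAV links via eq. (2)
    for b in range(a + 1, A):
        d = np.linalg.norm(uav_pos[a] - uav_pos[b])
        snr_db = 10*np.log10(P) - 20*np.log10(4*np.pi*fc*d/c) - 10*np.log10(N0)
        if snr_db >= g_uav_db:
            edges.append((U + a, U + b))

# Incidence matrix: the k-th column is +1/-1 at the edge's two endpoints
Amat = np.zeros((U + A, len(edges)))
for k, (n_, m_) in enumerate(edges):
    Amat[n_, k], Amat[m_, k] = 1.0, -1.0
```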
Hence, in an undirected graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), the Laplacian matrix \(\mathbf{L}\) is a \(V\) by \(V\) matrix, which is defined as follows [9]: \[\mathbf{L}=\mathbf{A}\mathbf{A}^{T}=\sum_{k=1}^{E}\mathbf{a}_{k}\mathbf{a}_{k}^{T}, \tag{5}\] where the entries of \(\mathbf{L}\) are given as follows: \[L(n,m)=\begin{cases}D_{v_{n}}&\text{if }v_{n}=v_{m},\\ -1&\text{if }(v_{n},v_{m})\in\mathcal{E},\\ 0&\text{otherwise},\end{cases} \tag{6}\] where \(n,m\in\{1,2,\ldots,V\}\) are the indices of the nodes, and \(D_{v_{n}}\) is the degree of node \(v_{n}\), which represents the number of all its neighboring nodes. In network connectivity, the algebraic connectivity, also called the Fiedler metric or the second smallest eigenvalue [7], measures how well a graph \(\mathcal{G}\) that has the associated Laplacian matrix \(\mathbf{L}\) is connected. From its name, this metric is usually denoted as \(\lambda_{2}(\mathbf{L})\). The motivation for using \(\lambda_{2}(\mathbf{L})\) as a network connectivity metric comes from the following two main reasons [7]. First, \(\lambda_{2}(\mathbf{L})>0\) if and only if \(\mathcal{G}\) is connected, i.e., \(\mathcal{G}\) consists of a single connected component. It is worth mentioning that when \(\lambda_{2}(\mathbf{L})=0\), the graph is disconnected, in which case at least one of its vertices is unreachable from the other vertices in the graph. Second, \(\lambda_{2}(\mathbf{L})\) is monotone increasing in the edge set, i.e., if \(\mathcal{G}_{1}=(V,E_{1})\) and \(\mathcal{G}_{2}=(V,E_{2})\) and \(E_{1}\subseteq E_{2}\), then \(\lambda_{2}(\mathbf{L}_{2})\geq\lambda_{2}(\mathbf{L}_{1})\). This implies that \(\lambda_{2}(\mathbf{L})\) qualitatively represents the connectivity of a graph in the sense that the larger \(\lambda_{2}(\mathbf{L})\) is, the more connected the graph will be. To this end, since \(\lambda_{2}(\mathbf{L})\) is a good measure of how connected the graph is, the more edges that exist between the UEs and the UAVs, the longer the network can live without being disconnected due to node failures. Thus, the network becomes more resilient. Based on that, we consider \(\lambda_{2}(\mathbf{L})\) as a quantitative measure of the network resiliency in this paper, similar to [9, 13]. ## III Problem Formulation Given a RIS-assisted UAV network represented by the graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), deploying the RIS in the network may result in connecting multiple UEs to multiple UAVs that were not connected together. It may also result in adding new alternative options to the UEs if their scheduled UAVs have failed. In this context, we leverage the RIS to add more links to the network, and by adjusting its phase shifts, the RIS can smartly beamform the signals of the UEs to suitable UAVs to maximize the network connectivity. With the RIS deployment, a new graph \(\mathcal{G}^{\prime}(\mathcal{V},\mathcal{E}^{\prime})\) is constructed with the same number of \(V\) nodes and a larger set of edges denoted by \(\mathcal{E}^{\prime}\) with \(\mathcal{E}^{\prime}=\mathcal{E}\cup e_{u,a}^{R}\), where \(e_{u,a}^{R}\) is the new edge connecting the \(u\)-th UE to the \(a\)-th UAV through the RIS and \(\mathcal{E}\subseteq\mathcal{E}^{\prime}\). Note that the effect of deploying the RIS appears only in the edge set \(\mathcal{E}\), and not in the node set \(\mathcal{V}\) [8, 9, 10].
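A small numerical illustration of these properties (Python/NumPy; the five-node toy graph is an assumption for illustration) shows that adding a single extra link, such as one reflected through the RIS, can only increase \(\lambda_{2}\):

```python
import numpy as np

def laplacian(V, edges):
    """Laplacian of an undirected graph, following eq. (6)."""
    L = np.zeros((V, V))
    for n, m in edges:
        L[n, n] += 1; L[m, m] += 1
        L[n, m] -= 1; L[m, n] -= 1
    return L

def fiedler(L):
    """Algebraic connectivity: second smallest eigenvalue of L."""
    return np.sort(np.linalg.eigvalsh(L))[1]

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]        # a 5-node path graph
L = laplacian(5, edges)
L_new = laplacian(5, edges + [(0, 4)])          # one extra (reflected) link
print(fiedler(L), fiedler(L_new))               # approx. 0.38 -> 1.38
```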
By adding such new links to the network, the gain can be quantified through \(\lambda_{2}(\mathbf{L}^{\prime})\geq\lambda_{2}(\mathbf{L})\), where \(\mathbf{L}^{\prime}\) is the Laplacian matrix of the resulting graph \(\mathcal{G}^{\prime}(\mathcal{V},\mathcal{E}^{\prime})\). We consider that in each time slot only one UE can transmit to the RIS, which directs the UE's signal to only one UAV. In what follows, we consider two different cases of network configurations to formulate the optimization problem of maximizing \(\lambda_{2}(\mathbf{L}^{\prime})\) in each time slot.

### Case 1: One UE and One RIS

Let \(\mathcal{A}_{0}\) be the set of reachable UAVs that have indirect communication links from the UE through the RIS, i.e., \(\mathcal{A}_{0}=\{a\in\mathcal{A}\backslash\mathcal{A}_{UE}\mid\gamma_{a}^{(R)}\geq\gamma_{0}^{(\text{RIS})}\}\), where \(\mathcal{A}_{UE}\) is the set of UAVs that have direct links to the UE. Our aim is to provide an alternative link that connects the UE to a single UAV in the set \(\mathcal{A}_{0}\). As such, the UE does not miss the communication if its scheduled UAV has failed. Now, let \(y_{a}\) be a binary variable that is equal to \(1\) if the RIS is connected to the \(a\)-th UAV (\(a\in\mathcal{A}_{0}\)), and zero otherwise. The considered optimization problem in this case is formulated as follows: \[\max_{y_{a},\theta_{m}}\lambda_{2}(\mathbf{L}^{\prime}) \tag{7a}\] \[\mathrm{subject\ to\ }\sum_{a\in\mathcal{A}_{0}}y_{a}=1,\] (7b) \[\theta_{m}\in[0,2\pi) \qquad m=\{1,\ldots,M\},\] (7c) \[y_{a}\in\{0,1\} \qquad\forall a\in\mathcal{A}_{0}, \tag{7d}\] where constraint (7b) assures that the RIS directs the signal of the UE to only one UAV, and constraint (7c) accounts for the RIS phase shift optimization.

### Case 2: Multiple UEs and One RIS

Unlike case 1, which considers one UE, case 2 adds an optimization variable that selects the \(u\)-th UE that transmits in each time slot. Let \(\mathcal{A}_{0}^{u}\) be the set of reachable UAVs that have indirect communication links from the \(u\)-th UE through the RIS, i.e., \(\mathcal{A}_{0}^{u}=\{a\in\mathcal{A}\backslash\mathcal{A}_{u}\mid\gamma_{u,a}^{(R)}\geq\gamma_{0}^{(\text{RIS})}\}\), where \(\mathcal{A}_{u}\) is the set of UAVs that have direct links to the \(u\)-th UE. Let \(x_{u}\) be a binary variable that is equal to \(1\) if the \(u\)-th UE is connected to the RIS, and zero otherwise. Now, let \(y_{a}^{u}\) be a binary variable that is equal to \(1\) if the RIS is connected to the \(a\)-th UAV when the \(u\)-th UE is selected to transmit, and zero otherwise. The considered optimization problem in this case is formulated as follows: \[\max_{x_{u},y_{a}^{u},\theta_{m}}\lambda_{2}(\mathbf{L}^{\prime}) \tag{8a}\] \[\mathrm{subject\ to\ }\sum_{u\in\mathcal{U}}x_{u}=1,\] (8b) \[\sum_{a\in\mathcal{A}_{0}^{u}}y_{a}^{u}=1,\] (8c) \[\theta_{m}\in[0,2\pi) \qquad m=\{1,\ldots,M\},\] (8d) \[x_{u}\in\{0,1\},\ y_{a}^{u}\in\{0,1\} \qquad\forall a\in\mathcal{A}_{0}^{u}, \tag{8e}\] where constraints (8b) and (8c) assure that only one UE transmits to the RIS and that the signal of that UE is reflected from the RIS to only one UAV, while constraint (8d) accounts for the RIS phase shift optimization.

## IV Proposed Solutions

It is computationally affordable to solve (7) optimally since it considers only one UE; however, solving (8) optimally for the case of multiple UEs is computationally prohibitive. Therefore, this section first proposes to solve (7) optimally using a linear search over all the possible UAV nodes.
Then, it formulates (8) as an SDP optimization problem that can be solved efficiently in polynomial time. The two proposed solutions are explained in subsections IV-A and IV-B, respectively.

### _Solution of Case 1_

To solve (7) optimally, we consider a linear search scheme that computes \(\lambda_{2}(\mathbf{L}^{\prime})\) for each possible UAV node \(a\in\mathcal{A}_{0}\), and then calculates the corresponding phase shifts of the RIS towards that UAV node. In particular, the phase shift of the \(m\)-th RIS unit corresponding to the \(a\)-th UAV is calculated as follows [11]: \[\theta_{m}=\pi\frac{f_{c}}{c}\bigg\{d_{r}(m_{r}-1)\psi_{a}^{\text{RA}}\phi_{a}^{\text{RA}}+d_{c}(m_{c}-1)\psi_{a}^{\text{RA}}\varphi_{a}^{\text{RA}}\\ +d_{r}(m_{r}-1)\psi^{\text{UR}}\phi^{\text{UR}}+d_{c}(m_{c}-1)\psi^{\text{UR}}\varphi^{\text{UR}}\bigg\}. \tag{9}\] We argue that the computational complexity of the proposed solution of this case is affordable since it needs to compute only \(|\mathcal{A}_{0}|\) Laplacian matrices. The steps of the proposed method are summarized in Algorithm 1, and a minimal sketch follows.
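The sketch below mirrors this linear search, reusing the `lambda2` helper from the earlier sketch; the function name `best_ris_link` and the inputs are illustrative, not from the paper.

```python
import numpy as np

# Sketch of the Case-1 linear search: for each reachable UAV a in A_0,
# tentatively add the RIS edge (UE, a) and keep the one maximizing lambda_2.
# Reuses lambda2() from the earlier sketch.

def best_ris_link(L, ue, candidate_uavs):
    best_a, best_l2 = None, lambda2(L)
    for a in candidate_uavs:
        a_k = np.zeros(L.shape[0])
        a_k[ue], a_k[a] = 1.0, -1.0
        l2 = lambda2(L + np.outer(a_k, a_k))   # L' = L + a_k a_k^T
        if l2 > best_l2:
            best_a, best_l2 = a, l2
    return best_a, best_l2   # then steer the RIS toward UAV best_a via (9)
```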
### _Solution of Case 2_

The exhaustive search scheme to solve (8) for multiple UEs can be done by computing \(\lambda_{2}(\mathbf{L}^{\prime})\) for a total of \(\sum_{u=1}^{U}|\mathcal{A}_{0}^{u}|\) Laplacian matrices, which requires a huge amount of computation for large \(U\). For a graph \(\mathcal{G}^{\prime}(\mathcal{V},\mathcal{E}^{\prime})\), the time complexity of the exhaustive search is high for large network settings, as it runs in \(\mathcal{O}(4E^{\prime}V^{3}/3)\) to compute \(\lambda_{2}(\mathbf{L}^{\prime})\) [14]. To overcome such computational intractability, we instead propose an efficient method to solve (8), which finds a feasible UE-RIS-UAV association that maximizes \(\lambda_{2}(\mathbf{L}^{\prime})\) using SDP solvers [15]. We add a link connecting the \(u\)-th UE to the \(a\)-th UAV through the RIS if both \(x_{u}\) and \(y_{a}^{u}\) in (8) are \(1\). Let \(\mathbf{z}\) be a vector representing the UE-RIS-UAV candidate associations, i.e., the pairs \((u,a)\), \(u\in\mathcal{U},a\in\mathcal{A}\), for which \(x_{u}=1\) and \(y_{a}^{u}=1\) is possible. Therefore, the problem in (8) can be seen as having a set of \(|\mathbf{z}|\) UE-RIS-UAV candidate associations, among which we want to select the optimum one. This optimization problem can be formulated as \[\max_{\mathbf{z}}\lambda_{2}(\mathbf{L}^{\prime}(\mathbf{z})) \tag{10}\] \[\mathrm{subject\ to\ }\mathbf{1}^{T}\mathbf{z}=1,\ \mathbf{z}\in\{0,1\}^{|\mathbf{z}|},\] where \(\mathbf{1}\in\mathbf{R}^{|\mathbf{z}|}\) is the all-ones vector and \[\mathbf{L}^{\prime}(\mathbf{z})=\mathbf{L}+\sum_{l=1}^{|\mathbf{z}|}z_{l}\mathbf{a}_{l}\mathbf{a}_{l}^{T}, \tag{11}\] where \(\mathbf{a}_{l}\) is the incidence vector resulting from adding link \(l\) to the original graph \(\mathcal{G}\) and \(\mathbf{L}\) is the Laplacian matrix of the original graph \(\mathcal{G}\). Clearly, the dimension of \(\mathbf{L}\) and \(\mathbf{L}^{\prime}(\mathbf{z})\) is \(V\times V\). The optimization vector in (10) is the vector \(\mathbf{z}\); its \(l\)-th element, denoted by \(z_{l}\), is either \(1\) or \(0\), corresponding to whether this UE-RIS-UAV association should be chosen or not, respectively.

The combinatorial optimization problem in (10) is an NP-hard problem with high complexity. Therefore, we relax the constraint on the entries of \(\mathbf{z}\) and allow them to take any value in the interval \([0,1]\). Specifically, we relax the Boolean constraint \(\mathbf{z}\in\{0,1\}^{|\mathbf{z}|}\) to the linear constraint \(\mathbf{z}\in[0,1]^{|\mathbf{z}|}\), so that problem (10) can be represented as \[\max_{\mathbf{z}}\lambda_{2}(\mathbf{L}^{\prime}(\mathbf{z})) \tag{12}\] \[\mathrm{subject\ to\ }\mathbf{1}^{T}\mathbf{z}=1,\ 0\leq\mathbf{z}\leq 1.\] We emphasize that the optimal solution of the relaxed problem in (12) is an upper bound for the optimal solution of the original problem (10), since it has a larger feasible set. In [10], it was shown that \(\lambda_{2}(\mathbf{L}^{\prime}(\mathbf{z}))\) in (12) is the point-wise infimum of a family of linear functions of \(\mathbf{z}\), which is a concave function in \(\mathbf{z}\). In addition, the relaxed constraints are linear in \(\mathbf{z}\). Therefore, the optimization problem in (12) is a convex optimization problem [10], and it is equivalent to the following SDP optimization problem [16]: \[\max_{\mathbf{z},q}q \tag{13}\] \[\mathrm{subject\ to\ }q(\mathbf{I}-\frac{1}{V}\mathbf{1}\mathbf{1}^{T})\preceq\mathbf{L}^{\prime}(\mathbf{z}),\ \mathbf{1}^{T}\mathbf{z}=1,\ 0\leq\mathbf{z}\leq 1,\] where \(\mathbf{I}\in\mathbf{R}^{V\times V}\) is the identity matrix, \(\mathbf{1}\in\mathbf{R}^{V}\) here is the all-ones vector, and \(\mathbf{F}\preceq\mathbf{L}\) denotes that \(\mathbf{L}-\mathbf{F}\) is a positive semi-definite matrix. By solving the SDP optimization problem in (13) efficiently using any standard SDP solver, such as the CVX software package [15], the optimization variable \(\mathbf{z}\) is obtained. Since the entries of the output vector \(\mathbf{z}\) are continuous, we round the maximum entry to \(1\) while the others are rounded to zero. For the given association vector \(\mathbf{z}\), we optimize the phase shifts of the RIS as in (9) to direct the signal of the selected UE to the associated UAV.
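A compact way to prototype the relaxed problem (13) is with CVXPY (used here in place of the MATLAB CVX package cited in the text); the base graph, candidate incidence vectors, and solver defaults below are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Sketch of the SDP relaxation (13) plus the rounding step from the text.
# Toy base graph: nodes 0-4 form a ring, node 5 is an isolated UE;
# each column of A_cand is the incidence vector a_l of one candidate link.

V = 6
L = np.zeros((V, V))
for n, m in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    L[n, n] += 1; L[m, m] += 1; L[n, m] -= 1; L[m, n] -= 1

A_cand = np.zeros((V, 3))
for l, a in enumerate([2, 3, 4]):               # candidate UE-RIS-UAV links (5 -> a)
    A_cand[5, l], A_cand[a, l] = 1.0, -1.0

z = cp.Variable(A_cand.shape[1], nonneg=True)
q = cp.Variable()
L_prime = L + sum(z[l] * np.outer(A_cand[:, l], A_cand[:, l])
                  for l in range(A_cand.shape[1]))
J = np.eye(V) - np.ones((V, V)) / V             # I - (1/V) 1 1^T
prob = cp.Problem(cp.Maximize(q),
                  [L_prime - q * J >> 0, cp.sum(z) == 1, z <= 1])
prob.solve()                                     # any SDP-capable solver, e.g. SCS

z_round = np.zeros_like(z.value)                 # round the largest entry to one
z_round[np.argmax(z.value)] = 1.0
```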
## V Numerical Results

For the numerical evaluations, we use the same RIS configurations and UAV communication settings that were used in [8] and [11], respectively. We consider a RIS-assisted UAV system in an area of \(150\,\mathrm{m}\times 150\,\mathrm{m}\), where the RIS has a fixed location and the UEs and the UAVs are distributed randomly. The considered simulation parameters are as follows: the RIS is located at \((35\,\mathrm{m},50\,\mathrm{m})\) with an altitude of \(20\) m, \(M=100\), \(d_{r}=5\) cm, \(d_{c}=5\) cm, \(\beta_{0}=10^{-6}\), \(N_{0}=-130\) dBm, the altitude of the UAVs is \(50\) m, \(f_{c}=3\times 10^{9}\) Hz, \(c=3\times 10^{8}\) m/s, \(\alpha=4\), \(p=1\) watt, \(P=5\) watt, \(\gamma_{0}^{(\text{U})}=85\) dB, and \(\gamma_{0}^{(\text{A})}=80\) dB. Unless specified otherwise, \(A=7\), \(U=10\), and \(\gamma_{0}^{(\text{RIS})}=30\) dB. For the sake of numerical comparison, the proposed schemes are compared with the following schemes: 1) the original benchmark scheme without RIS deployment, and 2) a random scheme that selects a random link to connect the UE to one of the UAVs through the RIS. For completeness of our work, we also compare the proposed SDP scheme of case \(2\) with the optimal scheme, which is considered as a performance upper bound since it searches over all the possible links between the UEs and the UAVs. In the simulations, the network connectivity is calculated over \(500\) iterations, and the average value is presented. In each iteration, we change the locations of the UEs and the UAVs.

Fig. 2: The average network connectivity \(\lambda_{2}(\mathbf{L}^{\prime})\) of case 1 versus the number of UAVs \(A\).

Fig. 3: The average network connectivity \(\lambda_{2}(\mathbf{L}^{\prime})\) of case 2 versus the number of UAVs \(A\).

In Figs. 2 and 3, we show the average network connectivity versus the number of UAVs \(A\) for both cases. For a small number of UAVs in Figs. 2 and 3, the proposed optimal and SDP schemes offer a slight performance gain in terms of network connectivity compared to the original and the random schemes. This is because our proposed schemes have only a few link options, i.e., the RIS can direct the signal of the UE to only a small number of UAVs. However, when the number of UAVs increases, the proposed schemes smartly select an effective UE-RIS-UAV link that significantly maximizes the network connectivity. It is noted that \(\lambda_{2}(\mathbf{L}^{\prime})\) of all schemes increases with the number of UAVs, since adding more connected nodes to the network increases the number of edges, which increases the network connectivity. It is also noted that the values of \(\lambda_{2}(\mathbf{L}^{\prime})\) in Fig. 3 are smaller than those in Fig. 2 for all the UAV configurations. This is because the number of unconnected nodes representing the UEs in Fig. 3 of case \(2\), i.e., \(U=10\), is larger than that of Fig. 2 of case \(1\), which considers a single UE. This makes the network of case \(2\) less connected (i.e., more UE nodes with no links between them), and thus the network connectivity in Fig. 3 is lower. When \(A>8\) in Fig. 3, the average network connectivity of all the schemes increases significantly with \(A\), following the same behaviour as Fig. 2 once \(A>U\).

Fig. 4: The average network connectivity \(\lambda_{2}(\mathbf{L}^{\prime})\) versus the number of UEs \(U\).

In Fig. 4, we plot the network connectivity versus the number of UEs \(U\) for case \(2\). From Fig. 4, we can see that the proposed SDP scheme outperforms the original and the random schemes in terms of network connectivity. Notably, the network connectivity of all the schemes decreases as the number of UEs increases, since adding more unconnected UEs may result in a sparse graph with low network connectivity.

Fig. 5: The average network connectivity \(\lambda_{2}(\mathbf{L}^{\prime})\) versus the SNR threshold \(\gamma_{0}^{(\text{RIS})}\) in dB.

In Fig. 5, we show the impact of the SNR threshold \(\gamma_{0}^{(\text{RIS})}\) on the network connectivity for case \(2\). For a small SNR threshold, all the links between the UEs and the UAVs through the RIS can satisfy this SNR threshold; thus, there are many alternative links between the potential UE and the UAVs from which to select in order to maximize the network connectivity. On the other hand, for a high RIS SNR threshold, only a few UE-RIS-UAV links can satisfy such a high threshold; thus, the network connectivity of all the schemes is degraded and becomes close to that of the original scheme, which is not affected by changing \(\gamma_{0}^{(\text{RIS})}\). It is worth remarking that while the random scheme adds a random link to the network, the original scheme does not add a link. The proposed solutions balance between the aforementioned aspects by judiciously selecting an effective link, between a UE and a UAV, that maximizes the network connectivity. This utilizes the benefits of the cooperation between an appropriate scheduling algorithm design and the RIS phase shift configuration. Compared to the optimal scheme, our proposed SDP scheme shows a certain degradation in network connectivity, which comes as the price of its polynomial computational complexity, as compared to the high complexity of the optimal scheme, which is in the order of \(\mathcal{O}(4E^{\prime}V^{3}/3)\) [14].
## VI Conclusion

In this paper, we proposed a novel joint UE-UAV scheduling and RIS phase shift optimization for achieving connected and resilient RIS-assisted UAV networks. We leveraged the RIS to add more links to the network by opportunistically reflecting the signal of the UE to the appropriate UAV such that the network connectivity is maximized. The problem of maximizing the network connectivity was formulated for two cases of a single UE and multiple UEs, and optimal and efficient SDP solutions were proposed for the two problem cases, respectively. Simulation results showed that both of the proposed schemes result in improved network connectivity as compared to the existing solutions. Such promising performance gain can be further improved for the case of multiple RISs, which will be pursued in our future work.
2310.11754
Study of charm hadronization and in-medium modification at the Electron-ion Collider in China
Charm quark production and its hadronization in ep and eA collisions at the future Electron-Ion Collider in China (EicC) will help us understand the quark/gluon fragmentation processes and the hadronization mechanisms in the nuclear medium, especially within a poorly constrained kinematic region ($x<0.1$). In this paper, we report a study on the production of charmed hadrons, $D^0$ and $\Lambda_c^+$, reconstructed with a dedicated GEANT4 simulation of vertex$\,\&\,$tracking detectors designed for EicC. The $\Lambda_c^+$/$D^0$ ratios as functions of multiplicity and $p_T$, as well as the $D^0$ double ratio are presented with projected statistical precision.
Senjie Zhu, Xiao Huang, Lei Xia, Aiqiang Guo, Yutie Liang, Yifei Zhang, Yuxiang Zhao
2023-10-18T07:29:53Z
http://arxiv.org/abs/2310.11754v1
# Study of charm hadronization and in-medium modification at the Electron-ion Collider in China

###### Abstract

Charm quark production and its hadronization in \(ep\) and \(eA\) collisions at the future Electron-Ion Collider in China (EicC) will help us understand the quark/gluon fragmentation processes and the hadronization mechanisms in the nuclear medium, especially within a poorly constrained kinematic region (\(x<0.1\)). In this paper, we report a study on the production of charmed hadrons, \(D^{0}\) and \(\Lambda_{c}^{+}\), reconstructed with a dedicated geant4 simulation of vertex & tracking detectors designed for EicC. The \(\Lambda_{c}^{+}/D^{0}\) ratios as functions of multiplicity and \(p_{T}\), as well as the \(D^{0}\) double ratio, are presented with projected statistical precision.

## I Introduction

Exploring the most fundamental building blocks of the universe is the paramount objective of modern physics. Over the past five decades, we have come to understand that matter, in its essence, is composed of nuclei and nucleons. These, in turn, are made up of even more basic components known as quarks, which are confined in a colorless bound state through the exchange of gluons. This interaction among quarks and gluons is governed by the theory of the strong force, known as Quantum Chromodynamics (QCD). However, up to now, the constituent interactions and the dynamical distributions of quarks and gluons inside nucleons, in particular in the kinematic regime dominated by sea quarks and gluons, are still unclear or poorly constrained by existing experiments [1; 2]. Next-generation experimental facilities based on electron-ion collisions have been proposed for the quantitative study of matter in this new regime, such as the Electron-Ion Collider at BNL (EIC-BNL) [3; 4] and the Electron-ion Collider in China (EicC) [5]. EicC is proposed to operate at a center-of-mass (c.m.) energy ranging from 15 GeV to 20 GeV, with a luminosity of approximately 2.0 \(\times\) 10\({}^{33}\) cm\({}^{-2}\)s\({}^{-1}\) [5]. This would enable EicC to explore the kinematic region dominated by sea quarks, effectively bridging the gap between the EIC-BNL and JLab 12 GeV experiments. The primary objective of EicC is to delve into the partonic structure and three-dimensional tomography of nucleons and nuclei, the cold nuclear matter effects, as well as the origin of the proton mass. This ambitious endeavor aims to significantly advance our understanding of these fundamental aspects of nuclear physics.

Understanding the interactions of partons with nuclear matter, as well as their fragmentation [6] or combination [7] to form hadrons, a process known as hadronization, is a fundamental question that future EicC experiments aim to answer. Heavy quarks, due to their large masses, are predominantly produced in the initial hard scatterings of collisions and undergo the complete evolution of the nuclear medium system. This makes them an ideal probe for measuring nuclear medium effects and heavy quark hadronization in high-energy heavy-ion collisions [8; 9; 10]. For example, the enhancement of \(\Lambda_{c}^{+}/D^{0}\) ratios in heavy-ion collisions compared with elementary particle collisions has been measured by STAR [9] and ALICE [11; 12], revealing the hadronization mechanism of the charm quark with light quarks in hot [7; 13] and dense media [14]. However, in these collisions, it is a complex task to separate the effect of the cold nuclear medium (CNM) from the dominant medium response caused by the hot quark-gluon plasma (QGP).
This is where electron-ion collisions come into play, offering an ideal platform to study the CNM effects [15] in a more transparent system, in which it is believed that no QGP is formed. Moreover, two theories with different time scales, parton energy loss and the hadron absorption model [16], can both successfully describe the suppression of light hadrons in electron-ion collisions. The measurement of heavy quark production presents a novel opportunity to reveal the mechanisms of hadronization. To address these inquiries, it is essential to investigate charm hadronization and the production of open charmed hadrons in future EicC experiments.

In this paper, we perform a comprehensive simulation study with a geant4 [17] detector configuration for open charm hadron production. The report is organized as follows. Section II introduces the simulation setup and process, Section III presents the simulation results, and a summary is given in Section IV.

## II Simulation setup

This study employs the pythia event generator, as referenced in [18]. Two distinct versions of this generator, namely pythiaeRHIC (pythia 6.4) and pythia 8.3, are utilized in our analysis. The configuration for pythia 6.4 is detailed extensively in Ref. [19]. The physical processes, including vector-meson diffractive and resolved processes, semihard QCD \(2\to 2\) scattering, neutral boson scattering off heavy quarks within the proton, and photon-gluon fusion, are turned on. Several alternative hadronization models in pythia 8.3 are used for the charm hadronization studies, e.g., the implemented color reconnection models. The kinematic variables used are listed in Table 1.

The kinematic coverage of the collision of electrons (\(3.5\,\mathrm{GeV}\)) with protons (\(20\,\mathrm{GeV}\)) is shown in Fig. 1 (top panel), which presents the \(\log_{10}(x_{B})-\log_{10}(Q^{2})\) distribution of charmed events per \(1\,\mathrm{fb}^{-1}\) of integrated luminosity. As shown in the \(x_{B}-Q^{2}\) distribution, a step appears in the \(Q^{2}\) distribution at \(\log_{10}(Q^{2})=0.4\). The reason is that the subprocess cross section (\(\gamma^{*}q\to q\)) in pythia 6 is deliberately set to zero when the photon virtuality \(Q^{2}\) approaches zero; this is done to prevent double-counting with real-photon physics processes [18]. With the above beam setting and the DIS requirement \(Q^{2}>1\,\mathrm{GeV}^{2}\), the \(x_{B}\) distributions are shown in the bottom panel of Fig. 1, including inclusive DIS events and charmed events. Charmed events account for \(1.4\%\) of the inclusive DIS events.

The two-dimensional momentum-\(\theta\) phase-space distribution of \(D^{0}\) is shown in Fig. 2 (first panel). Because the beam protons carry higher momentum than the beam electrons, the \(D^{0}/\bar{D}^{0}\) distribution is skewed towards the direction of the proton beam. Consequently, as illustrated in the second panel of Fig. 2, the pions from \(D^{0}\to\pi K\) decays are boosted, causing them to be produced preferentially in the forward direction of the proton beam. As illustrated in Fig. 2 for \(\Lambda_{c}^{\pm}\), the distributions of \(\Lambda_{c}^{\pm}\) and the pions from its decay are even more strongly skewed towards the direction of the proton beam. There is a region near \(\theta=0\) where the number of \(\Lambda_{c}^{\pm}\) is significantly larger than in other areas, and the \(\Lambda_{c}^{\pm}\) in this region have much higher energy. Furthermore, \(\Lambda_{c}^{+}\) is much more abundant than \(\Lambda_{c}^{-}\) in this region. This phenomenon is caused by the formation of beam remnants [20]: a charm quark produced via the hard scattering and \(u\), \(d\) quarks from the beam proton form a \(\Lambda_{c}^{+}\) together. Because the \(u\), \(d\) quarks from the proton carry a large fraction of the proton momentum, such beam-remnant \(\Lambda_{c}^{+}\) are very energetic.
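For reference, the standard DIS variables used above (and collected in Table 1) follow from the beam and scattered-electron four-momenta. The sketch below uses the 3.5 GeV \(\times\) 20 GeV beam setting from the text; the scattered-electron momentum is an arbitrary illustrative value, not a simulated event.

```python
import numpy as np

# Sketch: standard DIS kinematics from four-momenta (E, px, py, pz).
# Beams follow the text (3.5 GeV e- on 20 GeV p); k_prime is illustrative.

def mdot(a, b):                          # Minkowski dot with (+,-,-,-) metric
    return a[0] * b[0] - np.dot(a[1:], b[1:])

Mp = 0.938                               # proton mass in GeV
k = np.array([3.5, 0.0, 0.0, -3.5])      # incoming electron (massless approximation)
P = np.array([np.hypot(20.0, Mp), 0.0, 0.0, 20.0])   # incoming proton
k_prime = np.array([1.5, 1.0, 0.0, -1.1])            # scattered electron (assumed)

q = k - k_prime                          # virtual photon four-momentum
Q2 = -mdot(q, q)                         # photon virtuality
x_B = Q2 / (2.0 * mdot(P, q))            # Bjorken x
y = mdot(P, q) / mdot(P, k)              # inelasticity
nu = mdot(P, q) / Mp                     # energy transfer in the proton rest frame
W2 = mdot(P + q, P + q)                  # squared invariant mass of the hadronic system
print(f"Q2={Q2:.2f} GeV2  xB={x_B:.4f}  y={y:.2f}  nu={nu:.1f} GeV  W2={W2:.1f} GeV2")
```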
The specific design and layout of the EicC detectors have been detailed in Ref. [21], including a tracking and vertex detector system. This report primarily focuses on the tracking and vertex detectors. The detector acceptance is \(-3<\eta<3\) (\(5.7^{\circ}\sim 174.3^{\circ}\)). In Fig. 2 (panels (b) and (c)), the acceptance region is marked by two black lines. In Ref. [21], the performances have been obtained through geant4 simulation for different particle species (\(e,\mu,\pi,K,p\)) under a \(1.5\,\mathrm{T}\) magnetic field configuration. The performances for \(\pi\) are shown as an example in Fig. 3. In this study, the distances of closest approach (\(DCA\)) are defined as the closest distance between the reconstructed track and the primary vertex in the \(r\phi\) plane (\(DCA_{r\phi}\)) and along the \(z\)-axis (\(DCA_{z}\)). The tracking efficiency, the momentum resolution, and the \(DCA_{z}\) and \(DCA_{r\phi}\) resolutions are shown in panels (a), (b), (c), and (d), respectively. In the absence of a particle identification (PID) detector in the simulation, a \(3\sigma\) separation power for PID is assumed within the prospective PID detector acceptance listed in Table 2; it is applied as a hard momentum cut-off as a function of \(\eta\).

All the performance parameters mentioned above have been used to conduct a fast-simulation projection. In this simulation, the track information generated by pythia is adjusted, or "smeared," according to a Gaussian function to mimic the limitations of the detection capability. The number of tracks within the acceptance (\(-3<\eta<3\)) and passing the tracking-efficiency filters is used to determine the primary vertex resolution, which is also smeared in the pythia events. After smearing, different topological requirements are used to separate the signal and background. \(D^{0}\) is reconstructed from the channel \(D^{0}\rightarrow\pi^{+}K^{-}\) with a branching ratio (\(\mathcal{B}\)) of 3.83%, and \(\Lambda^{+}_{c}\) is reconstructed from the channel \(\Lambda^{+}_{c}\rightarrow p\pi^{+}K^{-}\) (\(\mathcal{B}=2.96\%\) in pythia 6.4 and \(\mathcal{B}=3.4\%\) in pythia 8.3). The \(\mathcal{B}\) difference for \(\Lambda^{+}_{c}\) between pythia 6.4 and pythia 8.3 is caused by a channel missing in pythia 6.4 (\(\Lambda^{+}_{c}\rightarrow\Lambda\pi^{+},\Lambda\to pK^{-}\)). In the following part of this report, we use \(D^{0}\) to represent both \(D^{0}\) and \(\bar{D^{0}}\), and \(\Lambda^{+}_{c}\) to represent both \(\Lambda^{+}_{c}\) and \(\Lambda^{-}_{c}\).

In Fig. 4, we show the topological cut diagrams for \(D^{0}\) (a) and for \(\Lambda^{+}_{c}\) (b). For \(D^{0}\), three topological requirements are applied, namely \(DCA_{\pi K}\), \(DecayL^{D^{0}}_{XY}\) and \(\cos\theta_{XY}\), which are defined as follows: * \(DCA_{\pi K}\) is the closest distance between the two daughter tracks. * \(DecayL^{D^{0}}_{XY}\) is the distance between the decay vertex and the reconstructed primary vertex (in the \(xy\) plane). Once \(DCA_{\pi K}\) is determined, the two points realizing \(DCA_{\pi K}\), one on each of the two tracks, can be found; the decay vertex of \(D^{0}\) is then defined as the midpoint between these two points.
* \(\cos\theta_{XY}\) is the cosine of the angle between the \(D^{0}\) decay-length vector and the \(D^{0}\) momentum (in the \(xy\) plane). The reason for defining both \(\cos\theta_{XY}\) and \(DecayL^{D^{0}}_{XY}\) in the \(xy\) plane is that the spatial resolution in the \(xy\) plane is much better than that along \(z\), especially in the endcap region. However, if the projections of two non-parallel lines in the \(xy\) plane cross, \(DCA_{\pi K}\) would always be zero in the \(xy\) plane; therefore, \(DCA_{\pi K}\) is defined in three dimensions instead of the \(xy\) plane. For \(\Lambda^{+}_{c}\), \(DCA_{\pi K}\) is replaced by \(DCA^{\rm max}_{\rm daughters}\), the maximum of the three \(DCA_{\rm daughters}\) values, where \(DCA_{\rm daughters}\) is the DCA between two of the three daughters. The decay vertex is the center of the three decay vertices defined by each daughter pair. All three topological cut distributions are shown in Fig. 5 for the signal (blue dashed line) and background (red solid line).

\begin{table} \begin{tabular}{c|c c c} \(\eta\) & \([-3,-1)\) & \([-1,1)\) & \((1,3]\) \\ \hline \(p_{max}\,[\)GeV/\(c]\) & 4 & 6 & 15 \\ \end{tabular} \end{table} Table 2: PID acceptance for hadrons in different \(\eta\) regions.

Figure 2: The momentum-\(\theta\) distributions of the particles produced by pythia. The particle types are labeled in each panel for \(D^{0}/\bar{D^{0}}\), \(\Lambda^{\pm}_{c}\), and the pions from their decay.

The \(\pi K\) invariant mass distributions in the \(D^{0}\) mass region are fitted with a Gaussian function for the signal and a linear function for the background to obtain the \(D^{0}\) signal yield. The significance is defined as \(S/\sqrt{S+B}\), where \(S\) and \(B\) stand for the counts of \(D^{0}\) signal and background, respectively. All the criteria are optimized by iteratively maximizing the significance, as sketched below. Fig. 6 shows the \(D^{0}\) significance as a function of all three topological requirements. The optimal topological criteria are: (i) \(DCA_{\pi K}<110\,\mu m\), (ii) \(\cos\theta_{XY}>-0.75\). As seen in Fig. 2 (first panel), the \(p_{T}\) of most \(D^{0}\) is lower than 1 GeV/\(c\), and \(D^{0}\) with lower \(p_{T}\) tend to have a smaller \(DecayL_{XY}^{D^{0}}\), so the \(DecayL_{XY}^{D^{0}}\) distribution of the signal is similar to that of the background. Therefore, the significance is insensitive to \(DecayL_{XY}^{D^{0}}\), as shown in Fig. 6.

The \(\pi K\) invariant mass distributions with different detector performance configurations are shown in Fig. 7. Three configurations are included: **no PID**, **with PID**, and **PID+Vertex**. In the first two configurations, the vertex detectors are removed from the geometry of the geant4 simulation to assess the importance of the vertex detectors in the \(D^{0}\) reconstruction. For **no PID**, the PID acceptance in Table 2 is not applied, and all possible combinations formed by particles with opposite charges are considered. As shown in Fig. 7, the signal peak of the green line (**no PID**) is almost invisible. Compared to the green line (**no PID**), the background of the blue line (**with PID**) is greatly suppressed when the PID acceptance is applied. Similarly, compared to the green line (**no PID**), the background of the red line (**PID+Vertex**) is also suppressed, and the mass peak is narrowed. This is because the momentum resolution improves when the vertex detectors are installed. After adding the vertex detectors, the significance improves from 13.9 to 21.8.
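To make the iterative cut optimization concrete, here is a minimal sketch of a one-dimensional significance scan; the exponential toy distributions and yields stand in for the smeared pythia candidates and are assumptions, not the analysis inputs.

```python
import numpy as np

# Sketch of the cut optimization: scan one topological cut at a time and
# keep the value that maximizes S / sqrt(S + B). Toy candidate arrays
# (per-pair DCA and a truth flag) stand in for the smeared pythia events.

def significance(is_signal, mask):
    S = np.sum(is_signal & mask)
    B = np.sum(~is_signal & mask)
    return S / np.sqrt(S + B) if S + B > 0 else 0.0

def scan_cut(values, is_signal, thresholds, keep_below=True):
    best_t, best_sig = None, -1.0
    for t in thresholds:
        mask = values < t if keep_below else values > t
        sig = significance(is_signal, mask)
        if sig > best_sig:
            best_t, best_sig = t, sig
    return best_t, best_sig

rng = np.random.default_rng(0)
n_sig, n_bkg = 2_000, 50_000
dca = np.concatenate([rng.exponential(40, n_sig),      # signal DCA in microns (toy)
                      rng.exponential(400, n_bkg)])    # combinatorial background (toy)
truth = np.concatenate([np.ones(n_sig, bool), np.zeros(n_bkg, bool)])

cut, sig = scan_cut(dca, truth, thresholds=np.arange(20, 400, 10))
print(f"optimal DCA cut: {cut} um, significance {sig:.1f}")
```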
The reconstruction procedure is also carried out for \(\Lambda_{c}^{+}\). The difference is that the shape of the \(\Lambda_{c}^{+}\) peak is non-Gaussian, as shown in the bottom panel of Fig. 9. Therefore, in the fit we use the \(\Lambda_{c}^{+}\) line shape obtained by smearing the daughter tracks of pure \(\Lambda_{c}^{+}\), together with a linear function for the background. The distributions of the \(\Lambda_{c}^{+}\) topological cuts are shown in Fig. 8, where we find that \(DCA_{\rm daughters}^{\rm max}\) is the only useful criterion. Therefore, in Fig. 9, the criterion optimization is performed only for \(DCA_{\rm daughters}^{\rm max}\), and the optimal requirement is \(DCA_{\rm daughters}^{\rm max}<1000\,\mu m\). The significance is improved from 34 to 38 with this requirement.

## III Results

### Baryon-to-meson ratios

The fragmentation ratios of the charm quark obtained from different experiments were long thought to be universal. Both the HERA \(ep\) collision measurements [23; 24] and the earlier \(pp\) collision measurements at the LHC [25] are consistent with the \(e^{+}e^{-}\) collision measurements [26]. However, an enhancement of the \(\Lambda_{c}^{+}/D^{0}\) ratio has been observed in recent measurements at ALICE [11; 27]. The deviation of the \(\Lambda_{c}^{+}/D^{0}\) ratio in \(pp\) collisions from that in \(e^{+}e^{-}\) collisions indicates that even a small hadronic system can affect charm hadronization. Therefore, the \(\Lambda_{c}^{+}/D^{0}\) ratio should be re-examined in the \(ep\) collision system, specifically at EicC, which provides unique kinematic coverage and high luminosity.

Figure 3: The detector's performance for \(\pi\): (a) the efficiency of track reconstruction, (b) the resolution of the reconstructed track momentum, (c) the resolution of the reconstructed track \(DCA\) in the \(z\) direction, (d) the resolution of the reconstructed track \(DCA\) in the \(r\phi\) plane. The different marker styles and line colors represent the different \(\eta\) regions of \(\pi\), which are shown in the legend of panel (a).

Two projections of the \(\Lambda_{c}^{+}/D^{0}\) ratio are provided in this subsection. \(ep\) collision data with beam energies of 3.5 GeV \(\times\) 20 GeV are generated by pythia 8.3 with the QCD color reconnection (QCD-CR) tuning [28] and the multiple-parton-interaction color reconnection (MPI-CR) tuning [20]. In Fig. 10, the statistical uncertainty projection as a function of \(p_{T}\) is shown. The curves are from pythia 8.3, where the solid and dashed curves correspond to the QCD-CR and MPI-CR settings, respectively. The values of the red points are the averages of the two models in the corresponding rapidity regions. The statistical uncertainty is estimated as the root mean square of the sum of squares of the errors propagated from all sources. For the \(\Lambda_{c}^{+}/D^{0}\) ratio, the expression is \[\sigma(\frac{\Lambda_{c}^{+}}{D^{0}})=\sqrt{(\frac{\sigma(\Lambda_{c}^{+})}{N_{D^{0}}})^{2}+(\frac{N_{\Lambda_{c}^{+}}}{N_{D^{0}}^{2}}\sigma(D^{0}))^{2}},\] where the statistical uncertainties of \(D^{0}\) and \(\Lambda_{c}^{+}\) (\(\sigma(D^{0})\) and \(\sigma(\Lambda_{c}^{+})\)) are calculated as the signal yields divided by the corresponding significances. The statistical uncertainty is scaled to an integrated luminosity of 100 fb\({}^{-1}\).
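A direct transcription of this error propagation, with placeholder yields and the significances quoted in Sec. II, might look as follows.

```python
import numpy as np

# Sketch of the statistical-uncertainty propagation for the Lambda_c+/D0
# projection; the yields below are placeholders, the significances reuse
# the 21.8 (D0) and 38 (Lambda_c+) values quoted in the text.

def ratio_uncertainty(N_D0, sig_D0, N_Lc, sig_Lc):
    s_D0, s_Lc = N_D0 / sig_D0, N_Lc / sig_Lc   # sigma = yield / significance
    return np.sqrt((s_Lc / N_D0) ** 2 + (N_Lc * s_D0 / N_D0 ** 2) ** 2)

print(ratio_uncertainty(N_D0=5.0e4, sig_D0=21.8, N_Lc=1.0e4, sig_Lc=38.0))
```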
The results of the \(\Lambda_{c}^{+}/D^{0}\) ratio are presented in two rapidity regions, the mid-rapidity region (\(|y|<1\)) and the forward-rapidity region (\(1<y<3\)), shown as solid circles and triangles, respectively. The measurement in \(pp\) collisions from ALICE [11] is also presented and shown as black solid circles in Fig. 10. The black points represent the central values, the black lines are the statistical uncertainties, and the grey boxes are the systematic uncertainties. Compared to ALICE, the statistical precision is significantly improved at EicC due to the high luminosity and the much smaller hadronic combinatorial background. The EicC results show that the uncertainties are smaller than the difference between the two models at both mid- and forward-rapidity. This suggests that EicC has sufficient power to distinguish between different hadronization models. In addition, the \(\Lambda_{c}^{+}/D^{0}\) ratios behave differently in the mid- and forward-rapidity regions. A wide rapidity coverage, benefiting from the large acceptance of the detector design, can be achieved at EicC.

Figure 5: Normalized \(D^{0}\) topological cut distributions, where the blue lines are for the signal and the red lines are for the background.

Figure 6: Topological requirements are optimized to ensure the highest significance of \(D^{0}\). The previous best results are applied to the next round of optimization.

Figure 7: The \(\pi K\) invariant mass distributions with different detector performance configurations: (**no PID**) without either the PID system or the vertex detectors, (**with PID**) with the PID system but without the vertex detectors, (**PID+Vertex**) with the PID system and the vertex detectors. The significance is obtained at an integrated luminosity of 0.04 fb\({}^{-1}\). The significances of **with PID** and **PID+Vertex** are 13.9 and 21.8, respectively. The \(S/B\) of **with PID** and **PID+Vertex** are 0.06 and 0.215, respectively.

Figure 8: \(\Lambda_{c}^{+}\) topological cut distributions. The blue dashed and red solid lines represent the distributions for the background and signal, respectively. Because the differences of the \(decayL_{XY}^{\Lambda_{c}^{+}}\) and \(\cos\theta_{XY}\) distributions between signal and background are small, \(DCA_{\rm Daughters}^{max}\) is the only topological requirement taken into account.

Figure 9: The results of the \(\Lambda_{c}^{+}\) reconstruction: (a) the significance scan for \(DCA_{\rm Daughters}^{max}\); (b) the \(\pi Kp\) invariant mass distributions. The red solid and blue dashed lines represent the distributions without and with the requirement applied, respectively.

Figure 10: The values of the red dashed and red solid lines are calculated from pythia 8.3 with QCD-CR and MPI-CR. The red solid triangles and circles are the centers between the curves of the two models at both mid- and forward-rapidity. The red error bars represent the projection of the \(\Lambda_{c}^{+}/D^{0}\) ratio statistical uncertainties at an integrated luminosity of \(100\,\mathrm{fb}^{-1}\). The black points are the results of the ALICE measurement, where the error bars are the statistical uncertainties and the grey boxes are the systematic uncertainties [11].

As mentioned above, there is an enhancement of \(\Lambda_{c}^{+}\) production caused by beam-remnant formation in the very forward rapidity region. With the wider rapidity coverage of EicC, the interaction between the beam remnant and the charm quark can also be studied in greater detail. Fig. 11 shows the statistical uncertainty projections of \(\Lambda_{c}^{+}/D^{0}\) as a function of the charged-particle multiplicity as red marks. The high-multiplicity results from an EIC-BNL simulation [22] are shown as open blue marks.

Figure 11: The projection of the statistical uncertainty of the \(\Lambda_{c}^{+}/D^{0}\) ratio as a function of multiplicity. The description of the lines and points is similar to that of Fig. 10. The red and blue points are the EicC results and the ATHENA results [22], respectively.
Only the tracks within \(|\eta|<3\), with \(p_{T}>0.2\,\mathrm{GeV}/c\), and not from \(D^{0}\) or \(\Lambda_{c}^{+}\) decays are counted in the number of charged tracks. The dashed and solid lines are from the calculations of the MPI-CR and QCD-CR pythia 8.3 models, respectively. The red and blue curves are the results for EicC and EIC-BNL, respectively. EicC can provide a measurement of the \(\Lambda_{c}^{+}/D^{0}\) ratio at low multiplicity as a complement to the EIC-BNL measurements. The measurement of the charm baryon-to-meson ratio can provide insight into the interplay between the hard and soft processes that produce particles. In \(pp\) or \(AA\) collisions, a higher charged-particle multiplicity corresponds to a higher collision energy and lower impact parameters. Events with different collision energies and impact parameters will have different underlying event structures. Thus, a wide coverage of multiplicity is necessary to study charm hadronization in \(\gamma\)-\(Nucleon\) and \(\gamma\)-\(Nuclei\) collision systems.

### \(D^{0}\) double ratio

Hadron production is affected by initial- and final-state interactions. The primary initial-state effect is the EMC effect, which is not of interest in this particular study. We focus on the final-state interaction and the effect of the surrounding nuclear medium on hadronization. To minimize the initial-state effect, the observable double ratio is defined as \[R_{eA}^{h}(Q^{2},x_{B},z)=\frac{N_{A}^{h}(Q^{2},x_{B},z)/N_{A}^{DIS}(Q^{2},x_{B})}{N_{p}^{h}(Q^{2},x_{B},z)/N_{p}^{DIS}(Q^{2},x_{B})},\] where \(N^{DIS}\) and \(N^{h}\) are the number of DIS events and the number of SIDIS events in which hadrons \(h\) are produced, respectively. The DIS requirements, including (i) \(Q^{2}>2\,\mathrm{GeV}^{2}\), (ii) \(\nu<96\,\mathrm{GeV}\), and (iii) \(W^{2}>4\,\mathrm{GeV}^{2}\), are applied.

The statistical uncertainty projection of the \(D^{0}\) double ratio as a function of \(z\) is shown in Fig. 12. Here, the electron-nucleus collision is the \(eAu\) collision with an electron momentum of \(3.5\,\mathrm{GeV}/c\) and an \(Au\) momentum of \(12.93\,\mathrm{GeV}/c/u\). The central values of all the points are set to 1 because there should be no medium modification in pythia. The statistical uncertainty is scaled to an integrated luminosity of \(10\,\mathrm{fb}^{-1}/\mathrm{u}\) for the electron-nucleus and electron-proton collision data. The open red circles are the EicC results for the whole kinematic region. The solid red circles are the EicC results for the additional DIS criteria \(Q^{2}<8\,\mathrm{GeV}^{2}\) and \(48\,\mathrm{GeV}<\nu<77\,\mathrm{GeV}\). The EicC specific kinematic coverage in Fig. 12 is chosen to be a region where \(\nu\) is rather low and \(D^{0}\) are produced abundantly. There are precise measurements of the light-hadron double ratios (\(\pi^{+/-},K^{+/-},p\) and \(\bar{p}\)) versus multiple variables at HERMES. However, as mentioned earlier, two theories with different time scales can adequately describe the HERMES data [16; 29]. To provide an additional powerful constraint on these theories, the \(D^{0}\) double ratio is necessary because it behaves very differently from that of light hadrons. For light hadrons, the values of the double ratio in all \(z\) bins are below unity due to nuclear attenuation or parton energy loss.
Figure 12: The \(D^{0}\) double ratio: the statistical uncertainty projections correspond to an integrated luminosity of \(\mathcal{L}_{int}=10\,\mathrm{fb}^{-1}\). The open and solid circles represent the results for the EicC total kinematic coverage (EicC Total: \(2\,\mathrm{GeV}^{2}<Q^{2}<200\,\mathrm{GeV}^{2},\nu<96\,\mathrm{GeV}\)) and for a smaller specific kinematic coverage (EicC Specific: \(2\,\mathrm{GeV}^{2}<Q^{2}<8\,\mathrm{GeV}^{2}\), \(48\,\mathrm{GeV}<\nu<77\,\mathrm{GeV}\)).

However, for \(D^{0}\), there is a peak in the \(c\to D^{0}\) fragmentation function at low \(z\) (\(z<0.1\)), so the \(D^{0}\) double ratio is higher than unity in the low-\(z\) bins and lower than unity in the high-\(z\) bins, for the same reason as the light-hadron suppression. In addition, a lower c.m. energy can produce a larger cold nuclear effect [30]. Therefore, EicC, which has a lower c.m. energy than EIC-BNL but can abundantly produce \(D^{0}\), is an ideal facility to study how the nuclear medium affects the hadronization process. Additionally, it can complement EIC-BNL by covering a wider kinematic region.

## IV Summary

In summary, open charm reconstruction is studied with a fast simulation, where the detector performance parameterizations are derived from the geant4 simulation with the developing detector geometry designed for EicC. The reconstruction of \(D^{0}(\to K^{-}\pi^{+})\) and \(\Lambda_{c}^{+}(\to p\pi^{+}K^{-})\) in \(\sqrt{s}=16.7\,\mathrm{GeV}\) \(ep\) collisions demonstrates the importance of the vertex detectors in reducing the background and improving the momentum resolution. This leads to higher signal significance and lower statistical uncertainties, providing sufficient precision for future charm hadronization studies at EicC. We have projected the statistical uncertainty of the \(\Lambda_{c}^{+}/D^{0}\) ratio as a function of \(p_{T}\) and of the charged-particle multiplicity. Different hadronization models (QCD-CR and MPI-CR) can be distinguished with the projected statistical uncertainties at an integrated luminosity of \(\mathcal{L}_{int}=100\,\mathrm{fb}^{-1}\). A broader rapidity coverage than ALICE can provide abundant information about hadronization in \(\gamma\)-\(Nucleon\) and \(\gamma\)-\(Nuclei\) collision systems. The statistical uncertainties of the \(D^{0}\) double ratio, which behaves differently from the light-hadron double ratios, are projected as a function of \(z\) at an integrated luminosity of \(\mathcal{L}_{int}=10\,\mathrm{fb}^{-1}/\mathrm{u}\). Precise measurements at EicC will provide an excellent opportunity to understand the charm hadronization mechanisms and how charm interacts with the nuclear medium.

###### Acknowledgements.

This work is supported in part by the Strategic Priority Research Program of the Chinese Academy of Sciences under grant number XDB34000000, the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, the Guangdong Provincial Key Laboratory of Nuclear Science with No. 2019B121203010, the National Natural Science Foundation of China with Grant No. 11890712, 12061141008, and the National Key R&D Program of China with Grant No. 2018YFE0104700 and 2018YFE0205200. The authors acknowledge the computing resources available at the Southern Nuclear Science Computing Center.
2304.05866
NoisyTwins: Class-Consistent and Diverse Image Generation through StyleGANs
StyleGANs are at the forefront of controllable image generation as they produce a latent space that is semantically disentangled, making it suitable for image editing and manipulation. However, the performance of StyleGANs severely degrades when trained via class-conditioning on large-scale long-tailed datasets. We find that one reason for degradation is the collapse of latents for each class in the $\mathcal{W}$ latent space. With NoisyTwins, we first introduce an effective and inexpensive augmentation strategy for class embeddings, which then decorrelates the latents based on self-supervision in the $\mathcal{W}$ space. This decorrelation mitigates collapse, ensuring that our method preserves intra-class diversity with class-consistency in image generation. We show the effectiveness of our approach on large-scale real-world long-tailed datasets of ImageNet-LT and iNaturalist 2019, where our method outperforms other methods by $\sim 19\%$ on FID, establishing a new state-of-the-art.
Harsh Rangwani, Lavish Bansal, Kartik Sharma, Tejan Karmali, Varun Jampani, R. Venkatesh Babu
2023-04-12T13:56:45Z
http://arxiv.org/abs/2304.05866v1
# NoisyTwins: Class-Consistent and Diverse Image Generation through StyleGANs

###### Abstract

StyleGANs are at the forefront of controllable image generation as they produce a latent space that is semantically disentangled, making it suitable for image editing and manipulation. However, the performance of StyleGANs severely degrades when trained via class-conditioning on large-scale long-tailed datasets. We find that one reason for degradation is the collapse of latents for each class in the \(\mathcal{W}\) latent space. With NoisyTwins, we first introduce an effective and inexpensive augmentation strategy for class embeddings, which then decorrelates the latents based on self-supervision in the \(\mathcal{W}\) space. This decorrelation mitigates collapse, ensuring that our method preserves intra-class diversity with class-consistency in image generation. We show the effectiveness of our approach on large-scale real-world long-tailed datasets of ImageNet-LT and iNaturalist 2019, where our method outperforms other methods by \(\sim 19\%\) on FID, establishing a new state-of-the-art.

## 1 Introduction

StyleGANs [21, 22] have shown unprecedented success in image generation, particularly on well-curated and articulated datasets (e.g., FFHQ for face images). In addition to generating high-fidelity and diverse images, StyleGANs also produce a disentangled latent space, which is extensively used for image editing and manipulation tasks [50]. As a result, StyleGANs are being extensively used in various applications like face editing [11, 42], video generation [47, 53], face reenactment [3], etc., which are a testament to their usability and generality. However, despite being successful on well-curated datasets, training StyleGANs on in-the-wild and multi-category datasets is still challenging. A large-scale conditional StyleGAN (i.e., StyleGAN-XL) was recently trained successfully on ImageNet by Sauer _et al_. [40], using an ImageNet pre-trained model through the idea of a projection discriminator [39]. While StyleGAN-XL uses additional pre-trained models, obtaining such models for distinctive image domains like medical, forensics, and fine-grained data may not be feasible, which limits its generalization across domains.

In this work, we aim to train a vanilla class-conditional StyleGAN without any pre-trained models on challenging real-world long-tailed data distributions. This is challenging, as training StyleGAN with augmentations [55, 20] leads to low recall [24] (which measures diversity in the generated images) and mode collapse, particularly for the minority (i.e., tail) classes. To investigate this phenomenon further, we take a closer look at the latent \(\mathcal{W}\) space of StyleGAN, which is produced by a fully-connected mapping network that takes the conditioning variables \(\mathbf{z}\) (i.e., random noise) and the class embedding \(\mathbf{c}\) as inputs. The vectors \(\mathbf{w}\) in \(\mathcal{W}\) space are used for conditioning various layers of the generator (Fig. 2). We find that the output vectors \(\mathbf{w}\) from the mapping network hinge on the conditioning variable \(\mathbf{c}\) and become invariant to the random conditioning vector \(\mathbf{z}\). This collapse of latents leads to unstable training and is one of the causes of poor recall (a.k.a. mode collapse) for minority classes.

Figure 1: **Qualitative Comparison on tail classes (T1-T4) for iNaturalist 2019.** We provide sample(s) from the real class (with class frequency), generated by StyleGAN2-ADA and after adding the proposed NoisyTwins. NoisyTwins achieves remarkable diversity, class-consistency and quality by using just 38 samples on average.
Further, on augmenting StyleGAN with recent conditioning and regularization techniques [17, 36], we find that they either lead to poor recall for minority classes or to class confusion (Fig. 2), instead of mitigating the collapse. To mitigate the collapse of \(\mathbf{w}\) in \(\mathcal{W}\) space, we need to ensure that a change in the conditioning variable \(\mathbf{z}\) leads to a corresponding change in \(\mathbf{w}\). Recently, in self-supervised learning, several techniques [2, 54] have been introduced to prevent the collapse of learned representations by maximizing the information content in the feature dimensions. Inspired by them, we propose NoisyTwins, in which we first generate inexpensive twin augmentations for class embeddings and then use them to decorrelate the \(\mathbf{w}\) variables through self-supervision. The decorrelation ensures that the \(\mathbf{w}\) vectors are diverse for each class and that the GAN is able to produce intra-class diversity among the generated images.

We evaluate NoisyTwins on the challenging benchmarks of the large-scale long-tailed datasets ImageNet-LT [31] and iNaturalist 2019 [49]. These benchmarks are particularly challenging due to the large number of classes present, which makes GANs prone to class confusion. On the other hand, as these datasets are long-tailed with only a few images per class in the tail classes, generating diverse images for those classes is challenging. We observe that existing metrics used in GAN evaluations are not able to capture both class confusion and mode collapse. As a remedy, we propose to use the intra-class Frechet Inception Distance (FID) [12] based on features obtained from pre-trained CLIP [35] embeddings as an effective metric to measure the performance of class-conditional GANs in long-tailed data setups. Using NoisyTwins enables StyleGAN to generate diverse and class-consistent images across classes, mitigating the mode collapse and class confusion issues in the existing state-of-the-art (SotA) (Fig. 1). Further, with NoisyTwins, we obtain diverse generations for tail classes even with \(\leq\) 30 images, which can be attributed to the transfer of knowledge from head classes through shared parameters (Fig. 1 and 6). In summary, we make the following contributions:

1. We evaluate various recent SotA GAN conditioning and regularization techniques on the challenging task of long-tailed image generation. We find that all existing methods either suffer from mode collapse or lead to class confusion in generations.
2. To mitigate mode collapse and class confusion, we introduce NoisyTwins, an effective and inexpensive augmentation strategy for class embeddings that decorrelates latents in the \(\mathcal{W}\) latent space (Sec. 4).
3. We evaluate NoisyTwins on the large-scale long-tailed datasets ImageNet-LT and iNaturalist-2019, where it consistently improves the StyleGAN2 performance (\(\sim 19\%\)), achieving a new SotA. Further, our approach can also prevent mode collapse and enhance the performance of few-shot GANs (Sec. 5.3).

## 2 Related Works

**StyleGANs.** Karras _et al_. introduced StyleGAN [21] and subsequently improved its image quality in StyleGAN2 [23]. StyleGAN can produce high-resolution photorealistic images, as demonstrated on various category-specific datasets.
It introduced a mapping network, which maps the sampled noise into another latent space that is more disentangled and semantically coherent, as demonstrated by its downstream usage for image editing and manipulation [1, 33, 34, 44]. Further, StyleGAN has been extended to obtain novel views from images [27, 45, 29], thus making it possible to extract 3D information from it. These downstream advances are possible due to the impressive performance of StyleGANs on class-specific datasets (such as faces). However, similar photorealism levels are yet uncommon on multi-class long-tailed datasets (such as ImageNet).

**GANs for Data Efficiency and Imbalance.** The failure of GANs with limited data was concurrently reported by Karras _et al_. [20] and Zhao _et al_. [55]. The problem is rooted in the overfitting of the discriminator due to the scarcity of real data. Since then, the proposed solutions for this problem have relied on a) augmenting the data, b) introducing regularizers, and c) architectural modifications. Karras _et al_. [20] and Zhao _et al_. [55] relied on differentiable data augmentation before passing images into the discriminator to solve this problem. DeceiveD [15] proposed to introduce label noise for discriminator training.

Figure 2: **Schematic illustration of \(\mathcal{W}\) space for different GANs.** Existing conditioning methods either suffer from mode collapse [20] or lead to class confusion [36] in \(\mathcal{W}\) space. With the proposed NoisyTwins, we achieve intra-class diversity while avoiding class confusion.

LeCamGAN [48] finds that enforcing the LeCam divergence as a regularization in the discriminator can robustify GAN training under limited-data settings. DynamicD [51] tunes the capacity of the discriminator on-the-fly during training. While these methods can handle data inefficiency, they are ineffective on class-imbalanced long-tailed data distributions [36]. CBGAN [37] proposed a solution to train an unconditional GAN on long-tailed data distributions by introducing a signal from a classifier to balance the classes generated by the GAN. In a long-tailed class-conditional setting, gSR [36] proposes to regularize the exploding spectral norms of the class-specific parameters of the GAN. Collapse-by-conditioning [41] addresses the limited data in classes by introducing a training regime that transitions from an unconditional to a class-conditioned setting, thus exploiting the shared information across classes during the early stages of the training. However, these methods suffer from either class confusion or poor generated image quality on large datasets, which is resolved by NoisyTwins.

**Self-Supervised Learning for GANs.** Ideas from self-supervised learning have shown their benefits in GAN training. IC-GAN [6] trains a GAN conditioned on SwAV [5] embeddings, which led to remarkable improvement in performance on the long-tailed version of ImageNet. InsGen [52] and ReACGAN [16, 17] introduce the auxiliary task of instance discrimination for the discriminator, thereby making the discriminator focus on multiple tasks and thus alleviating discriminator overfitting. While InsGen relies on both noise-space and image-space augmentations, ReACGAN and ContraGAN follow only image-space augmentations. Contrary to these, NoisyTwins performs augmentations in the class-embedding space and contrasts them in the \(\mathcal{W}\)-space of the generator instead of the discriminator.
## 3 Preliminaries

### 3.1 StyleGAN

StyleGAN [21] is a Generative Adversarial Network comprising its unique style-based generator (\(\mathcal{G}\)) and a discriminator network (\(\mathcal{D}\)), trained jointly. We will focus on the architecture of StyleGAN2 [23], as we use it in our experiments, although our work is generally applicable to all StyleGAN architectures. The StyleGAN2 generator is composed of blocks that progressively upsample the features and resolution, inspired by Progressive GAN [19], starting from a single root image. The diversity in the images comes from conditioning each block of the image generation on the latent produced by the mapping network (Fig. 2). The mapping network is a fully connected network that takes in the conditioning variables: the noise \(\mathbf{z}\in\mathbb{R}^{d}\) drawn from a random distribution (e.g., Gaussian) and the class label \(c\), which is converted to an embedding \(\mathbf{c}\in\mathbb{R}^{d}\). The mapping network takes these and outputs vectors \(\mathbf{w}\) in the \(\mathcal{W}\) latent space of StyleGAN, which is found to be semantically disentangled to a high extent [50]. The \(\mathbf{w}\) is then processed through an affine transform and passed to each generator layer for conditioning the image generation process through Adaptive Instance Normalization (AdaIN) [13]. The images from the generator \(\mathcal{G}\), along with real images, are passed to the discriminator \(\mathcal{D}\) for training. The training utilizes the non-saturating adversarial losses [9] for \(\mathcal{G}\) and \(\mathcal{D}\), given as: \[\min_{\mathcal{D}}\mathcal{L}_{\mathcal{D}} =-\sum_{i=1}^{m}\log(\mathcal{D}(\mathbf{x}_{i}))+\log(1-\mathcal{D}(\mathcal{G}(\mathbf{z}_{i},\mathbf{c}_{i}))) \tag{1}\] \[\min_{\mathcal{G}}\mathcal{L}_{\mathcal{G}} =\sum_{i=1}^{m}-\log(\mathcal{D}(\mathcal{G}(\mathbf{z}_{i},\mathbf{c}_{i}))) \tag{2}\] We now describe the issues present in StyleGANs trained on long-tailed data and their analysis in \(\mathcal{W}\) space.

### 3.2 Class Confusion and Class-Specific Mode Collapse in Conditional StyleGANs

To analyze the performance of StyleGAN and its variants on long-tailed datasets in detail, we train them on the CIFAR10-LT dataset. In Fig. 3, we plot the qualitative results of generated images and create a t-SNE plot of the latents in \(\mathcal{W}\) space for each class. We first train the StyleGAN2 baseline with augmentations (DiffAug) [20, 55]. We find that it leads to mode collapse, specifically for tail classes (Fig. 3). In conjunction with the images, we also observe that the corresponding t-SNE embeddings are collapsed near each class's mean in \(\mathcal{W}\) space. Further, recent methods propose the usage of contrastive learning for GANs to improve their data efficiency and prevent discriminator overfitting [14, 16]. We also evaluate them by adding the contrastive conditioning method, i.e., the D2D-CE loss from ReACGAN [17], to the baseline, and observe that the network fails to learn tail classes and produces head-class images in their place (i.e., class confusion). In Fig. 3, it can be seen that the network confuses semantically similar classes, that is, generating cars (head or majority class) in place of trucks and airplanes (head class) instead of ships. In the \(\mathcal{W}\) space, we find the same number of clusters as the number of classes in the dataset; however, the images in the tail-label clusters also belong to the head classes of cars and airplanes.
In the very recent work gSR [36], it was shown that constraining the spectral norm of the \(\mathcal{G}\) embedding parameters can help reduce mode collapse and lead to stable training. However, we find that constraining the embeddings leads to class confusion, as seen in the t-SNE visualization in Fig. 3. We find that this class confusion gets further aggravated when StyleGAN is trained on datasets like ImageNet-LT, which contain a large number of classes, including many semantically similar ones (Sec. 5.2). Based on our \(\mathcal{W}\)-space analysis and qualitative results above, we observe that class confusion and mode collapse in the images are tightly coupled with the structure of \(\mathcal{W}\) space. Further, the recent SotA methods are either unable to prevent collapse or suffer from class confusion. Hence, this work aims to develop a technique that mitigates both confusion and collapse. ## 4 Approach In this section, we present our method NoisyTwins, which introduces noise-based augmentation twins in the conditional embedding space (Sec. 4.1) and then combines them with a Barlow-Twins-based regularizer from the self-supervised learning (SSL) paradigm to resolve the issues of class confusion and mode collapse (Sec. 4.2). ### Noise Augmentation in Embedding Space As observed in the previous section, the \(\mathbf{w}\) vectors for each sample become insensitive to changes in \(\mathbf{z}\). This collapse in the \(\mathbf{w}\) vectors for each class leads to mode collapse for the baselines (Fig. 3). One reason for this could be the fact that \(\mathbf{z}\) is composed of continuous variables, whereas the embedding vectors \(\mathbf{c}\) for each class are discrete. Due to this, the GAN converges to the easy degenerate solution where it generates a single sample for each class, becoming insensitive to changes in \(\mathbf{z}\). To induce some continuity in the \(\mathbf{c}\) embedding vectors, we introduce an augmentation strategy where we add i.i.d. noise of small magnitude to each of the variables in \(\mathbf{c}\). Based on our observation (Fig. 3) and existing works [36], there is a high tendency for mode collapse in tail classes. Hence, we add noise in embedding space whose magnitude is inversely related to the frequency of class samples. We provide the mathematical expression of the noise augmentation \(\mathbf{\tilde{c}}\) below: \[\mathbf{\tilde{c}}\sim\mathbf{c}+\mathcal{N}(\mathbf{0},\sigma_{c}\mathbb{I}_{d})\text{ where }\sigma_{c}=\sigma\frac{(1-\alpha)}{1-\alpha^{n_{c}}} \tag{3}\] Here \(n_{c}\) is the frequency of training samples in class \(c\), \(\mathbb{I}\) is the identity matrix of size \(d\times d\), and \(\alpha,\sigma\) are hyperparameters. The expression for \(\sigma_{c}\) comes from the effective number of samples [8], which is a softer version of inverse frequency proportionality. In contrast to image-space augmentations, these _noise augmentations come for free_, as they incur no significant additional computational overhead. This noise is added to the embedding \(\mathbf{c}\) before passing it to the generator and the discriminator, which ensures that the class embeddings occupy a continuous region in latent space. **Insight:** The augmentation equation above (Eq. 3) can be interpreted as approximating the discrete random variable \(\mathbf{c}\) with a Gaussian of finite variance, with the embedding parameters \(\mathbf{c}\) serving as the mean \(\mu_{\mathbf{c}}\): 
\[\mathbf{\tilde{c}}\sim\mathcal{N}(\mu_{\mathbf{c}},\sigma_{c}\mathbb{I}_{d}) \tag{4}\] This gives the class-embedding input \(\mathbf{\tilde{c}}\) to the mapping network a Gaussian distribution, similar in nature to \(\mathbf{z}\). This noise augmentation strategy alone mitigates the degenerate solution of class-wise mode collapse to a great extent (Table 1) and helps generate diverse latents \(\mathbf{w}\) for each class. Since the GAN is conditioned on these diverse \(\mathbf{w}\), this leads to diverse image generation. ### Invariance in \(\mathcal{W}\)-Space with NoisyTwins The augmentation strategy introduced in the previous section expands the region for each class in \(\mathcal{W}\) latent space. Figure 3: **Comparison of GANs and their \(\mathcal{W}\) space for CIFAR10-LT.** We plot the generated images on (_left_) and generate a t-SNE plot of \(\mathbf{w}\) latents for generated images in \(\mathcal{W}\) space (_right_). We find that mode collapse and class confusion in images is linked to the corresponding collapse and confusion in latent \(\mathcal{W}\) space. Our proposed NoisyTwins mitigates both collapse (_left_) and confusion (_right_) simultaneously. Although this does lead to diverse image generation, as the \(\mathbf{w}\) are diverse, it does not ensure that these \(\mathbf{w}\) will generate class-consistent outputs for augmentations of the embedding (\(\tilde{\mathbf{c}}\)). To obtain class-consistent outputs, we need to ensure that \(\mathbf{w}\) is invariant to the noise augmentation. To enforce invariance to augmentations, a set of recent works [2, 10, 54] in self-supervised learning make the representations of augmentations similar through regularization. Among them, we focus on Barlow Twins, as it does not require large batch sizes. Inspired by Barlow Twins, we introduce NoisyTwins (Fig. 4), where we generate twin augmentations \(\tilde{\mathbf{c}}_{a}\) and \(\tilde{\mathbf{c}}_{b}\) of the same class embedding (\(\mu_{c}\)) and concatenate each to the same \(\mathbf{z}\). After creating a batch of such inputs, they are passed to the mapping network to get batches of augmented latents (\(\tilde{\mathbf{W}}_{A}\) and \(\tilde{\mathbf{W}}_{B}\)). These batches are then used to calculate the cross-correlation matrix of the latent variables, given as: \[\mathbf{C}_{j,k}=\frac{\sum\limits_{(\tilde{\mathbf{w}}_{a},\tilde{\mathbf{w}}_{b})\in(\tilde{\mathbf{W}}_{A},\tilde{\mathbf{W}}_{B})}\tilde{\mathbf{w}}_{a}^{j}\tilde{\mathbf{w}}_{b}^{k}}{\sqrt{\sum\limits_{\tilde{\mathbf{w}}_{a}\in\tilde{\mathbf{W}}_{A}}(\tilde{\mathbf{w}}_{a}^{j})^{2}}\sqrt{\sum\limits_{\tilde{\mathbf{w}}_{b}\in\tilde{\mathbf{W}}_{B}}(\tilde{\mathbf{w}}_{b}^{k})^{2}}} \tag{5}\] The matrix \(\mathbf{C}\) is a square matrix whose size equals the dimensionality of the latents \(\mathbf{w}\). The final loss, based on the cross-correlation matrix, is given as: \[\mathcal{L}_{\mathcal{NT}}=\sum_{j}(1-\mathbf{C}_{jj})^{2}+\gamma\sum_{j\neq k}\mathbf{C}_{j,k}^{2} \tag{6}\] The first term tries to make the two latents (\(\tilde{\mathbf{w}}_{a}\) and \(\tilde{\mathbf{w}}_{b}\)) invariant to the noise augmentation applied (i.e., similar), whereas the second term tries to decorrelate the different variables, thus maximizing the information in the \(\mathbf{w}\) vector [54] (see Appendix). The hyperparameter \(\gamma\) determines the relative importance of the two terms. 
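For concreteness, the following is a minimal PyTorch-style sketch of the two ingredients above: the noise augmentation of Eq. 3 and the Barlow-Twins-style regularizer of Eqs. 5-6. It is an illustration rather than the released `noisy_twins.py`; `mapping_network` is an assumed callable, and the latents are standardized along the batch dimension before computing the cross-correlation, following the Barlow Twins convention.

```python
import torch

def noise_augment(c, n_c, sigma=0.25, alpha=0.0):
    """Eq. 3: add class-frequency-dependent Gaussian noise to embeddings.

    c: (B, d) class embeddings mu_c; n_c: (B,) per-sample class frequencies.
    """
    sigma_c = sigma * (1.0 - alpha) / (1.0 - alpha ** n_c + 1e-12)
    return c + sigma_c.unsqueeze(1) * torch.randn_like(c)

def noisy_twins_loss(z, c, n_c, mapping_network, gamma=0.05):
    """Eqs. 5-6: invariance plus decorrelation on twin latents in W space."""
    w_a = mapping_network(z, noise_augment(c, n_c))   # (B, D) first twin
    w_b = mapping_network(z, noise_augment(c, n_c))   # (B, D) second twin
    # Standardize along the batch dimension (Barlow Twins convention).
    w_a = (w_a - w_a.mean(0)) / (w_a.std(0) + 1e-6)
    w_b = (w_b - w_b.mean(0)) / (w_b.std(0) + 1e-6)
    C = (w_a.T @ w_b) / z.shape[0]                    # (D, D) cross-correlation
    on_diag = (1.0 - torch.diagonal(C)).pow(2).sum()             # invariance term
    off_diag = (C - torch.diag(torch.diagonal(C))).pow(2).sum()  # decorrelation term
    return on_diag + gamma * off_diag
```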
This loss is then added to the generator loss term (\(\mathcal{L}_{\mathcal{G}}+\lambda\mathcal{L}_{\mathcal{NT}}\)) and optimized through backpropagation. The above procedure comprises our proposed method, NoisyTwins (Fig. 4), which we empirically evaluate in the subsequent sections. ## 5 Experimental Evaluation ### Setup **Datasets:** We primarily apply all methods to long-tailed datasets, as GANs trained on them are more prone to class confusion and mode collapse. We first report on the commonly used CIFAR10-LT dataset with an imbalance factor (ratio of the most to least frequent class) of 100. To show our approach's scalability and real-world applicability, we test our method on the challenging ImageNet-LT and iNaturalist 2019 datasets. ImageNet-LT [31] is a long-tailed variant of the 1000-class ImageNet dataset, with a plethora of semantically similar classes (e.g., dogs, birds), making it challenging to avoid class confusion. iNaturalist-2019 [49] is a real-world long-tailed dataset composed of 1010 different species, some of which have fine-grained differences in their appearance. For such fine-grained datasets, ImageNet pre-trained discriminators [39, 40] may not be useful, as the augmentations used to train those models make them invariant to fine-grained changes. **Training Configuration:** We use the StyleGAN2 architecture for all our experiments. All our experiments are performed using PyTorch-StudioGAN implemented by Kang [18], which serves as a base for our framework. We use Path Length Regularization (PLR), as it ensures that changes in \(\mathcal{W}\) space lead to changes in images, using a delayed PLR on ImageNet-LT following [40]. We use a batch size of 128 for all our experiments, with one G step per D step. Unless stated explicitly, we use the general training setup for StyleGANs from [18], including methods like \(R_{1}\) regularization [32]. More details on the exact training configuration for each dataset are provided in the Appendix. **Metrics:** In this work we use the following metrics for evaluating our methods: **a) FID:** Fréchet Inception Distance [12] is the Wasserstein-2 distance between Inception feature distributions of the real and generated data. We use a held-out validation set and 50k generated samples to evaluate FID in each case. As FID is biased towards ImageNet and can be arbitrarily manipulated, we also report \(\text{FID}_{\mathrm{CLIP}}\). **b) Precision & Recall:** As we aim to mitigate mode collapse and achieve diverse generations across classes, we use the improved Precision & Recall [25] metrics, as poor recall indicates mode collapse [40]. **c) Intra-Class \(\text{FID}_{\mathrm{CLIP}}\) (\(\text{iFID}_{\mathrm{CLIP}}\)):** Using only the FID based on Inception-V3 networks for evaluating generative models has severe limitations, as it has been found that FID can be reduced easily by some fringe features [26]. iFID is computed by taking the FID between 5k generated and real samples of the same class. As we want to evaluate both class consistency and diversity, we find that similar limitations exist for intra-class FID (iFID), which has been used to evaluate class-conditional GANs [18]. In Fig. 5, we show generated images for a particular class (more in the Appendix) from models trained on iNaturalist 2019, where iFID is better for the mode-collapsed model than for the model generating diverse images. 
In contrast, \(\text{iFID}_{\mathrm{CLIP}}\), based on a CLIP backbone, can rank the models correctly, with the mode-collapsed model having a high \(\text{iFID}_{\mathrm{CLIP}}\). Further, we find that the mean iFID can be deceptive in detecting class confusion and collapse cases, as it sometimes ranks models with high realism better than models generating diverse outputs (see Appendix). Hence, the mean \(\text{iFID}_{\mathrm{CLIP}}\) (referred to as \(\text{iFID}_{\mathrm{CLIP}}\) in the results section for brevity) can be reliably used to evaluate models for class consistency and diversity. **Baselines:** For evaluating the performance of NoisyTwins in comparison to other methods, we use the implementations present in StudioGAN [16]. For fairness, we re-run all the baselines on StyleGAN2 in the same hyperparameter setting. We compare our method to the StyleGAN2 (SG2) and StyleGAN2-with-augmentation (DiffAug [55] and ADA [20]) baselines. We further tried to improve the baseline by incorporating the recent LeCam regularization method; however, it resulted in gains only for the iNaturalist 2019 dataset, where we then use LeCam for all experiments. Further, on StyleGAN2 we also use contrastive D2D-CE loss conditioning (ReACGAN) as a baseline. However, the D2D-CE baseline completely ignores learning of tail classes (Fig. 3) for CIFAR10-LT and is expensive to train; hence, we do not report results for it on large-scale long-tailed datasets. We also compare our method against the recent SotA group Spectral Normalization (gSR) [36] method, which we implement for StyleGAN2 by constraining the spectral norms of the embedding parameters of the generator (\(\mathcal{G}\)), as suggested by its authors. As a sanity check, we reproduce their results on CIFAR10-LT and find that our implementation matches the reported results. We provide results on all datasets for both the proposed Noise Augmentation (+Noise) and the overall proposed NoisyTwins (+NoisyTwins) method. ### Results on Long-Tailed Data Distributions **CIFAR10-LT**. We applied DiffAug [55] to all baselines, except gSR, where we found that DiffAug provides inferior results compared to ADA (as also used by its authors [36]). It can be observed in Table 2 that the addition of NoisyTwins regularization significantly improves over the baseline (by \(\sim\)14 FID) while providing superior class consistency, as shown by the improved \(\text{iFID}_{\mathrm{CLIP}}\). NoisyTwins also outperforms the recent gSR regularization method and achieves improved results on all metrics. Further, NoisyTwins also improves the FID of the StyleGAN2-ADA baseline used by gSR, from 32.08 to 23.02, although these results remain inferior to the reported DiffAug baseline results. Further, we observed that despite not producing any tail-class images (Fig. 3), the D2D-CE baseline has a much better FID than the other baselines, whereas the proposed \(\text{iFID}_{\mathrm{CLIP}}\) value is similar for the baseline and the D2D-CE model. This clearly demonstrates the advantage of the proposed \(\text{iFID}_{\mathrm{CLIP}}\) in detecting class confusion. **Large-scale Long-Tailed Datasets.** We experiment with iNaturalist 2019 and ImageNet-LT. These datasets are particularly challenging as they contain long-tailed imbalances and semantically similar classes, making GANs prone to mode collapse and class confusion. The baselines StyleGAN2 and StyleGAN2-ADA both suffer from mode collapse (Fig. 6), particularly for the tail classes. 
For the recent SotA gSR method, we find that although it undergoes less collapse than the baselines, it suffers from class confusion, as seen from its Intra-\(\text{FID}_{\mathrm{CLIP}}\) being similar to that of the baselines (Table 1). Compared to that, our method NoisyTwins improves significantly when used with StyleGAN2-ADA, leading to a relative improvement of \(42.7\%\) in FID for ImageNet-LT and \(23.19\%\) on the iNaturalist 2019 dataset when added to the StyleGAN2-ADA baseline. Figure 5: **Choice of Eval. backbone:** intra-FID (iFID) of a class based on the InceptionV3 backbone (left plot) is not able to capture the mode collapse (increase in iFID in the absence of mode collapse). This is well captured by \(\text{iFID}_{\mathrm{CLIP}}\) based on the CLIP [35] backbone (right plot, decrease in iFID in the absence of mode collapse). Further, with Noise Augmentation (+Noise), we observe generation of high-quality, class-consistent images, but it also suffers from mode collapse. This can be observed from the high precision values paired with low recall values. However, adding NoisyTwins regularization on top of the noise augmentation improves diversity by improving recall (Table 1). Fig. 6 presents the generated images of tail classes for various methods on ImageNet-LT, where NoisyTwins generations show remarkable diversity in comparison to the others. The presence of diversity for classes with just 5-6 training images demonstrates successful transfer of knowledge from head classes to tail classes due to shared parameters. Further, to compare with existing SotA reported results, we compare the FID of BigGAN models from gSR [36] and Instance Conditioned GAN (ICGAN) [6]. For fairness, we compare FID on the validation set, for which we obtained the gSR models from the authors and re-evaluated them, as they reported FID on a balanced training set. As BigGAN models are more common for class-conditioned generation [18], their baseline performs better than the StyleGAN2-ADA baselines (Table 3). However, the addition of NoisyTwins to the StyleGAN2-ADA method improves it significantly, even outperforming the existing methods of gSR (by 18.44%) and ICGAN (by 9.44%) based on the BigGAN architecture. This shows that NoisyTwins allows the StyleGAN2 baseline to scale to large and diverse long-tailed datasets. ### NoisyTwins on Few-Shot Datasets We now demonstrate the potential of NoisyTwins in another challenging scenario: class-conditional few-shot image generation with GANs. We perform our experiments using a conditional StyleGAN2-ADA baseline, for which we tune hyperparameters to obtain a strong baseline. We then apply our Noise Augmentation and NoisyTwins methods on top of the strong baseline to report our results. We use the few-shot dataset of LHI-AnimalFaces [46] and a subset of ImageNet Carnivores [30, 41] to report our results. Table 4 shows the results of these experiments, where we find that our method, NoisyTwins, significantly improves the FID of the StyleGAN2-ADA baseline by 22.2% on average across both datasets. Further, combining NoisyTwins with the SotA Transitional-cGAN [41] through its official code also leads to an effective improvement in FID. These results clearly demonstrate the broad applicability of our proposed method, NoisyTwins. ## 6 Analysis We analyze NoisyTwins w.r.t. its hyperparameters: the standard deviation (\(\sigma\)) of the noise augmentation and the regularization strength (\(\lambda\)). We also compare the NoisyTwins objective (Eq. 6) with a contrastive objective. 
Finally, we compare NoisyTwins against Latent Diffusion Models on the long-tailed class-conditional generation task. We perform ablation experiments on CIFAR10-LT, for which additional details and results are provided in the Appendix. We also present a comparison of NoisyTwins for GAN fine-tuning. **How much noise and regularization strength is optimal?** In Fig. 7, we ablate over the noise variance parameter \(\sigma\) for CIFAR10-LT. We find that a moderate noise strength of 0.75 leads to optimal results. For the strength of the NoisyTwins loss (\(\lambda\)), we find that the algorithm performs similarly for values near 0.01 and is robust to this choice (Fig. 7). **Which type of self-supervision to use with noise augmentation?** The goal of our method is to achieve invariance to the noise augmentation in the \(\mathcal{W}\) latent space. This can be achieved using either contrastive learning-based methods like SimCLR [7] or negative-free methods like Barlow Twins [54]. The contrastive loss (SimCLR-based) produces an FID of 26.23, versus 17.74 for NoisyTwins (Barlow-Twins-based). We find that the contrastive baseline improves over the noise-augmentation baseline (28.90) but falls significantly short of NoisyTwins, as the former requires a large batch size to be effective, which is expensive for GANs. **How does NoisyTwins compare with modern vision and language models?** To evaluate the effectiveness of modern vision-language-based diffusion models, we test generation of the iNaturalist 2019 classes by creating the prompt "a photo of \(S\)", where \(S\) is replaced by the class name. We use the LDM [38] model trained on LAION-400M to perform inference, generating 50 images per class. We obtained an FID of 57.04, compared to the best FID of 11.46 achieved by NoisyTwins. This clearly demonstrates that for specific use cases like fine-grained generation, GANs are still ahead of general-purpose LDMs. ## 7 Conclusion In this work, we analyze the performance of StyleGAN2 models on real-world long-tailed datasets, including iNaturalist 2019 and ImageNet-LT. We find that existing works lead to either class confusion or mode collapse in the image space. This phenomenon is rooted in collapse and confusion in the latent \(\mathcal{W}\) space of StyleGAN2. Through our analysis, we deduce that this collapse occurs when the latents become insensitive to the random conditioning vectors \(\mathbf{z}\) and collapse for each class. To mitigate this, we introduce an inexpensive noise-based augmentation for the discrete class embeddings. Further, to ensure class consistency, we couple this augmentation technique with the Barlow Twins objective in the latent \(\mathcal{W}\) space, which imparts intra-class diversity to the latent \(\mathbf{w}\) vectors. The noise augmentation and regularization together comprise our proposed NoisyTwins technique, which improves the performance of StyleGAN2, establishing a new SotA on iNaturalist 2019 and ImageNet-LT. Extending NoisyTwins to conditioning on more complex attributes for StyleGANs is a promising direction for future work. **Acknowledgements**: This work was supported in part by SERB-STAR Project (STR/2020/000128). Harsh Rangwani is supported by the PMRF fellowship. 
\begin{table} \begin{tabular}{l c c|c c} \hline \hline & \multicolumn{2}{c|}{ImageNet Carnivore} & \multicolumn{2}{c}{AnimalFace} \\ \hline Method & FID(\(\downarrow\)) & iFID\({}_{\mathrm{CLIP}}\)(\(\downarrow\)) & FID(\(\downarrow\)) & iFID\({}_{\mathrm{CLIP}}\)(\(\downarrow\)) \\ \hline SG2 [23] & 111.83 & 36.34 & 94.09 & 29.94 \\ SG2+ADA [20] & 22.77 & 12.85 & 20.25 & 11.12 \\ \hline SG2+ADA+Noise (Ours) & 19.25 & 12.51 & 18.78 & 10.42 \\ + NoisyTwins (Ours) & **16.01** & **12.41** & **17.27** & **10.03** \\ \hline & \multicolumn{2}{c|}{FID(\(\downarrow\))} & \multicolumn{2}{c}{FID(\(\downarrow\))} \\ \hline Transitional-cGAN [41] & \multicolumn{2}{c|}{14.60} & \multicolumn{2}{c}{20.53} \\ + NoisyTwins (Ours) & \multicolumn{2}{c|}{**13.65**} & \multicolumn{2}{c}{**16.15**} \\ \hline \hline \end{tabular} \end{table} Table 4: **Quantitative results on ImageNet Carnivore and AnimalFace datasets.** Our method improves over both the StyleGAN2-ADA (SG2-ADA) baseline and the SotA Transitional-cGAN. Figure 6: **Qualitative results on ImageNet-LT for tail classes.** We find that existing SotA methods for tail classes show collapsed (a) or arbitrary (b) image generation. With NoisyTwins, we observe diverse and class-consistent image generation, even for classes having 5-6 images. The tail classes gain enhanced diversity by transferring knowledge from head classes, as they share parameters. Figure 7: **Ablation of hyperparameters.** Quantitative comparison on CIFAR10-LT for the standard deviation of Noise Augmentation (\(\sigma\)) and the strength (\(\lambda\)) of the NoisyTwins loss. ## Appendix A Notations and Code We summarize the notations used throughout the paper in Table 5. We provide PyTorch-style pseudo code for NoisyTwins in noisy_twins.py in the supplementary material. We will open-source our code to promote reproducible research. ## Appendix B Comparison of iFID and iFID\({}_{\mathrm{CLIP}}\) In this section, we present failure cases of the InceptionV3-based iFID in the detection of mode collapse, and show how the CLIP-based iFID can detect these cases. The InceptionV3-based iFID assigns a lower value to a generator with mode collapse than to another generator which creates diverse and class-consistent images. In addition to the example given in the main text (Fig. 5), we provide examples from three different classes (Fig. 9). In all four cases, the InceptionV3-based iFID is better for the mode-collapsed classes. _Whereas \(\text{iFID}_{\mathrm{CLIP}}\) follows the correct behavior, where the class-consistent and diverse model is ranked better_. Due to this inconsistent behavior, the mean iFID (mean across classes), which is commonly used as a metric for quantifying class confusion [17], can be misleading. For example, we observe that the StyleGAN2-ADA baseline with the proposed noise augmentation achieves a mean iFID of 243.88 on ImageNet-LT, compared to 257.29 for the NoisyTwins model (Table 1 in the main text). However, while examining the tail-class samples (Fig. 8), we find that the noise-augmented baseline suffers from mode collapse and class confusion, whereas NoisyTwins generates diverse and class-consistent images. Hence, the mean iFID based on Inception-V3 does not align well with the qualitative results. On the contrary, the \(\text{iFID}_{\mathrm{CLIP}}\) value is 41.20 for the noise-augmented model compared to 39.37 for NoisyTwins, which correlates with the human observation that the NoisyTwins model should have a lower FID as it is diverse and class-consistent. Hence, the proposed metric \(\text{iFID}_{\mathrm{CLIP}}\) can be used to reliably evaluate models for class-conditional image generation. 
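For reference, the following is a minimal sketch of how a per-class \(\text{iFID}_{\mathrm{CLIP}}\) could be computed. It assumes precomputed CLIP image embeddings for the real and generated samples of each class, and is an illustration rather than the exact evaluation pipeline used here.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feat_real, feat_fake, eps=1e-6):
    """Frechet distance between Gaussians fitted to two feature sets.

    feat_real, feat_fake: (N, D) arrays of CLIP image embeddings for real
    and generated samples of a single class (e.g., 5k each).
    """
    mu_r, mu_f = feat_real.mean(0), feat_fake.mean(0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_f = np.cov(feat_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if not np.isfinite(covmean).all():  # regularize near-singular covariances
        offset = eps * np.eye(cov_r.shape[0])
        covmean, _ = linalg.sqrtm((cov_r + offset) @ (cov_f + offset), disp=False)
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean.real))

def mean_ifid_clip(real_feats_by_class, fake_feats_by_class):
    """Mean per-class Frechet distance on CLIP features (iFID_CLIP)."""
    return float(np.mean([frechet_distance(real_feats_by_class[k],
                                           fake_feats_by_class[k])
                          for k in real_feats_by_class]))
```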
## Appendix C Experimental Details Figure 8: **Qualitative results and iFID.** We observe that the noise-only baseline suffers from mode collapse and class confusion for tail categories, as shown on the (_left_). Despite this, it is found that the mean iFID based on InceptionV3 shows a smaller value for StyleGAN2-ADA+Noise, but a higher value for the diverse and class-consistent NoisyTwins. Hence, this metric does not align with the qualitative results. On the other hand, the proposed mean iFID\({}_{\mathrm{CLIP}}\) is lower for NoisyTwins, demonstrating its reliability for evaluating GAN models. We run our experiments using PyTorch-StudioGAN [18] as the base framework. For most baseline experiments, we use the standard StyleGAN configurations present in the framework. We use a learning rate of 0.0025 for the discriminator (\(\mathcal{D}\)) and the generator (\(\mathcal{G}\)) networks. We use a batch size of 128 for all our experiments. In addition, following the observations of previous work [40], we apply a delayed Path Length Regularization (PLR) starting at 60k iterations for all our experiments on ImageNet-LT. For NoisyTwins, the most important hyperparameters are \(\lambda\) (regularization strength) and \(\sigma\) (noise variance). We perform a grid search over \(\lambda\) values of {0, 0.001, 0.01, 0.1} and \(\sigma\) values of {0.10, 0.25, 0.50, 0.75}. We provide a detailed list of the optimal hyperparameters used in Table 6. All the models trained on a particular dataset use the same hyperparameters, to maintain fairness in the comparison of models. We summarize all the hyperparameters used for the respective datasets in Table 6. For our experiments on few-shot datasets with the SotA Transitional-cGAN, we use the authors' official code implementation available on GitHub 1. We first use the same configuration specified by the authors to evaluate on the ImageNet Carnivores and AnimalFaces datasets. To integrate NoisyTwins, we generate the noise augmentations by augmenting the class embeddings and then apply the NoisyTwins regularization in \(\mathcal{W}\) space. We use the same hyperparameter settings used by the authors, and NoisyTwins with \(\lambda=0.001\) and \(\gamma=0.05\). \begin{table} \begin{tabular}{r|c|c|c||c|c} \hline \hline & \multicolumn{3}{c||}{Long-Tail Datasets} & \multicolumn{2}{c}{Few-Shot Datasets} \\ \hline & iNaturalist-2019 & ImageNet-LT & CIFAR10-LT (\(\rho\)=100) & ImageNet Carnivores & AnimalFaces \\ \hline Resolution & 64 & 64 & 32 & 64 & 64 \\ Augmentation & ADA & ADA & DiffAug & ADA & ADA \\ \hline \multicolumn{6}{c}{Regularizers} \\ \hline Effective Samples \(\alpha\) & 0 & 0 & 0.99 & 0 & 0 \\ Noise Scaling \(\sigma\) & 0.1 & 0.25 & 0.75 & 0.5 & 0.5 \\ NoisyTwins Start Iter. & 25k & 60k & 0 & 0 & 0 \\ NoisyTwins Weights (\(\lambda\), \(\gamma\)) & 0.001, 0.005 & 0.001, 0.005 & 0.001, 0.05 & 0.001, 0.05 & 0.001, 0.05 \\ LeCam Reg Weight & 0.01 & 0 & 0 & 0 & 0 \\ R1 Regularization \(\gamma_{R1}\) & 0.2048 & 0.2048 & 0.01 & 0.01 & 0.01 \\ PLR Start Iter. & 0 & 60k & No PLR & 0 & 0 \\ \hline \multicolumn{6}{c}{StyleGAN} \\ \hline Mapping Net Layers & 2 & 8 & 8 & 2 & 2 \\ \(\mathcal{D}\) Backbone & ResNet & ResNet & Orig & ResNet & ResNet \\ Style Mixing & 0.9 & 0.9 & 0 & 0 & 0 \\ \(\mathcal{G}\) EMA Rampup & None & None & 0.05 & 0.05 & 0.05 \\ \(\mathcal{G}\) EMA Kimg & 20 & 20 & 500 & 500 & 500 \\ MiniBatch Group & 8 & 8 & 32 & 32 & 32 \\ \hline \hline \end{tabular} \end{table} Table 6: **Hyperparameter configurations used for experiments.** We provide a detailed list of the hyperparameters used for the experiments across datasets for NoisyTwins on StyleGANs. 
Figure 9: **iFID comparison on the iNaturalist 2019 dataset.** We provide examples of classes where the quality of images generated by StyleGAN2-ADA is worse, suffering either from mode collapse or from artifacts in generation. Yet the iFID based on InceptionV3 ranks it higher in terms of quality, which does not align with human judgement. On the other hand, the proposed iFID\({}_{\mathrm{CLIP}}\) is able to rank the models correctly and gives a lower value to the diverse generations from NoisyTwins. ### Statistical Significance of the Experiments We report the mean and standard deviation over three evaluation runs for all baselines on CIFAR10-LT (Table 7). It can be observed that most of the metrics we report have a low standard deviation and are close to the mean value across runs. As we find the standard deviations to be low across the evaluated metrics, and the process of evaluating iFID is expensive, we do not explicitly report them on the large multi-class datasets. ## Appendix D Additional Details of Analysis We perform our ablation experiments on CIFAR10-LT using the same configuration as mentioned in Table 6. We provide ablation experiments on the standard deviation of the noise (\(\sigma\)) and the strength of the regularization loss (\(\lambda\)) (Sec. 6), as we observe that they influence the performance of the system most. We further provide an ablation on the parameter \(\gamma\) in Fig. 10, which controls the relative importance between the invariance enforcement and decorrelation enhancement terms in Eq. 6 of the main text. We find that performance remains almost the same while varying \(\gamma\) from 0.005 to 0.1, with the optimal value occurring around 0.05 for CIFAR10-LT. Hence, the model is robust to \(\gamma\). We further analyze our method for a range of imbalance ratios (i.e., \(\rho\), the ratio of the most frequent to the least frequent class) in the class distribution. We present results for CIFAR10-LT with imbalance factor (\(\rho\)) values of 50, 100, and 200 in Table 8. Our method can prevent mode collapse and improves the baseline FID significantly in all cases. Also note that the baseline gets more unstable (high FID) as the imbalance ratio increases, which shows the necessity of using NoisyTwins, as it stabilizes the training even when large imbalances are present in the dataset (Fig. 11). \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{CIFAR10-LT (\(\rho\)=100)} \\ \hline Method & FID(\(\downarrow\)) & FID\({}_{\mathrm{CLIP}}\)(\(\downarrow\)) & iFID\({}_{\mathrm{CLIP}}\)(\(\downarrow\)) & Precision(\(\uparrow\)) & Recall(\(\uparrow\)) \\ \hline SG2+DiffAug [55] & 31.72\(\pm_{0.16}\) & 6.24\(\pm_{0.02}\) & 11.63\(\pm_{0.03}\) & 0.63\(\pm_{0.00}\) & 0.35\(\pm_{0.00}\) \\ SG2+D2D-CE [17] & 20.08\(\pm_{0.15}\) & 4.75\(\pm_{0.04}\) & 11.35\(\pm_{0.01}\) & **0.73\(\pm_{0.00}\)** & 0.43\(\pm_{0.00}\) \\ gSR [36] & 22.50\(\pm_{0.29}\) & 5.55\(\pm_{0.01}\) & 9.94\(\pm_{0.00}\) & 0.70\(\pm_{0.00}\) & 0.28\(\pm_{0.01}\) \\ \hline SG2+DiffAug+Noise (Ours) & 28.85\(\pm_{0.18}\) & 5.29\(\pm_{0.02}\) & 10.64\(\pm_{0.01}\) & 0.71\(\pm_{0.00}\) & 0.38\(\pm_{0.00}\) \\ + NoisyTwins (Ours) & **17.72\(\pm_{0.08}\)** & **3.56\(\pm_{0.01}\)** & **7.27\(\pm_{0.02}\)** & 0.69\(\pm_{0.01}\) & **0.52\(\pm_{0.01}\)** \\ \hline \hline \end{tabular} \end{table} Table 7: **Statistical analysis for CIFAR10-LT.** This table provides the mean and one standard deviation of the metrics for all methods on CIFAR10-LT, over three independent evaluation runs generating 50k samples across random seeds. 
Figure 11: **Comparison of FID curves for CIFAR10-LT (\(\rho\)=100).** NoisyTwins leads to stable training, with FID decreasing over iterations. Figure 10: **Ablation on \(\gamma\):** Quantitative comparison on CIFAR10-LT for the strength of the hyperparameter (\(\gamma\)) in the NoisyTwins loss function. **Information Maximization in \(\mathbf{w}\):** NoisyTwins works on the principle of information maximization (IM), same as Barlow Twins [54] (App. Sec. A), where we maximize the information \(\mathcal{I}\) between the mapping \(\mathbf{w}\) and the inputs \([\mathbf{c},\mathbf{z}]\) to the GAN. This ensures that variations in \(\mathbf{z}\) are preserved when transformed to the \(\mathbf{w}\) vector in \(\mathcal{W}\)-space. To verify this hypothesis, we set \(\gamma=0\) for the IM (cross-correlation) term in Eq. 6, which leads to mode collapse (FID 80). ## Appendix E Additional Results Fig. 12 provides the class-wise comparison of the proposed \(\mathrm{iFID}_{\mathrm{CLIP}}\) for the baseline and after adding NoisyTwins. NoisyTwins produces a better \(\mathrm{iFID}_{\mathrm{CLIP}}\) for all classes, and hence does not degrade performance on head classes while improving performance on tail classes. We now provide additional qualitative results for the models. Similar to ImageNet-LT, we also provide a full-scale comparison of images from different methods in Fig. 15 for iNaturalist-2019. In addition to the images from the tail classes, we also show generations from the head and middle classes. In Fig. 15, it is clearly shown that NoisyTwins obtains high-quality and diverse samples compared to the baseline. We find that the StyleGAN2-ADA baseline produces similar images within each tail class, which confirms the occurrence of class-wise mode collapse even on large datasets. Further, it can be seen that the regularizer-based method (gSR) is unable to capture the identity of the real class and suffers from the issue of class confusion (as also seen in the t-SNE of Fig. 3 of the main text). Our method, NoisyTwins, can produce realistic-looking, diverse images even for tail classes, which shows the successful transfer of knowledge from head classes. Training a class-conditioned GAN on long-tailed datasets leads to class confusion when the extent of knowledge transfer is not controlled. NoisyTwins strikes the right balance, allowing knowledge transfer from the head classes to benefit the quality of generation in the tail classes without permitting class confusion. This would not be possible if we trained a GAN independently on the tail classes (\(\sim\)30 images), which shows the practical usefulness of joint training on the complete long-tailed data (i.e., our setup). We showcase qualitative results of generations from the few-shot datasets (i.e., ImageNet Carnivores and AnimalFaces). Figs. 13 and 14 show the results of the SotA few-shot baseline Transitional-cGAN (_left_) and after augmenting it with our proposed NoisyTwins (_right_). Our proposed method, NoisyTwins, can further stabilize the training of Transitional-cGAN and improve the quality and diversity of the generated samples on both the ImageNet Carnivores and AnimalFaces datasets. Figure 14: **Qualitative comparison on the few-shot AnimalFaces dataset.** Figure 13: **Qualitative comparison on the few-shot ImageNet Carnivores dataset.** Figure 15: **Qualitative analysis on iNaturalist 2019 (1010 classes).** Examples of generations from various classes for the evaluated baselines (Table 1). 
The baseline ADA suffers from mode collapse, whereas gSR suffers from class confusion, particularly for tail classes, as seen above on the left. NoisyTwins generates diverse and class-consistent images across all categories. NoisyTwins is also effective at larger resolutions, as demonstrated on the few-shot Animal-Faces (AF) dataset using Transitional-cGAN [41] in Table 9, where we observe a significant improvement in FID for both \(128\times 128\) and \(64\times 64\) resolution data. Further, for the large-scale iNat-19 StyleGAN2-ADA baseline in Tab. 10, we also find that NoisyTwins improves performance. The NoisyTwins method also converges faster: at the intermediate stage of 80k iterations in a full run of 150k iterations, the FID for NoisyTwins is already lower than that of the baseline. As the NoisyTwins method is based on the information maximization principle [54] and generalizes across datasets, we expect it to benefit StyleGAN at other large resolutions too, similar to what is observed in Sauer _et al_. [40].
2310.07136
Exponential Quantum Communication Advantage in Distributed Inference and Learning
Training and inference with large machine learning models that far exceed the memory capacity of individual devices necessitates the design of distributed architectures, forcing one to contend with communication constraints. We present a framework for distributed computation over a quantum network in which data is encoded into specialized quantum states. We prove that for models within this framework, inference and training using gradient descent can be performed with exponentially less communication compared to their classical analogs, and with relatively modest overhead relative to standard gradient-based methods. We show that certain graph neural networks are particularly amenable to implementation within this framework, and moreover present empirical evidence that they perform well on standard benchmarks. To our knowledge, this is the first example of exponential quantum advantage for a generic class of machine learning problems that hold regardless of the data encoding cost. Moreover, we show that models in this class can encode highly nonlinear features of their inputs, and their expressivity increases exponentially with model depth. We also delineate the space of models for which exponential communication advantages hold by showing that they cannot hold for linear classification. Our results can be combined with natural privacy advantages in the communicated quantum states that limit the amount of information that can be extracted from them about the data and model parameters. Taken as a whole, these findings form a promising foundation for distributed machine learning over quantum networks.
Dar Gilboa, Hagay Michaeli, Daniel Soudry, Jarrod R. McClean
2023-10-11T02:19:50Z
http://arxiv.org/abs/2310.07136v3
# Exponential Quantum Communication Advantage in Distributed Learning ###### Abstract Training and inference with large machine learning models that far exceed the memory capacity of individual devices necessitates the design of distributed architectures, forcing one to contend with communication constraints. We present a framework for distributed computation over a quantum network in which data is encoded into specialized quantum states. We prove that for certain models within this framework, inference and training using gradient descent can be performed with exponentially less communication compared to their classical analogs, and with relatively modest time and space complexity overheads relative to standard gradient-based methods. To our knowledge, this is the first example of exponential quantum advantage for a generic class of machine learning problems with dense classical data that holds regardless of the data encoding cost. Moreover, we show that models in this class can encode highly nonlinear features of their inputs, and their expressivity increases exponentially with model depth. We also find that, interestingly, the communication advantage nearly vanishes for simpler linear classifiers. These results can be combined with natural privacy advantages in the communicated quantum states that limit the amount of information that can be extracted from them about the data and model parameters. Taken as a whole, these findings form a promising foundation for distributed machine learning over quantum networks. ## 1 Introduction As the scale of the datasets and parameterized models used to perform computation over data continues to grow [43, 53], distributing workloads across multiple devices becomes essential for enabling progress. The choice of architecture for large-scale training and inference must not only make the best use of computational and memory resources, but also contend with the fact that communication may become a bottleneck [85]. When using modern optical interconnects, classical computers exchange bits represented by light. This, however, does not fully utilize the potential of the physical substrate; given suitable computational capabilities and algorithms, the _quantum_ nature of light can be harnessed as a powerful communication resource. Here we show that for a broad class of parameterized models, if quantum bits (_qubits_) are communicated instead of classical bits, an exponential reduction in the communication required to perform inference and gradient-based training can be achieved. This protocol additionally guarantees improved privacy of both the user data and model parameters through natural features of quantum mechanics, without the need for additional cryptographic or privacy protocols. To our knowledge, this is the first example of generic, exponential quantum advantage on problems that occur naturally in the training and deployment of large machine learning models. These types of communication advantages help scope the future roles and interplay between quantum and classical communication for distributed machine learning. Quantum computers promise dramatic speedups across a number of computational tasks, with perhaps the most prominent example being the ability to revolutionize our understanding of nature by enabling the simulation of quantum systems, owing to the natural similarity between quantum computers and the world [34, 63]. 
However, much of the data that one would like to compute with in practice seems to come from an emergent classical world rather than directly exhibiting quantum properties. While there are some well-known examples of exponential quantum speedups for classical problems, most famously factoring [93] and related hidden subgroup problems [28], these tend to be isolated and at times difficult to relate to practical applications. In addition, even though significant speedups are known for certain ubiquitous problems in machine learning such as matrix inversion [41] and principal component analysis [64], the advantage is often lost when including the cost of loading classical data into the quantum computer or of reading out the result into classical memory [1]. In applications where an efficient data access model avoids the above pitfalls, the complexity of quantum algorithms tends to depend on condition numbers of matrices which scale with system size in a way that reduces or even eliminates any quantum advantage [72]. It is worth noting that much of the discussion about the impact of quantum technology on machine learning has focused on computational advantage. However, quantum resources are not only useful in reducing computational complexity; they can also provide an advantage in communication complexity, enabling exponential reductions in communication for some problems [14, 89]. Inspired by these results, we study a setting where quantum advantage in communication is possible across a wide class of machine learning models. This advantage holds without requiring any sparsity assumptions or elaborate data access models such as QRAM [36]. We focus on compositional distributed learning, known as _pipelining_ [15, 47]. While there are a number of strategies for distributing machine learning workloads that are influenced by the requirements of different applications and hardware constraints [52, 99], splitting up a computational graph in a compositional fashion (Figure 1) is a common approach. We describe distributed, parameterized quantum circuits that can be used to perform inference over data when distributed in this way, and can be trained using gradient methods. The ideas we present can also be used to optimize models that use certain forms of data parallelism (Appendix C). In principle, such circuits could be implemented on quantum computers that are able to communicate quantum states. For the general class of distributed, parameterized quantum circuits that we study, we show the following: * Even for simple circuits in this class, there is an exponential quantum advantage in communication for the problem of estimating the loss and the gradients of the loss with respect to the parameters (Section 3). This additionally implies a privacy advantage from Holevo's bound (Section 6). We also show that this advantage is not a trivial consequence of the data encoding used, since it does not hold for certain problems like linear classification (Appendix E). * For a subclass of these circuits, there is an exponential advantage in communication for the entire training process, and not just for a single round of gradient estimation. This subclass includes circuits for fine-tuning using pre-trained features. The proof is based on convergence rates for stochastic gradient descent under convexity assumptions (Section 4). 
* The ability to interleave multiple unitaries encoding nonlinear features of data enables expressivity to grow exponentially with depth, and universal function approximation in some settings. This implies that these models are highly expressive, in contrast to popular belief about linear restrictions in quantum neural networks (Section 5). ## 2 Preliminaries ### Large-scale learning problems and distributed computation Pipelining is a commonly used method of distributing a machine learning workload, in which different layers of a deep model are allocated distinct hardware resources [47, 77]. Training and inference then require communication of features between nodes. Pipelining enables flexible changes to the model architecture in a task-dependent manner, since subsets of a large model can be combined in an adaptive fashion to solve many downstream tasks. Additionally, pipelining allows sparse activation of the subset of a model required to solve a task, and facilitates better use of heterogeneous compute resources since it does not require storing identical copies of a large model. The potential for large models to be easily fine-tuned to solve multiple tasks is well-known [18, 22], and pipelined architectures which facilitate this are the norm in the latest generation of large language models [15, 87]. Data parallelism, in contrast, involves storing multiple copies of the model on different nodes, training each on a subset of the data and exchanging information to synchronize parameter updates. In practice, different parallelization strategies are combined in order to exploit trade-offs between latency and throughput in a task-dependent fashion [52, 85, 99]. Distributed quantum models were considered recently in [82], but the potential for quantum advantage in communication in these settings was not discussed. ### Communication complexity Communication complexity [56, 86, 100] is the study of distributed computational problems using a cost model that focuses on the communication required between players rather than the time or computational complexity. It is naturally related to the study of the space complexity of streaming algorithms [91]. The key object of study in this area is the tree induced by a communication protocol, whose nodes enumerate all possible communication histories and whose leaves correspond to the outputs of the protocol. The product structure induced on the leaves of this tree as a function of the inputs allows one to bound the depth of the tree from below, which gives an unconditional lower bound on the communication complexity. The power of replacing classical bits of communication with qubits has been the subject of extensive study [20, 24, 27]. For certain problems such as Hidden Matching [14] and a variant of classification with deep linear models [89] an exponential quantum communication advantage holds, while for other canonical problems such as Disjointness only a polynomial advantage is possible [90]. Exponential advantage was also recently shown for the problem of sampling from a distribution defined by the solution to a linear regression problem [73]. At a glance, the development of networked quantum computers may seem much more challenging than the already herculean task of building a fault tolerant quantum computer. 
However, for some quantum network architectures, the existence of a long-lasting fault tolerant quantum memory acting as a quantum repeater may be the enabling component that lifts low-rate shared entanglement to a fully functional quantum network [76], and hence the timelines for small fault tolerant quantum computers and quantum networks may be more coincident than it might seem at first. Figure 1: _Left:_ Distributed, compositional computation. Dashed lines separate devices with computational and storage resources. The circular nodes represent parameterized functions that are allocated distinct hardware resources and are spatially separated, while the square nodes represent data (yellow) and outputs corresponding to different tasks (green). The vertical axis represents time. This framework of hardware allocation enables flexible modification of the model structure in a task-dependent fashion. _Right:_ Computation of gradient estimators \(g_{\ell}\) at different layers of a model distributed across multiple devices by pipelining. Computing forward features \(\mu_{\ell}\) and backwards features \(\nu_{\ell}\) (also known as computing a forward or backward pass) requires a large amount of classical communication (grey) but an exponentially smaller amount of quantum communication (yellow). \(\mathcal{L}\) is the classical loss function, and \(\mathcal{P}_{0}\) an operator whose expectation value with respect to a quantum model gives the analogous loss function in the quantum case. As such, it is well motivated to consider potential communication advantages alongside computational advantages when talking about the applications of fault tolerant quantum computers. In Appendix G we briefly survey approaches to implementing quantum communication in practice, and the associated challenges. In addition, while we largely restrict ourselves here to discussions of communication advantages, and most other studies focus on purely computational advantages, there may be interesting advantages at their intersection. For example, it is known that no quantum state built from a simple (or polynomial complexity) circuit can confer an exponential communication advantage; however, states made from simple circuits can be made computationally difficult to distinguish [50]. Hence the use of quantum pre-computation [48] and communication may confer advantages even when traditional computational and communication cost models do not admit such advantages due to their restriction in scope. ## 3 Distributed learning with quantum resources In this work we focus on parameterized models that are representative of the most common models used and studied today in quantum machine learning, sometimes referred to as quantum neural networks [25, 33, 69, 92]. We will use the standard Dirac notation of quantum mechanics throughout. A summary of relevant notation and the fundamentals of quantum mechanics is provided in Appendix A. We define a class of models with parameters \(\Theta\), taking an input \(x\) which is a tensor of size \(N\). The models take the following general form: **Definition 3.1**.: \(\{A_{\ell}(\theta^{A}_{\ell},x)\},\{B_{\ell}(\theta^{B}_{\ell},x)\}\) _for \(\ell\in\{1,\ldots,L\}\) are each a set of unitary matrices of size \(N^{\prime}\times N^{\prime}\) for some \(N^{\prime}\) such that \(\log N^{\prime}=O(\log N)\)1. The \(\theta^{A}_{\ell},\theta^{B}_{\ell}\) are vectors of \(P\) parameters each. 
For every \(\ell,i\), we assume that \(\frac{\partial A_{\ell}}{\partial\theta^{A}_{\ell i}}\) is anti-Hermitian up to a real scaling factor and has at most two eigenvalues, and similarly for \(B_{\ell}\)._ Footnote 1: We will consider some cases where \(N^{\prime}=N\), but will find it helpful at times to encode nonlinear features of \(x\) in these unitaries, in which case we may have \(N^{\prime}>N\). _The model we consider is defined by_ \[\left|\varphi(\Theta,x)\right\rangle\equiv\left(\prod_{\ell=L}^{1}A_{\ell}(\theta^{A}_{\ell},x)B_{\ell}(\theta^{B}_{\ell},x)\right)\left|\psi(x)\right\rangle, \tag{3.1}\] _where \(\psi(x)\) is a fixed state of \(\log N^{\prime}\) qubits._ _The loss function is given by_ \[\mathcal{L}(\Theta,x)\equiv\left\langle\varphi(\Theta,x)\right|\mathcal{P}_{0}\left|\varphi(\Theta,x)\right\rangle, \tag{3.2}\] _where \(\mathcal{P}_{0}\) is a Pauli matrix that acts on the first qubit._ In standard linear algebra notation, the output of the model is a unit-norm \(N^{\prime}\)-dimensional complex vector \(\varphi_{L}\), defined recursively by \[\varphi_{0}=\psi(x),\quad\varphi_{\ell}=A_{\ell}(\theta^{A}_{\ell},x)B_{\ell}(\theta^{B}_{\ell},x)\varphi_{\ell-1}, \tag{3.3}\] where the entries of \(\varphi_{L}\) are represented by the amplitudes of a quantum state. The loss takes the form \(\mathcal{L}(\Theta,x)=\left(\varphi_{L}^{*}\right)^{T}\mathcal{P}_{0}\varphi_{L}\), where \({}^{*}\) indicates the entrywise complex conjugate, and this definition includes the standard \(L^{2}\) loss as a special case. Subsequently we omit the dependence on \(x\) and \(\Theta\) (or subsets of it) to lighten notation, and consider special cases where only subsets of the unitaries depend on \(x\), or where the unitaries take a particular form and may not be parameterized. Denote by \(\nabla_{A(B)}\mathcal{L}\) the entries of the gradient vector that correspond to the parameters of \(\{A_{\ell}\}(\{B_{\ell}\})\). While seemingly stringent, the condition on the derivatives is in fact satisfied by many of the most common quantum neural network architectures [25, 31, 92]. This condition is satisfied for example if \[A_{\ell}=\prod_{j=1}^{P}e^{i\beta_{\ell j}^{A}\theta_{\ell j}^{A}\mathcal{P}_{\ell j}^{A}} \tag{3.4}\] and the \(\mathcal{P}_{\ell j}^{A}\) are both unitary and Hermitian (e.g. Pauli matrices), while the \(\beta_{\ell j}^{A}\) are scalars. Such models, or parameterized quantum circuits, are naturally amenable to implementation on quantum devices, and for \(P=O(N^{2})\) any unitary over \(\log N^{\prime}\) qubits can be written in this form. In the special case where \(x\) is a unit-norm \(N\)-dimensional vector, a simple choice of \(\ket{\psi(x)}\) is the amplitude encoding of \(x\), given by \[\ket{\psi(x)}=\ket{x}=\sum_{i=0}^{N-1}x_{i}\ket{i}. \tag{3.5}\] However, despite its exponential compactness in representing the data, a naive implementation of this simplest choice is restricted to representing quadratic features of the data, which can offer no substantial quantum advantage in a learning task [46], so the choice of data encoding is critical to the power of a model. The interesting parameter regime for classical data and models is one where \(N,P\) are large, while \(L\) is relatively modest. For general unitaries \(P=O(N^{2})\), which matches the scaling of the number of parameters in fully-connected networks. When the input tensor \(x\) is a batch of datapoints, \(N\) is equivalent to the product of the batch size and the input dimension. 
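As a small-scale classical simulation of the model above (an illustrative toy we add here, not the distributed protocol itself), the following sketch builds unitaries of the form of eq. (3.4) from random Pauli strings, amplitude-encodes \(x\) as in eq. (3.5), and evaluates the loss of eq. (3.2) with \(\mathcal{P}_{0}\) taken to be Pauli \(Z\) on the first qubit.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
PAULIS = [I2, X, Y, Z]

def random_pauli_string(n_qubits, rng):
    """Tensor product of single-qubit Paulis: Hermitian and unitary."""
    return reduce(np.kron, [PAULIS[k] for k in rng.integers(0, 4, n_qubits)])

def layer_unitary(thetas, betas, pauli_strings):
    """Eq. (3.4): product of rotations exp(i * beta * theta * P)."""
    U = np.eye(pauli_strings[0].shape[0], dtype=complex)
    for theta, beta, P in zip(thetas, betas, pauli_strings):
        a = beta * theta
        # exp(i a P) = cos(a) I + i sin(a) P, since P^2 = I.
        U = (np.cos(a) * np.eye(P.shape[0]) + 1j * np.sin(a) * P) @ U
    return U

def loss(x, layers):
    """Eq. (3.2): <phi| P0 |phi>, with |psi(x)> the amplitude encoding of x."""
    phi = x / np.linalg.norm(x)                 # eq. (3.5): unit-norm amplitudes
    for thetas, betas, paulis in layers:        # alternating A_l, B_l unitaries
        phi = layer_unitary(thetas, betas, paulis) @ phi
    n_qubits = int(np.log2(phi.size))
    P0 = reduce(np.kron, [Z] + [I2] * (n_qubits - 1))  # Pauli Z on first qubit
    return float(np.real(np.conj(phi) @ (P0 @ phi)))

rng = np.random.default_rng(0)
n_qubits, n_params, n_layers = 3, 4, 2          # 3 qubits -> N = 8 amplitudes
x = rng.normal(size=2 ** n_qubits)
layers = [(rng.normal(size=n_params), rng.normal(size=n_params),
           [random_pauli_string(n_qubits, rng) for _ in range(n_params)])
          for _ in range(2 * n_layers)]
print(loss(x, layers))
```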
The model in Definition 3.1 can be used to define distributed inference and learning problems by dividing the input \(x\) and the parameterized unitaries between two players, Alice and Bob. We define their respective inputs as follows: \[\begin{split}\text{Alice}:&\ket{\psi(x)},\{A_{\ell}\},\\ \text{Bob}:&\{B_{\ell}\}.\end{split} \tag{3.6}\] The problems of interest require that Alice and Bob compute certain joint functions of their inputs. As a trivial base case, it is clear that in a communication cost model, all problems can be solved with communication cost at most the size of the inputs times the number of parties, by a protocol in which each party sends its inputs to all others. We will be interested in cases where one can do much better by taking advantage of quantum communication. Given the inputs eq. (3.6), we will be interested chiefly in the two problems specified below. **Problem 1** (Distributed Inference).: _Alice and Bob each compute an estimate of \(\bra{\varphi}\mathcal{P}_{0}\ket{\varphi}\) up to additive error \(\varepsilon\)._ The straightforward algorithm for this problem, illustrated in fig. 2, requires \(L\) rounds of communication. The other problem we consider is the following: **Problem 2** (Distributed Gradient Estimation).: _Alice computes an estimate of \(\nabla_{A}\bra{\varphi}\mathcal{P}_{0}\ket{\varphi}\), while Bob computes an estimate of \(\nabla_{B}\bra{\varphi}\mathcal{P}_{0}\ket{\varphi}\), up to additive error \(\varepsilon\) in \(L^{\infty}\)._ Figure 2: Distributed quantum circuit implementing \(\mathcal{L}\) for \(L=2\). Both \(\mathcal{L}\) and its gradients with respect to the parameters of the unitaries can be estimated with total communication that is polylogarithmic in the size of the input data \(N\) and the number of trainable parameters per unitary \(P\). ### Communication complexity of inference and gradient estimation We show that inference and gradient estimation are achievable with a logarithmic amount of quantum communication, which represents an exponential improvement over the classical cost in some cases: **Lemma 1**.: _Problem 1 can be solved by communicating \(O(\log N)\) qubits over \(O(L/\varepsilon^{2})\) rounds._ Proof: Appendix B. **Lemma 2**.: _Problem 2 can be solved with probability greater than \(1-\delta\) by communicating \(\tilde{O}(\log N(\log P)^{2}\log(L/\delta)/\varepsilon^{4})\) qubits over \(O(L^{2})\) rounds. The time and space complexity of the algorithm is \(\sqrt{P}L\operatorname{poly}(N,\log P,\varepsilon^{-1},\log(1/\delta))\)._ Proof: Appendix B. This upper bound is obtained by simply noting that the problem of gradient estimation at every layer can be reduced to a shadow tomography problem [6]: **Theorem 1** (Shadow Tomography [3] solved with Threshold Search [12]).: _For an unknown state \(\ket{\psi}\) of \(\log N\) qubits, given \(K\) known two-outcome measurements \(E_{i}\), there is an explicit algorithm that takes \(\ket{\psi}^{\otimes k}\) as input, where \(k=\tilde{O}(\log^{2}K\log N\log(1/\delta)/\varepsilon^{4})\), and produces estimates of \(\bra{\psi}E_{i}\ket{\psi}\) for all \(i\) up to additive error \(\varepsilon\) with probability greater than \(1-\delta\). 
\(\tilde{O}\) hides subdominant polylog factors._ Using immediate reductions from known problems in communication complexity, we can show that the amount of classical communication required to solve these problems is polynomial in the size of the input, and additionally give a lower bound on the number of rounds of communication required by any quantum or classical algorithm: **Lemma 3**.: 1. _The classical communication complexity of Problem_ 1 _and Problem_ 2 _is_ \(\Omega(\max(\sqrt{N},L))\)_._ 2. _Any algorithm (quantum or classical) for Problem_ 1 _or Problem_ 2 _requires either_ \(\Omega(L)\) _rounds of communication or_ \(\Omega(N/L^{4})\) _qubits (or bits) of communication._ Proof: Appendix B. The implication of the second result in Lemma 3 is that \(\Omega(L)\) rounds of communication are necessary in order to obtain an exponential communication advantage for small \(L\), since otherwise the number of qubits of communication required can scale linearly with \(N\). Combining Lemma 2 and Lemma 3, in the regime where \(L=O(\operatorname{polylog}(N))\), which is relevant for classical machine learning models, we obtain an exponential advantage in communication complexity for both inference and gradient estimation. The required overhead in terms of time and space is only polynomial when compared to the straightforward classical algorithms for these problems. The distribution of the model as in eq. (3.6) is an example of pipelining. Data parallelism is another common approach to distributed machine learning in which subsets of the data are distributed to identical copies of the model. In Appendix C we show that it can also be implemented using quantum circuits, which can then be trained using gradient descent requiring quantum communication that is logarithmic in the number of parameters and input size. Quantum advantage is possible in these problems because there is a bound on the complexity of the final output, whether it be correlated elements of the gradient up to some finite error or the low-dimensional output of a model. This might lead one to believe that whenever the output takes such a form, encoding the data in the amplitudes of a quantum state will trivially give an exponential advantage in communication complexity. We show however that the situation is slightly more nuanced, by considering the problem of inference with a linear model: **Lemma 4**.: _For the problem of distributed linear classification, there can be no exponential advantage in using quantum communication in place of classical communication._ The precise statement and proof of this result are presented in Appendix E. This result also highlights that worst case lower bounds such as Lemma 3 may not hold for circuits with certain low-dimensional or other simplifying structure. ## 4 Exponential advantages in end-to-end training So far we have discussed the problems of inference and estimating a single gradient vector. It is natural to also consider when these or other gradient estimators can be used to efficiently solve an optimization problem (i.e. when the entire training process is considered rather than a single iteration). Applying the gradient estimation algorithm detailed in Lemma 2 iteratively gives a distributed stochastic gradient descent algorithm which we detail in Algorithm 2, yet one may be concerned that a choice of \(\varepsilon=\Omega(1/\operatorname{polylog}(N))\), which is needed to retain the advantage in communication complexity, will preclude efficient convergence.
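To make this concern concrete, here is a toy classical experiment, not the paper's Algorithm 2: gradient descent on a convex quadratic in which each gradient entry is known only up to additive error \(\varepsilon\), mimicking the \(L^{\infty}\) accuracy delivered by shadow tomography. The suboptimality decays until it hits a small floor governed by \(\varepsilon\), which is tolerable precisely when moderate \(\varepsilon\) suffices. All names and parameter values are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
P, eps, eta, T = 50, 0.05, 0.1, 500
theta_star = rng.normal(size=P)
theta = np.zeros(P)

def noisy_grad(theta):
    # Exact gradient of L(theta) = 0.5 * ||theta - theta_star||^2,
    # corrupted entrywise by at most eps.
    return (theta - theta_star) + rng.uniform(-eps, eps, size=P)

for _ in range(T):
    theta -= eta * noisy_grad(theta)

# The suboptimality stalls at a floor set by eps, not at zero.
print(0.5 * np.sum((theta - theta_star) ** 2))
```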
Here we present a simpler algorithm that requires a single quantum measurement per iteration, and can provably solve certain convex problems efficiently, as well as an application of shadow tomography to fine-tuning where convergence can be guaranteed, again with only logarithmic communication cost. In both cases, there is an exponential advantage in communication even when considering the entire training process. ### "Smooth" circuits Consider the case where \(A_{\ell}\) are products of rotations for all \(\ell\), namely \[A_{\ell}=\prod_{j=1}^{P}\!\!e^{-\frac{1}{2}i\beta_{\ell j}^{A}\theta_{\ell j}^ {A}\mathcal{P}_{\ell j}^{A}}, \tag{4.1}\] where \(\mathcal{P}_{\ell j}^{A}\) are Pauli matrices acting on all qubits, and similarly for \(B_{\ell}\). These can also be interspersed with other non-trainable unitaries. This constitutes a slight generalization of the setting considered in [42], and the algorithm we present is essentially a distributed version of theirs. Denote by \(\beta\) a \(2PL\)-dimensional vector with elements \(\beta_{\ell j}^{Q}\) where \(Q\in\{A,B\}\)2. The quantity \(\left\|\beta\right\|_{1}\) is the total evolution time if we interpret the state \(\left|\varphi\right\rangle\) as the result of a sequence of Hamiltonians applied to the initial state \(\left|\psi(x)\right\rangle\). Footnote 2: [42] actually consider a related quantity which has smaller norm in cases where multiple gradient measurements commute, leading to even better rates. In Appendix D.1 we describe an algorithm that converges to the neighborhood of a minimum, or achieves \(\mathbb{E}\mathcal{L}(\Theta)-\mathcal{L}(\Theta^{\star})\leq\varepsilon_{0}\), for a convex \(\mathcal{L}\) after \[\frac{2\left\|\Theta^{(0)}-\Theta^{\star}\right\|_{2}^{2}\left\|\beta\right\| _{1}^{2}}{\varepsilon_{0}^{2}} \tag{4.2}\] iterations, where \(\Theta^{\star}\) are the parameter values at the minimum of \(\mathcal{L}\). The expectation is with respect to the randomness of quantum measurement and additional internal randomness of the algorithm. The algorithm is based on classically sampling a single coordinate to update at every iteration, and computing an unbiased estimator of the gradient with a single measurement. It can thus be seen as a form of probabilistic coordinate descent. This implies an exponential advantage in communication for the entire training process as long as \(\left\|\Theta^{(0)}-\Theta^{\star}\right\|_{2}^{2}\left\|\beta\right\|_{1}^{2} =\text{polylog}(N)\). Such circuits either have a small number of trainable parameters (\(P=O(\text{polylog}(N))\)), depend weakly on each parameter (e.g. \(\beta_{\ell j}^{Q}=O(1/P)\) for arbitrary \(P\)), or have structure that allows initial parameter guesses whose quality diminishes quite slowly with system size. Nevertheless, over a convex region the loss can rapidly change by an \(O(1)\) amount. One may also be concerned that in the setting \(\left\|\Theta^{(0)}-\Theta^{\star}\right\|_{2}^{2}\left\|\beta\right\|_{1}^{2} =\text{polylog}(N)\) only a logarithmic number of parameters is updated during the entire training process, and so the total effect of the training process may be negligible. It is important to note however that each such sparse update depends on the structure of the entire gradient vector, as seen in the sampling step sketched below.
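As an illustration only, here is a classical caricature of the iteration loop, under the assumption of a synthetic loss \(\mathcal{L}(\Theta)=\sum_{j}\beta_{j}(1-\cos(\theta_{j}-\theta_{j}^{\star}))\) whose gradient satisfies \(|\partial_{j}\mathcal{L}|\leq\beta_{j}\); the single quantum measurement is replaced by a classical \(\pm 1\) coin with matching mean. The loss and all names are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
P, eta, T = 40, 0.005, 200_000
beta = rng.uniform(0.5, 1.5, size=P)        # per-coordinate weights beta_j
probs = beta / beta.sum()                   # sample j with prob |beta_j| / ||beta||_1
theta_star = rng.uniform(-np.pi, np.pi, size=P)
theta = np.zeros(P)

def grad_j(j):
    # Gradient of the toy loss L = sum_j beta_j * (1 - cos(theta_j - theta*_j)).
    return beta[j] * np.sin(theta[j] - theta_star[j])   # |grad_j| <= beta_j

def single_measurement(j):
    # A +/-1 coin with mean grad_j / beta_j, standing in for one quantum measurement.
    p_plus = 0.5 * (1.0 + grad_j(j) / beta[j])
    return 1.0 if rng.random() < p_plus else -1.0

print(np.sum(beta * (1 - np.cos(theta - theta_star))))  # initial loss
for _ in range(T):
    j = rng.choice(P, p=probs)                    # classical sampling step
    g_hat = beta.sum() * single_measurement(j)    # unbiased: E[g_hat * e_j] = grad
    theta[j] -= eta * g_hat                       # sparse single-coordinate update
print(np.sum(beta * (1 - np.cos(theta - theta_star))))  # far below the initial loss
```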
Indeed, the probability \(\left|\beta_{\ell j}^{Q}\right|/\left\|\beta\right\|_{1}\) with which a given coordinate is updated is proportional to an upper bound on the magnitude of the corresponding element of the gradient. Remarkably, the time complexity of a single iteration of this algorithm is proportional to a forward pass, and so matches the scaling of classical backpropagation. This is in contrast to the polynomial overhead of shadow tomography (Theorem 1). Additionally, it requires a single measurement per iteration, without any of the additional factors in the sample complexity of shadow tomography. ### Fine-tuning the last layer of a model Consider a model given by eq. (3.1) where only the parameters of \(A_{L}\) are trained, and the rest are frozen, and denote this model by \(\left|\varphi_{f}\right\rangle\). The circuit up to that unitary could include multiple data-dependent unitaries that represent complex features in the data. Training only the final layer in this manner is a common method of fine-tuning a pre-trained model [45]. If we now define \[\tilde{E}_{Li}^{A}=\left|1\right\rangle\left\langle 0\right|\otimes A_{L}^{ \dagger}\mathcal{P}_{0}\frac{\partial A_{L}}{\partial\theta_{Li}^{A}}+\left|0 \right\rangle\left\langle 1\right|\otimes\left(\frac{\partial A_{L}}{ \partial\theta_{Li}^{A}}\right)^{\dagger}\mathcal{P}_{0}A_{L}, \tag{4.3}\] the expectation value of \(\tilde{E}_{Li}^{A}\) using the state \(\left|+\right\rangle\left|\mu_{L}^{A}\right\rangle\) gives \(\frac{\partial\mathcal{L}}{\partial\theta_{Li}^{A}}\). Here \[\left|\mu_{L}^{A}\right\rangle=B_{L}(x)\prod_{k=L-1}^{1}A_{k}(x)B_{k}(x)\left| \psi(x)\right\rangle \tag{4.4}\] is the forward feature computed by Alice at layer \(L\) with the parameters of all the other unitaries frozen (hence the dependence on them is dropped). Since the observables in the shadow tomography problem can be chosen in an online fashion [4, 5, 12], and adaptively based on previous measurements, we can simply define a stream of measurement operators by measuring \(P\) observables to estimate the gradients w.r.t. an initial set of parameters, updating these parameters using gradient descent with step size \(\eta\), and defining a new set of observables using the updated parameters. Repeating this for \(T\) iterations gives a total of \(PT\) observables (a complete description of the algorithm is given in Algorithm 3). By the scaling in Lemma 2, the total communication needed is \(\tilde{O}(\log N(\log TP)^{2}\log(1/\delta)/\varepsilon^{4})\) over \(O(L)\) rounds (since only \(O(L)\) rounds are needed to create copies of \(\left|\mu_{L}^{A}\right\rangle\)). This implies an exponential advantage in communication for the entire training process (under the reasonable assumption \(T=O(\mathrm{poly}(N,P))\)), despite the additional stochasticity introduced by the need to perform quantum measurements. For example, assume one has a bound \(\left\|\nabla\mathcal{L}\right\|_{2}^{2}\leq K\). If the circuit is comprised of unitaries with Hermitian derivatives, this holds with \(K=PL\). In that case, denoting by \(g\) the gradient estimator obtained by shadow tomography, we have \[\left\|g\right\|_{2}^{2}\leq\left\|\nabla\mathcal{L}\right\|_{2}^{2}+\left\| \nabla\mathcal{L}-g\right\|_{2}^{2}\leq K+\varepsilon^{2}PL.
\tag{4.5}\] It then follows directly from Lemma 6 that for an appropriately chosen step size, if \(\mathcal{L}\) is convex one can find parameter values \(\overline{\Theta}\) such that \(\mathcal{L}(\overline{\Theta})-\mathcal{L}(\Theta^{\star})\leq\varepsilon_{0}\) using \[T=2\frac{\left\|\Theta^{(0)}-\Theta^{\star}\right\|_{2}^{2}(K+\varepsilon^{ 2}PL)^{2}}{\varepsilon_{0}^{2}} \tag{4.6}\] iterations of gradient descent. Similarly, if \(\mathcal{L}\) is \(\lambda\)-strongly convex then \(T=2(K+\varepsilon^{2}PL)^{2}/\lambda\varepsilon_{0}+1\) iterations are sufficient. In both cases therefore an exponential advantage is achieved for the optimization process as a whole, since in both cases one can implement the circuit that is used to obtain the lower bounds in Lemma 3. ## 5 Expressivity of compositional models It is natural to ask how expressive models of the form of eq. (3.1) can be, given the unitarity constraint of quantum mechanics on the matrices \(\{A_{\ell},B_{\ell}\}\). This is a nuanced question that can depend on the encoding of the data that is chosen and the method of readout. On the one hand, if we pick \(\left|\psi(x)\right\rangle\) as in eq. (3.5) and use \(\{A_{\ell},B_{\ell}\}\) that are independent of \(x\), the resulting state \(\left|\varphi\right\rangle\) will be a linear function of \(x\) and the observables measured will be at most quadratic functions of its entries. On the other hand, one could map bits to qubits 1-to-1 and encode any reversible classical function of the data within the unitary matrices \(\{A_{\ell}(x)\}\) with the use of extra space qubits. However, this negates the possibility of any space or communication advantages (and does not provide any real computational advantage without additional processing). As above, one prefers to work with more generic functions in the amplitude and phase space, allowing for an exponential compression of the data into a quantum state, but one that must be carefully worked with. We investigate the consequences of picking \(\{A_{\ell}(x)\}\) that are _nonlinear_ functions of \(x\), and \(\{B_{\ell}\}\) that are data-independent. This is inspired by a common use case in which Alice holds some data or features of the data, while Bob holds a model that can process these features. Given a scalar variable \(x\), define \(A_{\ell}(x)=\operatorname{diag}(e^{-2\pi i\lambda_{\ell 1}x},\ldots,e^{-2\pi i\lambda_{\ell N^{\prime}}x})\) for \(\ell\in\{1,\ldots,L\}\). We also consider parameterized unitaries \(\{B_{\ell}\}\) that are independent of the \(\{\lambda_{\ell i}\}\) and of the input \(x\), and denote the state obtained by interleaving the two in the manner of eq. (3.1) by \(\left|\varphi(x)\right\rangle\). We next set \(\lambda_{\ell 1}=0\) for all \(\ell\in\{1,\ldots,L\}\) and \(\lambda_{L2}=0\). If we are interested in expressing the frequency \[\Lambda_{\overline{j}}=\sum_{\ell=1}^{L-1}\lambda_{\ell j_{\ell}}, \tag{5.1}\] where \(j_{\ell}\in\{2,\ldots,N^{\prime}\}\), we simply initialize with \(\left|\psi(x)\right\rangle=\left|+\right\rangle_{0}\left|0\right\rangle\) and use \[B_{\ell}=\left|j_{\ell}-1\right\rangle\left\langle j_{\ell-1}-1\right|+\left| j_{\ell-1}-1\right\rangle\left\langle j_{\ell}-1\right|, \tag{5.2}\] extended to a unitary by letting it act as the identity on the remaining basis states, with \(j_{1}=j_{L}=2\). It is easy to check that the resulting state is \(\left|\varphi(x)\right\rangle=\left(\left|0\right\rangle+e^{-2\pi i\Lambda_{ \overline{j}}x}\left|1\right\rangle\right)/\sqrt{2}\).
This is because the basis state \(\left|0\right\rangle\) does not accumulate any phase, while the \(B_{\ell}\)s swap the \(\left|1\right\rangle\) state with the appropriate basis state at every layer so that it accumulates a phase corresponding to a single summand in eq. (5.1). Choosing to measure the operator \(\mathcal{P}_{0}=X_{0}\), it follows that \(\left\langle\varphi(x)\right|X_{0}\left|\varphi(x)\right\rangle=\cos(2\pi \Lambda_{\overline{j}}x)\). It is possible to express \(O((N^{\prime})^{L-1})\) different frequencies in this way, assuming the \(\Lambda_{\overline{j}}\) are distinct, which will be the case for example with probability \(1\) if the \(\{\lambda_{\ell i}\}\) are drawn i.i.d. from some distribution with continuous support. This further motivates the small \(L\) regime where exponential advantage in communication is possible. These types of circuits with interleaved data-dependent unitaries and parameterized unitaries were considered for example in [92], and are also related to the setting of quantum signal processing and related algorithms [65, 68]. We also show that such circuits can express dense functions in Fourier space, and for small \(N\) we additionally find that these circuits are universal function approximators (Appendix F.1), though in this setting the possible communication advantage is less clear. The problem of applying nonlinearities to data encoded efficiently in quantum states is non-trivial and is of interest due to the importance of nonlinearities in enabling efficient function approximation [67]. One approach to resolving the constraints of unitarity with the potential irreversibility of nonlinear functions is the introduction of slack variables via additional ancilla qubits, as typified by the techniques of block-encoding [26, 35]. Indeed, these techniques can be used to apply nonlinearities to amplitude encoded data efficiently, as was recently shown in [88]. This approach can be applied to the distributed setting as well. Consider the communication problem where Alice is given \(x\) as input and Bob is given unitaries \(\{U_{1},U_{2}\}\) over \(\log N\) qubits. Denote by \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) a nonlinear function such as the sigmoid, exponential or standard trigonometric functions, and let \(N=2^{n}\). We show the following: **Lemma 5**.: _There exists a model \(\left|\varphi_{\sigma}\right\rangle\) of the form of Definition 3.1 with \(L=O(\log 1/\varepsilon)\), \(N^{\prime}=2^{n^{\prime}}\) where \(n^{\prime}=2n+4\), such that_ \[\left|\varphi_{\sigma}\right\rangle=\alpha\left|0\right\rangle^{\otimes n+4} \left|\hat{y}\right\rangle+\left|\phi\right\rangle \tag{5.3}\] _for some \(\alpha=O(1)\), where \(\left|\hat{y}\right\rangle\) is a state that obeys_ \[\left\|\,\left|\hat{y}\right\rangle-\left|U_{2}\tfrac{1}{\left\|\sigma(U_{1}x )\right\|_{2}}\sigma(U_{1}x)\right\rangle\,\right\|_{2}<\varepsilon. \tag{5.4}\] \(\left|\phi\right\rangle\) _is a state whose first \(n+4\) qubits are in a state orthogonal to \(\left|0\right\rangle^{\otimes n+4}\)._ Proof: Appendix B. This result implies that with constant probability, after measurement of the first \(n+4\) qubits of \(\left|\varphi_{\sigma}\right\rangle\), one obtains a state whose amplitudes encode the output of a single hidden layer neural network. It may also be possible to generalize this algorithm and apply it recursively to obtain a state representing a deep feed-forward network with unitary weight matrices.
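The phase-accumulation mechanism behind the Fourier construction above is easy to verify numerically. The following sketch (with ordering conventions and zero-based indexing chosen for convenience, and all names invented) tracks the two-dimensional support of \(\left|\varphi(x)\right\rangle\) and checks that measuring \(X_{0}\) yields \(\cos(2\pi\Lambda_{\overline{j}}x)\).

```python
import numpy as np

rng = np.random.default_rng(3)
Nprime, L = 8, 4
lam = rng.uniform(0, 1, size=(L, Nprime))
lam[:, 0] = 0.0                          # the |0> branch never picks up a phase
j = np.concatenate(([1], rng.integers(1, Nprime, size=L - 2), [1]))
lam[L - 1, 1] = 0.0                      # lambda_{L,2} = 0 in the paper's indexing

def phi(x):
    v = np.zeros(Nprime, dtype=complex)
    v[0] = v[j[0]] = 1 / np.sqrt(2)      # |psi(x)> = (|0> + |1>)/sqrt(2)
    for ell in range(L):
        v = np.exp(-2j * np.pi * lam[ell] * x) * v    # diagonal A_ell(x)
        if ell < L - 1:                  # B: swap the moving branch to its next slot
            v[[j[ell], j[ell + 1]]] = v[[j[ell + 1], j[ell]]]
    return v

Lam = sum(lam[ell, j[ell]] for ell in range(L - 1))   # eq. (5.1)
x = 0.37
v = phi(x)
expect_X0 = 2 * np.real(np.conj(v[0]) * v[1])         # <X_0> on the {|0>,|1>} support
print(expect_X0, np.cos(2 * np.pi * Lam * x))         # these two numbers agree
```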
It is also worth noting that the general form of the circuits we consider resembles self-attention based models with their nonlinearities removed (motivated for example by [94]), as we explain in Appendix F.2. Finally, in Appendix F.3 we discuss other strategies for increasing the expressivity of these quantum circuits by combining them with classical networks. ## 6 Privacy of Quantum Communication In addition to an advantage in communication complexity, the quantum algorithms outlined above have an inherent advantage in terms of privacy. It is well known that the number of bits of information that can be extracted from an unknown quantum state is proportional to the number of qubits. It follows immediately that since the above algorithm requires exchanging a logarithmic number of copies of states over \(O(\log N)\) qubits, even if all the communication between the two players is intercepted, an attacker cannot extract more than a logarithmic number of bits of classical information about the input data or model parameters. Specifically, we have: **Corollary 1**.: _If Alice and Bob are implementing the quantum algorithm for gradient estimation described in Lemma 2, and all the communication between Alice and Bob is intercepted by an attacker, the attacker cannot extract more than \(\tilde{O}(L^{2}(\log N)^{2}(\log P)^{2}\log(L/\delta)/\varepsilon^{4})\) bits of classical information about the inputs to the players._ This follows directly from Holevo's theorem [44], since the multiple copies exchanged in each round of the protocol can be thought of as a quantum state over \(\tilde{O}((\log N)^{2}(\log P)^{2}\log(L/\delta)/\varepsilon^{4})\) qubits. As noted in [3], this does not contradict the fact that the protocol allows one to estimate all \(P\) elements of the gradient, since if one were to place some distribution over the inputs, the induced distribution over the gradient elements will generally exhibit strong correlations. An analogous result holds for the inference problem described in Lemma 1. It is also interesting to ask how much information either Bob or Alice can extract about the inputs of the other player by running the protocol. If this amount is logarithmic as well, it provides additional privacy to both the model owner and the data owner. It allows two actors who do not necessarily trust each other, or the channel through which they communicate, to cooperate in jointly training a distributed model or using one for inference while only exposing a vanishing fraction of the information they hold. It is also worth mentioning that data privacy is also guaranteed in a scenario where the user holding the data also specifies the processing done on the data. In this setting, Alice holds both data \(x\) and a full description of the unitaries she wishes to apply to her state. She can send Bob a classical description of these unitaries, and as long as the data and features are communicated in the form of quantum states, only a logarithmic amount of information can be extracted about them. In this setting there is of course no advantage in communication complexity, since the classical description of the unitary will scale like \(\mathrm{poly}(N,P)\). ## 7 Discussion This work constitutes a preliminary investigation into a generic class of quantum circuits that has the potential for enabling an exponential communication advantage in problems of classical data processing, including training and inference with large parameterized models over large datasets, with inherent privacy advantages.
Communication constraints may become even more relevant if such models are trained on data that is obtained by inherently distributed interaction with the physical world [32]. The ability to compute using data with privacy guarantees can be potentially applied to proprietary data. This could become highly desirable even in the near future as the rate of publicly-available data production appears to be outstripped by the growth rate of training sets of large language models [96]. Our results naturally raise further questions regarding the expressive power and trainability of these types of circuits, which may be of independent interest. We collect some of these in Appendix H. ## 8 Acknowledgements The authors would like to thank Amira Abbas, Ryan Babbush, Dave Bacon, Robbie King and Daniel Soudry for helpful discussions and comments on the manuscript.
2301.06569
Self-complementary distance-regular Cayley graphs over abelian groups
In this paper, we study self-complementary distance-regular Cayley graphs over abelian groups. We prove that if a regular graph is self-complementary distance-regular, then it is self-complementary strongly regular. We also deal with self-complementary strongly regular Cayley graphs over abelian groups and give an example of a self-complementary strongly regular Cayley graph over a non-elementary abelian group.
Mojtaba Jazaeri
2023-01-16T19:04:55Z
http://arxiv.org/abs/2301.06569v1
# Self-complementary distance-regular Cayley graphs over abelian groups ###### Abstract. In this paper, we study self-complementary distance-regular Cayley graphs over abelian groups. We prove that if a regular graph is self-complementary distance-regular, then it is self-complementary strongly regular. We also deal with self-complementary strongly regular Cayley graphs over abelian groups and give an example of a self-complementary strongly regular Cayley graph over a non-elementary abelian group. Key words and phrases: Self-complementary graph; Distance-regular graph; Strongly regular graph; Cayley graph; Abelian group; Schur ring Self-complementary strongly regular Cayley graphs come from Paley type partial difference sets. Arasu, Jungnickel, Ma and Pott stated the following questions in [1]. **Question 1.1**.: _Let \(G\) be an abelian group of order \(4t+1\). If \(4t+1\) is not a prime power, does there exist a Paley type partial difference set in the group \(G\)? If \(4t+1\) is a prime power, does the group \(G\) need to be elementary abelian?_ Davis in Corollary 3.1 of [3] answered the second part of this question by proving that there exists a Paley type partial difference set in the abelian group \(\mathbb{Z}_{p^{2}}\times\mathbb{Z}_{p^{2}}\). Furthermore, Leung and Ma in [4] generalized this by the construction of Paley type partial difference sets in abelian \(p\)-groups with any given exponent. Moreover, Polhill in [8] proved that there exist Paley type partial difference sets in the group \(\mathbb{Z}_{3}^{2}\times\mathbb{Z}_{p}^{4t}\), where \(t\) is a natural number and \(p\) is an odd prime number, which answered the first part of this question. Finally, the order of abelian groups which admit Paley type partial difference sets has been determined in [10]. It is proven that if an abelian group admits a Paley type partial difference set and its order is not a prime power, then its order is \(n^{4}\) or \(9n^{4}\), where \(n>1\) is an odd integer; however, Paley type partial difference sets which come from self-complementary strongly regular Cayley graphs over abelian groups could be more restricted than this. Therefore we state a similar question as follows. **Question 1.2**.: _Let \(G\) be an abelian group of order \(4t+1\). If \(4t+1\) is not a prime power, does there exist a self-complementary strongly regular Cayley graph over the group \(G\)? If \(4t+1\) is a prime power, does the group \(G\) need to be elementary abelian?_ In this paper, we deal with self-complementary strongly regular Cayley graphs over abelian groups, and partially answer the second part of this question by giving an example of a self-complementary strongly regular Cayley graph over the abelian group \(\mathbb{Z}_{9}\times\mathbb{Z}_{9}\). ## 2. Preliminaries In this paper, all graphs are undirected and simple, i.e., there are no loops or multiple edges. Moreover, we consider the eigenvalues of the adjacency matrix of a graph. A connected graph \(\Gamma\) is called distance-regular with diameter \(d\) and intersection array \[\{b_{0},b_{1},\ldots,b_{d-1};c_{1},c_{2},\ldots,c_{d}\}\] whenever for each pair of vertices \(x\) and \(y\) at distance \(i\), where \(0\leq i\leq d\), the number of neighbors of \(x\) at distance \(i+1\) and \(i-1\) from \(y\) are constant numbers \(b_{i}\) and \(c_{i}\), respectively.
This implies that a distance-regular graph is regular with valency \(b_{0}=k\) and the number of neighbors of \(x\) at distance \(i\) from \(y\) is the constant number \(k-b_{i}-c_{i}\), which is denoted by \(a_{i}\). A distance-regular graph with diameter \(d=2\) is a strongly regular graph. It is well known that a strongly regular graph with \(n\) vertices and degree \(k\) has parameters \((n,k,\lambda,\mu)\) such that two adjacent vertices have \(\lambda\) common neighbors and two non-adjacent vertices have \(\mu\) common neighbors. There exist two other important parameters for a strongly regular graph as follows. \[\beta=\lambda-\mu,\Delta=(\lambda-\mu)^{2}+4(k-\mu).\] Recall that a strongly regular graph with parameters \((n,k,\lambda,\mu)\) has three distinct eigenvalues \(k\), \(\frac{1}{2}(\beta\pm\sqrt{\Delta})\). Let \(G\) be a finite group and \(S\) be an inverse-closed subset of \(G\) not containing the identity element; we call \(S\) the connection set. Then the Cayley graph \(Cay(G,S)\) is the graph whose vertex set is \(G\), where two vertices \(a\) and \(b\) are adjacent whenever \(ab^{-1}\in S\). We note that the Cayley graph \(Cay(G,S)\) is a regular graph of degree \(|S|\) and it is connected if and only if the subgroup generated by the connection set \(S\) is equal to \(G\). The complement of a graph \(\Gamma\) is denoted by \(\Gamma^{c}\) and the identity element of a group by \(e\). We also note that the complement of the Cayley graph \(\Gamma=Cay(G,S)\) is \(\Gamma^{c}=Cay(G,G\setminus(S\cup\{e\}))\). A self-complementary graph is a graph that is isomorphic to its complement. It follows that if \(\Gamma\) is a \(k\)-regular self-complementary graph with \(n\) vertices, then \(k=n-k-1\) and therefore \(n=2k+1\). Moreover, \(k\) must be an even number since the number of vertices is odd. This implies that the number of vertices of a \(k\)-regular self-complementary graph must be congruent to \(1\) modulo \(4\). Furthermore, it is well known that a self-complementary graph has diameter \(d=2\) or \(d=3\) because if a connected graph has diameter \(d\geq 3\), then its complement has diameter \(d\leq 3\). **Proposition 2.1**.: _The diameter of a self-complementary regular graph is two._ Proof.: Let \(\Gamma\) be a self-complementary regular graph. Then it has \(4t+1\) vertices and degree \(2t\) for some natural number \(t\). We calculate the diameter of \(\Gamma^{c}\). Let \(v\) and \(w\) be two adjacent vertices of the graph \(\Gamma\). If these two vertices have no common neighbor in the graph \(\Gamma\), then it is easy to see that there is a unique vertex \(u\) such that the vertices \(v\) and \(w\) are not adjacent to \(u\) in the graph \(\Gamma\), because this regular graph has degree \(2t\) with \(4t+1\) vertices. Therefore the distance between \(v\) and \(w\) is two in \(\Gamma^{c}\). If these two vertices have at least one common neighbor in the graph \(\Gamma\), then a similar argument works and there is more than one vertex such that the vertices \(v\) and \(w\) are not adjacent to these vertices in the graph \(\Gamma\). Therefore the diameter of the graph \(\Gamma^{c}\) is two.
This implies that the diameter of the graph \(\Gamma\) is two because it is isomorphic to its complement, and this completes the proof. **Corollary 2.2**.: _Let \(\Gamma\) be a self-complementary distance-regular graph. Then \(\Gamma\) is a self-complementary strongly regular graph._ Let \(\Gamma\) be a self-complementary strongly regular graph with parameters \((n=4t+1,k=2t,\lambda,\mu)\). Recall that there is a relation among the parameters of a strongly regular graph, that is, \((n-k-1)\mu=k(k-\lambda-1)\). It follows that \(2t\mu=2t(2t-\lambda-1)\) and therefore \(\mu+\lambda=2t-1\) in the graph \(\Gamma\). On the other hand, the complement of a strongly regular graph with parameters \((n,k,\lambda,\mu)\) is a strongly regular graph with parameters \((n,n-k-1,n-2-2k+\mu,n-2k+\lambda)\). It follows that \(\lambda-\mu=-1\) because the parameters of the self-complementary strongly regular graph \(\Gamma\) and its complement are the same. This implies that \(\lambda=t-1\) and \(\mu=t\). Such a strongly regular graph is called a conference graph, i.e., a strongly regular graph with parameters \((4t+1,2t,t-1,t)\), and such parameters are called Paley type parameters. It follows that a self-complementary strongly regular graph is a conference graph. Furthermore, its distinct eigenvalues are \(\{2t,\frac{-1\pm\sqrt{4t+1}}{2}\}\), \(\beta=-1\), and \(\Delta=n=4t+1\). There exist two infinite families of self-complementary strongly regular Cayley graphs: Paley graphs and Peisert graphs. Let \(\mathbb{F}\) be a finite field of order \(q=p^{r}\), where \(p\) is a prime number and \(q\) is congruent to \(1\) modulo \(4\). Then the Paley graph \(P_{q}\) is a Cayley graph \(Cay(G,S)\) over the elementary abelian group \(G\) with the connection set \(S=\{x^{2}\mid x\in\mathbb{F},x\neq 0\}\). If the multiplicative generator of the finite field \(\mathbb{F}\) is denoted by \(a\), where \(p\) is a prime number congruent to \(3\) modulo \(4\) and \(r\) is even, then the Peisert graph \(P_{q}^{*}\) is a Cayley graph \(Cay(G,S)\) over the elementary abelian group \(G\) with the connection set \(S=\{a^{i}\mid i\equiv 0,1\mod 4\}\). ### The lexicographic product and self-complementary Cayley graphs Let \(\Gamma_{1}\) and \(\Gamma_{2}\) be two graphs with vertex sets \(V(\Gamma_{1})\) and \(V(\Gamma_{2})\), respectively. Then the lexicographic product of the graph \(\Gamma_{1}\) with \(\Gamma_{2}\), which is denoted by \(\Gamma_{1}[\Gamma_{2}]\), is a graph with vertex set \(V(\Gamma_{1})\times V(\Gamma_{2})\) such that two vertices \((a,b)\) and \((c,d)\), where \(a,c\in V(\Gamma_{1})\) and \(b,d\in V(\Gamma_{2})\), are adjacent whenever \(a\) is adjacent to \(c\) in \(\Gamma_{1}\), or \(a=c\) and \(b\) is adjacent to \(d\) in \(\Gamma_{2}\). It is well known that the lexicographic product of a self-complementary graph \(\Gamma_{1}\) with another self-complementary graph \(\Gamma_{2}\) is a self-complementary graph. On the other hand, it is trivial to see that the lexicographic product of a Cayley graph \(Cay(G_{1},S_{1})\) with another Cayley graph \(Cay(G_{2},S_{2})\) is a Cayley graph \(Cay(G_{1}\times G_{2},S)\), where \(S=\{(a,g)\mid a\in S_{1},g\in G_{2}\}\cup\{(e,s)\mid s\in S_{2}\}\). This implies that the lexicographic product of a self-complementary Cayley graph with another self-complementary Cayley graph is a self-complementary Cayley graph. ### Strongly regular Cayley graphs and Schur rings Let \(G\) be a finite group of order \(n\) and \(D\) its subset of order \(k\).
Then the subset \(D\) is called a \((n,k,\lambda,\mu)\)-partial difference set in the group \(G\) whenever every nonidentity element \(g\in G\) can be expressed \(\lambda\) and \(\mu\) times as \(d_{1}d_{2}^{-1}\) with the elements \(d_{1},d_{2}\in D\), depending on whether \(g\in D\) or not. For more background, we refer to the excellent survey of partial difference sets by Ma [5]. Let \(G\) be a group and \(R\) a commutative ring with identity. Then the group algebra \(RG\) consists of the elements of the form \(\sum_{g\in G}a_{g}g\), where \(a_{g}\in R\), with the following operations. \[\sum_{g\in G}a_{g}g+\sum_{g\in G}b_{g}g=\sum_{g\in G}(a_{g}+b_{g})g,\] \[(\sum_{g\in G}a_{g}g)(\sum_{h\in G}b_{h}h)=\sum_{g,h\in G}(a_{g}b_{h})gh,\] and the scalar multiplication, \[c\sum_{g\in G}a_{g}g=\sum_{g\in G}(ca_{g})g.\] Let \(T\) be a subset of the group \(G\). Then the element \(\sum_{t\in T}t\), in this algebra, is denoted by \(\overline{T}\). Let \(Cay(G,S)\) be a strongly regular Cayley graph with the parameters \((n,k,\lambda,\mu)\). Then the connection set \(S\) is a partial difference set in the group \(G\) and the following equation holds (cf. [5, Proposition 1.1 and Theorem 1.3]). \[\overline{S}^{2}=\mu\overline{G}+(\lambda-\mu)\overline{S}+(k-\mu)e. \tag{2.1}\] Let \(\{T_{0},T_{1},\ldots,T_{d}\}\), where \(T_{0}=\{e\}\), be a partition of the group \(G\) into disjoint inverse-closed subsets. Then the subalgebra \(\mathcal{S}\) generated by \(\alpha=\{\overline{T_{0}},\overline{T_{1}},\ldots,\overline{T_{d}}\}\) is called a symmetric Schur ring over the ring \(R\) whenever the \(R\)-module \(\mathcal{S}\) is generated by \(\alpha\), and in this case, \(\alpha\) is called the simple basis of the Schur ring \(\mathcal{S}\). We don't aim to deal with Schur rings in the general case in this paper. Let \(Cay(G,S)\) be a strongly regular Cayley graph with the parameters \((4t+1,2t,t-1,t)\) over the abelian group \(G\) of order \(4t+1\) for some natural number \(t\). Then its complement is also a strongly regular Cayley graph with parameters \((4t+1,2t,t-1,t)\). It follows that the subalgebra \(\mathcal{S}\) in the group algebra \(\mathbb{Z}G\), where \(\mathbb{Z}\) is the ring of integer numbers, generated by \(\alpha=\{e,\overline{S},\overline{G\setminus(S\cup\{e\})}\}\) is indeed a Schur ring. To see this it is sufficient to consider the following equations (see Equation 2.1). \[\overline{S}^{2} =t\overline{G}-\overline{S}+te,\] \[\overline{G\setminus(S\cup\{e\})}^{2} =t\overline{G}-\overline{G\setminus(S\cup\{e\})}+te.\] Moreover, \[\overline{G\setminus\{e\}}^{2}=(\overline{S}+\overline{G\setminus(S\cup\{e\} )})^{2}=\overline{S}^{2}+\overline{G\setminus(S\cup\{e\})}^{2}+2(\overline{S} )(\overline{G\setminus(S\cup\{e\})}).\] On the other hand, \[\overline{G\setminus\{e\}}^{2}=(|G|-1)e+(|G|-2)(\overline{G\setminus\{e\}})= 4t(e)+(4t-1)(\overline{G\setminus\{e\}}).\] This implies that \[(\overline{S})(\overline{G\setminus(S\cup\{e\})})=t(\overline{G\setminus\{e \}})=t(\overline{S}+\overline{G\setminus(S\cup\{e\})}). \tag{2.2}\] This implies that if \(Cay(G,S)\) is a strongly regular Cayley graph with the parameters \((4t+1,2t,t-1,t)\), then every non-identity element \(g\) in the group \(G\) can be expressed \(t\) times as the product of two elements \(x\) and \(y\) in the connection set \(S\) and \(G\setminus(S\cup\{e\})\), respectively.
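As a quick sanity check of Equation (2.1), the following sketch verifies the identity \(\overline{S}^{2}=t\overline{G}-\overline{S}+te\) for the Paley graph \(P_{5}\) (so \(t=1\) and \(S=\{1,4\}\) in \(\mathbb{Z}_{5}\)), representing elements of the group algebra \(\mathbb{Z}[\mathbb{Z}_{5}]\) as integer vectors multiplied by cyclic convolution; the code and its names are only illustrative.

```python
import numpy as np

# Group algebra Z[Z_5]: elements are length-5 integer vectors,
# multiplied by cyclic convolution (exponents add mod 5).
n, t = 5, 1
Sbar = np.zeros(n, dtype=int)
Sbar[[1, 4]] = 1                 # S = {1, 4}, the nonzero squares mod 5
Gbar = np.ones(n, dtype=int)
e = np.zeros(n, dtype=int)
e[0] = 1

def mult(a, b):
    c = np.zeros(n, dtype=int)
    for i in range(n):
        for k in range(n):
            c[(i + k) % n] += a[i] * b[k]
    return c

# Equation (2.1) with (k, lambda, mu) = (2t, t-1, t): S^2 = t*G - S + t*e.
print(np.array_equal(mult(Sbar, Sbar), t * Gbar - Sbar + t * e))   # True
```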
## 3. Self-complementary strongly regular Cayley graphs The Cayley graphs over cyclic groups are the same as circulant graphs, and distance-regular circulant graphs have been classified in [6]. Therefore the Paley graphs on a prime number of vertices are the only self-complementary distance-regular circulant graphs. Let \(G\) be the abelian group \(G=\mathbb{Z}_{p^{2}}\times\mathbb{Z}_{p^{2}}\), where \(p\) is a prime number. Davis [3, Corollary 3.1] constructed a Paley type partial difference set in the abelian group \(G\) as follows. Let \(G=\langle(i,j)|i,j=0,1,\ldots,p^{2}-1\rangle\) and \(C\) be the set of the elements of order \(p^{2}\) in the following subgroups of order \(p^{2}\). \[\{\langle(1,1)\rangle,\langle(1,2)\rangle,\ldots,\langle(1,\frac{p(p-1)}{2}) \rangle,\langle(p,1)\rangle,\langle(2p,1)\rangle,\ldots,\langle(\frac{(p-1)p}{ 2},1)\rangle\},\] and \(D\) be the set of all non-identity elements in the following subgroups of order \(p^{2}\). \[\{\langle(1,0)\rangle,\langle(0,1)\rangle,\langle(1,\frac{p^{2}-p}{2}+1) \rangle,\ldots,\langle(1,\frac{p^{2}+1}{2}-2)\rangle\}.\] Then \(S=C\cup D\) is a Paley type partial difference set in the group \(G\) which is inverse-closed. We checked with GAP [11] that the Cayley graph \(Cay(G,S)\) is a self-complementary strongly regular graph for \(p=3\). This shows that self-complementary strongly regular Cayley graphs over abelian groups are not restricted to elementary abelian groups, which answers the second part of Question 1.2. But this construction is not self-complementary in general, because it is not self-complementary for \(p=5\). Furthermore, the construction of partial difference sets with Paley parameters by Leung and Ma [4] does not give rise to self-complementary strongly regular Cayley graphs for the group \(G=\mathbb{Z}_{27}\times\mathbb{Z}_{27}\). We note that a self-complementary symmetric graph is isomorphic to a self-complementary strongly regular Cayley graph over an elementary abelian \(p\)-group, where \(p\) is an odd prime number, and this family of self-complementary graphs has been classified in [7]; these are the Paley graphs, Peisert graphs, and the exceptional graph on \(23^{2}\) vertices. As far as we know, there is no self-complementary strongly regular Cayley graph over elementary abelian \(p\)-groups other than the graphs mentioned, and therefore we conclude this paper with the following question. **Question 3.1**.: _Is it true that the Paley graphs, Peisert graphs, and the exceptional graph on \(23^{2}\) vertices are the only self-complementary strongly regular Cayley graphs over elementary abelian groups?_ ## Acknowledgements The author is grateful to the Research Council of Shahid Chamran University of Ahvaz for financial support (SCU.MM1401.29248).
2307.02481
Non-equilibrium steady state of the symmetric exclusion process with reservoirs
Consider the open symmetric exclusion process on a connected graph with vertexes in $[N-1]:=\{1,\ldots, N-1\}$ where points $1$ and $N-1$ are connected, respectively, to a left reservoir and a right reservoir with densities $\rho_L,\rho_R\in(0,1)$. We prove that the non-equilibrium steady state of such system is $$\mu_{\text{stat}} = \sum_{I\subset \mathcal P([N-1]) }F(I)\bigg(\otimes_{x\in I}\rm{Bernoulli}(\rho_R)\otimes_{y\in [N-1]\setminus I}\rm{Bernoulli}(\rho_L) \bigg).$$ In the formula above $ \mathcal P([N-1])$ denotes the power set of $[N-1]$ while the numbers $F(I)> 0$ are such that $\sum_{I\subset \mathcal P([N-1]) }F(I)=1$ and given in terms of absorption probabilities of the absorbing stochastic dual process. Via probabilistic arguments we compute explicitly the factors $F(I)$ when the graph is a homogeneous segment.
Simone Floreani, Adrián González Casanova
2023-07-05T17:55:39Z
http://arxiv.org/abs/2307.02481v1
# Non-equilibrium steady state of the symmetric exclusion process with reservoirs ###### Abstract. Consider the open symmetric exclusion process on a connected graph with vertexes in \([N-1]:=\{1,\ldots,N-1\}\) where points \(1\) and \(N-1\) are connected, respectively, to a left reservoir and a right reservoir with densities \(\rho_{L},\rho_{R}\in(0,1)\). We prove that the non-equilibrium steady state of such system is \[\mu_{\rm stat}=\sum_{I\subset\mathcal{P}([N-1])}F(I)\bigg{(}\otimes_{x\in I} \operatorname{Bernoulli}(\rho_{\rm R})\otimes_{y\in[N-1]\setminus I} \operatorname{Bernoulli}(\rho_{\rm L})\bigg{)}.\] In the formula above \(\mathcal{P}([N-1])\) denotes the power set of \([N-1]\) while the numbers \(F(I)>0\) are such that \(\sum_{I\subset\mathcal{P}([N-1])}F(I)=1\) and given in terms of absorption probabilities of the absorbing stochastic dual process. Via probabilistic arguments we compute explicitly the factors \(F(I)\) when the graph is a homogeneous segment. Key words and phrases: Non-equilibrium steady state; Correlations; Boundary driven systems; Duality; Exclusion process 2020 Mathematics Subject Classification: 60K35; 82C22; 60K37; 82C23 ## 1. Introduction ### Boundary driven systems In the context of non-equilibrium statistical physics, a lot of attention has been devoted to the study of stationary properties of _open_ particle systems evolving on a finite graph (see, e.g., [4] and [26] for an overview on the subject). The word open refers to the fact that the dynamics does not conserve the total number of particles due to an interaction with the external world, typically modelled via particle reservoirs (see, e.g., [10] and [6]) or, in heat transfer, via thermal baths (see, e.g., [22] and [18]): such systems are thus referred to as _boundary driven_. Reservoirs are mechanisms that inject and remove particles from the system, imposing a fixed density of particles at a given site of the graph. When multiple reservoirs, each imposing different density values, are connected to the graph, the system is considered to be out of equilibrium. This condition is characterized by the presence of a non-zero particle current at stationarity, and the stationary measure of the system is commonly referred to as the _non-equilibrium steady state_. While for many closed (as opposed to open) particle systems the stationary, actually reversible, measures are explicit and in product form, the action of the reservoirs destroys reversibility, and long range correlations can emerge in the non-equilibrium steady state as shown in the seminal paper [31]. Finding explicit stationary measures for open systems is a key problem in statistical physics, and it continues to generate substantial interest (see, e.g., the recent works [14], [16] and [15]). For some systems, the celebrated _matrix product ansatz method_ has been developed in [10] to obtain in explicit form such long range correlations. This method works, for instance, for the open simple symmetric exclusion process (SSEP) on a one dimensional segment with sites \(\{1,\ldots,N-1\}\) where points \(\{1,N-1\}\) are connected, respectively, to a left and a right reservoir. This is a system of simple symmetric random walks subject to the exclusion rule: only one particle per site is allowed and thus attempted jumps to occupied sites are suppressed. Moreover at sites \(\{1,N-1\}\) particles are destroyed and created at specific given rates.
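For intuition, here is a minimal discrete-time caricature of this open SSEP (random sequential updates with all rates equal to one; a sketch, not the matrix product ansatz): resampling the boundary occupations at the reservoir densities and stirring the bulk bonds reproduces, in the long run, the well known stationary density profile interpolating linearly between \(\rho_{L}\) and \(\rho_{R}\). All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, rhoL, rhoR = 10, 0.8, 0.2          # sites 1,...,N-1 stored in eta[0..N-2]
eta = rng.random(N - 1) < 0.5
steps, burn = 500_000, 100_000
acc = np.zeros(N - 1)

for s in range(steps):
    k = rng.integers(N)               # N events: N-2 bulk bonds + 2 reservoirs
    if k == 0:
        eta[0] = rng.random() < rhoL  # left reservoir resamples site 1
    elif k == N - 1:
        eta[-1] = rng.random() < rhoR # right reservoir resamples site N-1
    else:                             # bulk bond {k, k+1}: exchange occupations
        eta[k - 1], eta[k] = eta[k], eta[k - 1]
    if s >= burn:
        acc += eta

print(acc / (steps - burn))           # approximately linear in the site index
```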
After [10], other matrix and algebraic methods have been proposed to compute explicitly the correlations of the open SSEP (see [29], [24] and [15]) and the research around this system is extremely active (see, e.g., [5, 11, 17, 20, 23, 28]). However, a probabilistic representation of the non-equilibrium steady state of the open SSEP complementing the explicit correlations computed via the matrix ansatz is still not available in the literature, and the matrix computations performed in e.g. [10] lack a probabilistic interpretation. Moreover, matrix ansatz methods strongly rely on the fact that the underlying graph is a homogeneous segment and particles perform nearest neighbor jumps, while there are several natural reasons why one would like to overcome such a limitation. First, many physical systems are not one dimensional, and modelling the open SEP on a graph approximating a \(d\)-dimensional domain started to receive attention recently (see [8]). Second, realistic models of particles should account for the presence of spatial inhomogeneities (see, e.g., [27]) in the underlying media, and these are often modelled with edge dependent weights. Third, extensive research has been conducted on the effects of symmetric long-range jumps in open exclusion processes, revealing interesting phenomena (see, e.g., [2], [1], [21] and [3]). The boundary driven symmetric exclusion process (SEP) on a general graph satisfies a property that turns out to be extremely useful: it is in stochastic duality relation with a dual particle system where particles evolve as in the primal model but the reservoirs are replaced by absorbing sites (see, e.g., [6] and the recent work [30]). Thus the dual system is conservative and, if the graph is connected, all the particles will be eventually absorbed. More precisely, this relation allows one to compute the expected evolution of products of occupation variables of \(n\) sites via the dual absorbing process with \(n\) particles only. In this paper we will use this relation to show that on a general graph with symmetric weights, the non-equilibrium steady state of the open SEP is a mixture of products of Bernoulli measures. Moreover we develop a probabilistic approach to derive the explicit formulas previously achieved by the matrix product ansatz and other algebraic methods. ### Boundary driven SEP and its stationary distribution Let \(G\) be a finite connected graph with vertex set \([N-1]:=\{1,\ldots,N-1\}\) and edge set \(E\). To each edge \(\{x,y\}\in E\) we associate a symmetric weight \(\omega_{x,y}\in(0,\infty)\) called conductance. We thus identify \(G\) with the triple \(([N-1],E,(\omega_{x,y})_{\{x,y\}\in E})\).
The boundary driven SEP on \(G\) with reservoir parameters \(\rho_{L},\rho_{R}\in(0,1)\) and \(\omega_{L},\omega_{R}>0\) is the Markov process with state space \(\mathcal{X}=\{0,1\}^{[N-1]}\) and infinitesimal generator \[\mathcal{L}=\mathcal{L}^{\text{bulk}}+\omega_{L}\,\mathcal{L}_{L}+\omega_{R}\,\mathcal{L}_{R} \tag{1.1}\] where, for all bounded functions \(f:\mathcal{X}\to\mathbb{R}\), \[\mathcal{L}^{\text{bulk}}f(\eta)= \sum_{\{x,y\}\in E}\omega_{x,y}\left\{\begin{array}{l}\eta(x )\left(1-\eta(y)\right)\left(f(\eta^{x,y})-f(\eta)\right)\\ +\eta(y)\left(1-\eta(x)\right)\left(f(\eta^{y,x})-f(\eta)\right)\end{array} \right\}\,\] \[\mathcal{L}_{L}f(\eta) =\eta(1)\left(1-\rho_{L}\right)\left(f(\eta^{1,-})-f(\eta)\right)\] \[+\rho_{L}\left(1-\eta(1)\right)\left(f(\eta^{1,+})-f(\eta)\right)\] and \[\mathcal{L}_{R}f(\eta) =\eta(N-1)\left(1-\rho_{R}\right)\left(f(\eta^{N-1,-})-f(\eta)\right)\] \[+\rho_{R}\left(1-\eta(N-1)\right)\left(f(\eta^{N-1,+})-f(\eta) \right)\.\] Above \(\eta^{x,y}\) is the configuration in which a particle (if any) has been removed from \(x\in[N-1]\) and moved to \(y\in[N-1]\), while \(\eta^{x,-}\in\mathcal{X}\) is the configuration obtained from \(\eta\) by destroying a particle (if present) from site \(x\in\{1,N-1\}\) and \(\eta^{x,+}\in\mathcal{X}\) is the configuration obtained from \(\eta\) by creating a particle (if not already present) at site \(x\in\{1,N-1\}\). In the above dynamics, the action of the reservoirs corresponds to the \(\omega_{L}\,\mathcal{L}_{L}+\omega_{R}\,\mathcal{L}_{R}\) part of the generator. \(\rho_{L}\) is the particle density imposed by the left reservoir and \(\rho_{R}\) the one imposed by the right reservoir. \(\omega_{L}\) is the conductance connecting the site \(1\) to the fictitious point \(0\) representing the left reservoir and \(\omega_{R}\) is the conductance connecting the site \(N-1\) to the fictitious point \(N\) representing the right reservoir. It is well known that \((\eta_{t})_{t\geq 0}\) is in stochastic duality relation with the Markovian interacting particle system \((\xi_{t})_{t\geq 0}\) with state space \(\mathcal{X}\times\mathbb{N}_{0}^{\{0,N\}}\) which evolves as the SEP on \(G\) but with the reservoirs replaced by the absorbing sites \(\{0,N\}\). More precisely, in the dual system a particle at site \(1\) can be absorbed at site \(0\) at rate \(\omega_{L}\) and a particle at site \(N-1\) can be absorbed at site \(N\) at rate \(\omega_{R}\). We denote by \(\hat{\mathbb{P}}_{\xi}\) the law of \((\xi_{t})_{t\geq 0}\) starting from the configuration \(\xi\), which can be identified with a subset \(I\) of vertexes in \([N-1]\cup\{0,N\}\) via the relation \(\xi(x)=\mathbf{1}_{\{x\in I\}}\). Thus we write either \(\hat{\mathbb{P}}_{\xi}\) or \(\hat{\mathbb{P}}_{I}\). We refer the reader to Section 2.1 below for the precise definition of \((\xi_{t})_{t\geq 0}\) and for the duality relation satisfied by the two processes. Figure 1. Schematic description of the dynamics. Our first main contribution is the following theorem. We denote by \(\mathcal{P}([N-1])\) the power set of \([N-1]\) and given \(I\in\mathcal{P}([N-1])\), \(|I|\) denotes its cardinality.
**Theorem 1.1**.: _The stationary distribution of the boundary driven SEP on \(G\) with reservoir parameters \(\rho_{L},\rho_{R},\omega_{L},\omega_{R}>0\) is_ \[\mu_{\rm stat}=\sum_{I\subset\mathcal{P}([N-1])}F(I)\bigg{(}\otimes_{x\in I} \operatorname{Bernoulli}(\rho_{\rm R})\otimes_{y\in[N-1]\setminus I} \operatorname{Bernoulli}(\rho_{\rm L})\bigg{)}, \tag{1.2}\] _where_ \[F(I):=\sum_{J\subset\mathcal{P}([N-1]):\ I\subset J}(-1)^{|J\setminus I|} \hat{\mathbb{P}}_{J}(\xi_{\infty}(N)=|J|)>0 \tag{1.3}\] _satisfies_ \[\sum_{I\subset\mathcal{P}([N-1])}F(I)=1.\] _Remark 1.2_ (Probabilistic interpretation of \(F(I)\)).: As it will be clear from Theorem 2.2 below, each factor \(F(I)\) is the probability that all the particles initially at \(I\) are absorbed at \(N\), while the remaining ones at \(0\), in the dual system of the boundary driven SEP, built via the labelled stirring construction and started from particles in each site of the bulk \([N-1]\). _Remark 1.3_.: From (1.2) one can deduce that for all \(\eta\in\mathcal{X}\), \[\mu_{\rm stat}(\eta)=\sum_{\ell=0}^{|\eta|}\sum_{k=0}^{N-1-|\eta|}\rho_{R}^{\ell }\rho_{L}^{|\eta|-\ell}(1-\rho_{R})^{k}(1-\rho_{L})^{N-1-|\eta|-k}\left(\sum_{I \in C(\ell,k)}F(I)\right)\] where \(C(\ell,k)=\{I\in\mathcal{P}([N-1]):|I|=k+\ell\ {\rm and}\ |I\cap\eta|=\ell\}\). ### Explicit formulas for boundary driven SEP Providing explicitly the factors \(F(I)\) amounts to computing the absorption probabilities in the dual system: via a coupling technique we compute such probabilities when the graph is a homogeneous one dimensional segment. **Theorem 1.4**.: _If \(\omega_{x,y}=\mathbf{1}_{\{|x-y|=1\}}\) for each \(x,y\in[N-1]\) and \(\omega_{L}=\omega_{R}=1\), then, given \(1\leq x_{1}<\ldots<x_{n}\leq N-1\),_ \[\hat{\mathbb{P}}_{\{x_{1},\ldots,x_{n}\}}(\xi_{\infty}(N)=n)=\prod_{i=1}^{n} \frac{x_{i}-(i-1)}{N-(i-1)}. \tag{1.4}\] In several works (see, e.g., [17, 20]) the following choice of parameters is taken \[\rho_{L}=\frac{\alpha}{\alpha+\gamma} \omega_{L}= \frac{1}{\alpha+\gamma}\] \[\rho_{R}=\frac{\delta}{\delta+\beta} \omega_{R}= \frac{1}{\delta+\beta}\] obtaining the \(\alpha,\beta,\gamma,\delta\) model where particles are created at rate \(\alpha\) on the left and at rate \(\delta\) on the right, while particles are destroyed at rate \(\gamma\) on the left and \(\beta\) on the right. In the setting of Theorem 1.4 but with general values of \(\omega_{L}\) and \(\omega_{R}\) as in the \(\alpha,\beta,\gamma,\delta\) model we have that (see Lemma 3.3 below), given \(x_{1}<\ldots<x_{n}\), \[\hat{\mathbb{P}}_{\{x_{1},\ldots,x_{n}\}}(\xi_{\infty}(N)=n)=\prod_{i=1}^{n} \frac{\frac{1}{\omega_{L}}+x_{i}-i}{\frac{1}{\omega_{L}}+\frac{1}{\omega_{R}}+ N-1-i}. \tag{1.5}\] However, we emphasize that \(\rho_{L}\) and \(\rho_{R}\) do not enter the terms \(F(I)\) defined in (1.3). Having obtained the probabilities that all the particles in the initial configuration are absorbed at \(N\), it is then possible to recover all the absorption probabilities via the following relation. **Proposition 1.5**.: _Given \(1\leq x_{1}<\ldots<x_{n}\leq N-1\) and \(\ell\leq n\)_ \[\hat{\mathbb{P}}_{\{x_{1},\ldots,x_{n}\}}(\xi_{\infty}(N)=\ell)=\sum_{k=\ell} ^{n}(-1)^{k-\ell}\binom{k}{\ell}\sum_{1\leq i_{1}<\ldots<i_{k}\leq n}\hat{ \mathbb{P}}_{\{x_{i_{1}},\ldots,x_{i_{k}}\}}(\xi_{\infty}(N)=k).\] As a direct consequence of Theorem 1.4 and Proposition 1.5 we obtain the non-equilibrium \(n\)-point centered and non-centered correlations of the boundary driven SEP on the homogeneous segment.
**Proposition 1.6** (\(n\)-point non-equilibrium correlations).: _If \(\omega_{x,y}=\mathbf{1}_{\{|x-y|=1\}}\) for each \(x,y\in[N-1]\) and \(\omega_{L}=\omega_{R}=1\), then given \(n\in\mathbb{N}\) and \(1\leq x_{1}<\ldots<x_{n}\leq N-1\),_ \[\mathbb{E}_{\mu_{\mathrm{stat}}}\left[\prod_{i=1}^{n}\left(\eta(x_{i})- \mathbb{E}[\eta(x_{i})]\right)\right]=(\rho_{R}-\rho_{L})^{n}\psi(x_{1},\ldots,x_{n}) \tag{1.6}\] _where \(\psi(x_{1},\ldots,x_{n})\) is given by_ \[\sum_{j=0}^{n}(-1)^{j}\sum_{1\leq i_{1}<\ldots<i_{j}\leq n}\;\prod_{\ell=1}^{j }\;\frac{x_{i_{\ell}}-(\ell-1)}{N-(\ell-1)}\prod_{r\in[n]\setminus\{i_{1}, \ldots,i_{j}\}}\frac{x_{r}}{N} \tag{1.7}\] _and_ \[\mathbb{E}_{\mu_{\mathrm{stat}}}\left[\prod_{i=1}^{n}\eta(x_{i})\right]=\sum_{ j=0}^{n}\rho_{L}^{n-j}(\rho_{R}-\rho_{L})^{j}\;\sum_{1\leq i_{1}<\ldots<i_{j} \leq n}\;\prod_{\ell=1}^{j}\;\frac{x_{i_{\ell}}-(\ell-1)}{N-(\ell-1)}. \tag{1.8}\] _Remark 1.7_.: For generic \(\omega_{L},\omega_{R}>0\), (1.7) above becomes \[\sum_{j=0}^{n}(-1)^{j}\sum_{1\leq i_{1}<\ldots<i_{j}\leq n}\;\prod_{\ell=1}^{j }\;\frac{\frac{1}{\omega_{L}}+x_{i_{\ell}}-\ell}{\frac{1}{\omega_{L}}+\frac{1 }{\omega_{R}}+N-1-\ell}\prod_{r\in[n]\setminus\{i_{1},\ldots,i_{j}\}}\frac{ \frac{1}{\omega_{L}}+x_{r}-1}{\frac{1}{\omega_{L}}+\frac{1}{\omega_{R}}+N-2}\] and the right hand side of (1.8) \[\sum_{j=0}^{n}\rho_{L}^{n-j}(\rho_{R}-\rho_{L})^{j}\;\sum_{1\leq i_{1}<\ldots< i_{j}\leq n}\;\prod_{\ell=1}^{j}\;\frac{\frac{1}{\omega_{L}}+x_{i_{\ell}}-\ell}{ \frac{1}{\omega_{L}}+\frac{1}{\omega_{R}}+N-1-\ell}.\] ### Organization of the paper The rest of the paper is organized as follows. In Section 2 we introduce properly the dual process of the boundary driven SEP. We then first express the stationary distribution as a product of Bernoulli measures with random parameters and from that we prove Theorem 1.1. We also prove Proposition 1.5 and Proposition 1.6 relying on Theorem 1.4, which is proved in the next sections. In Section 3 we provide a simple probabilistic proof of the \(2\)-point correlations of the \(\alpha,\beta,\gamma,\delta\) model by computing the absorption probabilities in a system with \(2\) interacting particles. We also show how to compute the absorption probabilities of an arbitrary number of particles by induction after having made a guess of the expression. We conclude with Section 4 where we provide a probabilistic coupling between the dual boundary driven SSEP on segments with different sizes to compute the absorption probabilities without the need to make an ansatz. ## 2. The non-equilibrium steady state of the boundary driven SSEP The main goal of this section is to prove Theorem 1.1. As explained in the introduction, our main technical tool is the stochastic duality relation satisfied by the boundary driven SEP that we are going to recall precisely below. ### The dual process and the duality relation Consider the graph \[G=([N-1],E,(\omega_{x,y})_{\{x,y\}\in E})\] introduced in Section 1.2, denote by \([N]_{0}=[N-1]\cup\{0,N\}\), \[\hat{E}=E\cup\{\{0,1\},\{N-1,N\}\}\] and put \(\omega_{0,1}=\omega_{L}\) and \(\omega_{N-1,N}=\omega_{R}\). The dual process \((\xi_{t})_{t\geq 0}\) is the Markovian interacting particle system evolving on the extended graph \[\hat{G}=([N]_{0},\hat{E},(\omega_{x,y})_{\{x,y\}\in\hat{E}}),\] which behaves in the same way as the boundary driven SEP in the bulk \([N-1]\) but the reservoirs are now substituted by the absorbing sites \(\{0,N\}\).
Thus \(\xi=(\xi_{t})_{t\geq 0}\) has state space \(\hat{\mathcal{X}}:=\mathbb{N}^{\{0\}}\times\{0,1\}^{[N-1]}\times\mathbb{N}^{\{N\}}\) and its generator is given by \[\hat{\mathcal{L}}=\hat{\mathcal{L}}^{\text{bulk}}+\omega_{L}\,\hat{\mathcal{L }}_{L}+\omega_{R}\,\hat{\mathcal{L}}_{R} \tag{2.1}\] where, for all bounded functions \(f:\hat{\mathcal{X}}\to\mathbb{R}\), \[\hat{\mathcal{L}}^{\text{bulk}}f(\xi)=\sum_{\{x,y\}\in E}\omega_{x,y}\left\{ \begin{array}{l}\xi(x)\left(1-\xi(y)\right)\left(f(\xi^{x,y})-f(\xi)\right) \\ +\,\xi(y)\left(1-\xi(x)\right)\left(f(\xi^{y,x})-f(\xi)\right)\end{array} \right\}\,,\] \[\hat{\mathcal{L}}_{L}f(\xi)=\xi(1)\left(f(\xi^{1,0})-f(\xi)\right)\] and \[\hat{\mathcal{L}}_{R}f(\xi)=\xi(N-1)\,\left(f(\xi^{N-1,N})-f(\xi)\right).\] Recall that a configuration \(\xi\in\hat{\mathcal{X}}\) will often be identified with the set \(I\subset[N]_{0}\) of points \(x\) such that \(\xi(x)>0\), i.e. \(\xi(x)=\mathbf{1}_{\{x\in I\}}\), and that we denote by \(\hat{\mathbb{P}}_{\xi}\) (or by \(\hat{\mathbb{P}}_{I}\)) and by \(\hat{\mathbb{E}}_{\xi}\) (or by \(\hat{\mathbb{E}}_{I}\)) the law and the corresponding expectation of the process \((\xi_{t})_{t\geq 0}\) starting from the configuration \(\xi\). For \(\eta\in\mathscr{X}\) and \(\xi\in\hat{\mathscr{X}}\) we define \[D(\eta,\xi)=\rho_{L}^{\xi(0)}\left(\prod_{x\in[N-1]:\ \xi(x)=1}\eta(x)\right) \rho_{R}^{\xi(N)}.\] We recall below the duality relation between the boundary driven SEP and the process \((\xi_{t})_{t\geq 0}\). **Proposition 2.1** (See [6] and [13]).: _The boundary driven SEP \((\eta_{t})_{t\geq 0}\) with generator given in (1.1) and the process \((\xi_{t})_{t\geq 0}\) with generator given in (2.1) satisfy the following relation: for all \(\eta\in\mathscr{X}\), \(\xi\in\hat{\mathscr{X}}\) and \(t\geq 0\)_ \[\mathbb{E}_{\eta}[D(\eta_{t},\xi)]=\hat{\mathbb{E}}_{\xi}[D(\eta,\xi_{t})]. \tag{2.2}\] ### The labelled stirring dual process In order to prove Theorem 1.1 we consider the labelled stirring construction (see, e.g., [9] or [25]) of the dual process on \(\hat{G}\). More precisely, define on the same probability space independent random variables \((\Gamma_{x,y})_{\{x,y\}\in E}\), \(\Gamma_{0}\), \(\Gamma_{N}\), where \(\Gamma_{x,y}\) is a Poisson point process with rate \(\omega_{x,y}\) providing the swapping times across the edge \(\{x,y\}\), and \(\Gamma_{0}\) and \(\Gamma_{N}\) are Poisson processes with rates \(\omega_{L}\) and \(\omega_{R}\) respectively, giving the absorbing times from sites \(1\) and \(N-1\) respectively. The labelled dual particle system \((X_{t}^{x_{1}},\ldots,X_{t}^{x_{n}})_{t\geq 0}\) starting from \(x_{1}<\ldots<x_{n}\) is obtained by following deterministically the arrows obtained as a realization of the Poisson processes just introduced. Namely \((X_{t}^{x_{1}},\ldots,X_{t}^{x_{n}})_{t\geq 0}\) is constant until a Poisson mark is encountered. If \(t\in\Gamma_{x,y}\) and if at \(t^{-}\) there is a particle at \(x\) or at \(y\) or at both sites, then these particles swap their positions (i.e. if a particle is at \(x\) at \(t^{-}\) then it moves to \(y\) at time \(t\) and vice versa), keeping track of their label given by the initial position. If \(t\in\Gamma_{0}\) (or \(t\in\Gamma_{N}\)) and at \(t^{-}\) a particle is present at \(1\) (or at \(N-1\)) then that particle is absorbed at \(0\) (or at \(N\)).
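The absorption probabilities entering the factors \(F(I)\) are straightforward to estimate from this construction. The following sketch treats the homogeneous segment with \(\omega_{L}=\omega_{R}=\omega_{x,y}=1\), for which a uniform discrete-time event chain is equivalent; it estimates \(\hat{\mathbb{P}}_{\{x_{1},\ldots,x_{n}\}}(\xi_{\infty}(N)=n)\) by Monte Carlo and compares it with the product formula of Theorem 1.4. The code and its names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 8
start = (2, 4, 7)                     # x_1 < x_2 < x_3 in {1,...,N-1}
trials, hits = 10_000, 0

for _ in range(trials):
    pos = set(start)                  # current bulk positions of dual particles
    absorbed_right = 0
    while pos:
        b = rng.integers(N)           # pick bond {b, b+1}, b = 0,...,N-1
        if b == 0 and 1 in pos:
            pos.remove(1)             # absorption at 0
        elif b == N - 1 and N - 1 in pos:
            pos.remove(N - 1)         # absorption at N
            absorbed_right += 1
        elif 1 <= b <= N - 2:         # stirring: swap occupations of b and b+1
            if (b in pos) != (b + 1 in pos):
                pos ^= {b, b + 1}
    if absorbed_right == len(start):
        hits += 1

exact = np.prod([(x - i) / (N - i) for i, x in enumerate(start)])
print(hits / trials, exact)           # Monte Carlo vs. formula (1.4): close
```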
For each \(n\in[N-1]\) and \(x_{1}<\ldots<x_{n}\), we denote by \(\mathbb{P}_{\xi}^{stir}\) (respectively \(\mathbb{E}_{\xi}^{stir}\)) the probability (respectively the expectation) induced by the random arrows on the space of trajectories of the labelled stirring process started from \(\xi=\{x_{1},\ldots,x_{n}\}\). Notice that by the construction it follows that for all \(\Gamma\subset[N]\) and \(f:[N]_{0}^{\Gamma}\to\mathbb{R}\) \[\mathbb{E}_{[N]}^{stirr}[f((X_{t}^{x})_{x\in\Gamma})]=\mathbb{E}_{\Gamma}^{stirr }[f((X_{t}^{x})_{x\in\Gamma})]. \tag{2.3}\] Figure 2. Schematic description of the dual dynamics. Moreover, if \(f:[N]_{0}^{\Gamma}\to\mathbb{R}\) is symmetric (where symmetric means invariant by permutations of its entries), then \[\mathbb{E}_{\Gamma}^{stirr}[f((X_{t}^{x})_{x\in\Gamma})]=\hat{\mathbb{E}}_{\Gamma }[f(\xi_{t})]. \tag{2.4}\] ### Probabilistic interpretation of the non-equilibrium steady state and proof of Theorem 1.1 We introduce random variables \((U_{x})_{x\in[N-1]}\) on the same probability space of the stirring construction, with \(U_{x}\in\{\rho_{L},\rho_{R}\}\), which are not independent and such that their joint law is given by \[\mathbb{P}_{[N-1]}^{stirr}((U_{x})_{x\in[N-1]}=\bar{\rho}_{I})=\mathbb{E}_{[N-1 ]}^{stirr}\left[\prod_{x\in I}\mathbf{1}_{\{X_{\infty}^{x}=N\}}\prod_{x\in[N-1 ]\setminus I}\mathbf{1}_{\{X_{\infty}^{x}=0\}}\right]\] where, given \(I\subset[N-1]\), \(\bar{\rho}_{I}\) is the vector \((\bar{\rho}_{I}(x))_{x\in I}\) with \(\bar{\rho}_{I}(x)=\rho_{R}\) if \(x\in I\) and \(\bar{\rho}_{I}(x)=\rho_{L}\) otherwise. We start by showing the following result, which clarifies the probabilistic interpretation of the non-equilibrium steady state. **Theorem 2.2**.: _The stationary distribution of the boundary driven SEP on \(G\) with reservoirs parameters \(\rho_{L},\rho_{R},\omega_{L},\omega_{R}>0\) is_ \[\mu_{\rm stat}=\mathbb{E}_{[N-1]}^{stirr}\left[\otimes_{x\in[N-1]}\mathrm{Ber }(\mathrm{U_{x}})\right]. \tag{2.5}\] Proof.: The stationary measure \(\mu_{\rm stat}\) is completely characterized by the moments \[\mathbb{E}_{\mu_{\rm stat}}\left[\prod_{i=1}^{n}\eta(x_{i})\right],\;\forall n \in[N-1]\;\mathrm{and}\;\;1\leq x_{1}<\ldots<x_{n}\leq N-1.\] By duality, we have \[\mathbb{E}_{\mu_{\rm stat}}\left[\prod_{i=1}^{n}\eta(x_{i})\right] =\lim_{t\to\infty}\mathbb{E}_{\eta}\left[\prod_{i=1}^{n}\eta_{t}(x_{i})\right]\] \[=\lim_{t\to\infty}\hat{\mathbb{E}}_{\{x_{1},\ldots,x_{n}\}}[D( \eta,\xi_{t})]=\hat{\mathbb{E}}_{\{x_{1},\ldots,x_{n}\}}\left[\rho_{L}^{ \xi_{\infty}(0)}\rho_{R}^{\xi_{\infty}(N)}\right]\] \[=\sum_{\ell=0}^{n}\rho_{R}^{\ell}\rho_{L}^{n-\ell}\hat{\mathbb{P} }_{\{x_{1},\ldots,x_{n}\}}[\xi_{\infty}(N)=\ell]. \tag{2.6}\] Thus we only need to show that for all \(n\in[N-1]\) and \(1\leq x_{1}<\ldots<x_{n}\leq N-1\) \[\int\prod_{i=1}^{n}\eta(x_{i})\mathrm{d}\mathbb{E}_{[N-1]}^{stirr}\left[ \otimes_{x\in[N-1]}\mathrm{Ber}(\mathrm{U_{x}})\right]=\sum_{\ell=0}^{n}\rho_{R }^{\ell}\rho_{L}^{n-\ell}\hat{\mathbb{P}}_{\{x_{1},\ldots,x_{n}\}}[\xi_{\infty }(N)=\ell]. 
\tag{2.7}\] We then have, using (2.3), \[\int\prod_{i=1}^{n}\eta(x_{i})\mathrm{d}\mathbb{E}_{[N-1]}^{stirr }\left[\otimes_{x\in[N-1]}\mathrm{Ber}(\mathrm{U_{x}})\right]=\mathbb{E}_{\{x_ {1},\ldots,x_{n}\}}^{stirr}\left[\prod_{i=1}^{n}U_{x_{i}}\right]\] \[=\sum_{I\subset\{x_{1},\ldots,x_{n}\}}\rho_{R}^{|I|}\rho_{L}^{n-| I|}\mathbb{E}_{\{x_{1},\ldots,x_{n}\}}^{stirr}\;\left[\prod_{x\in I}\mathbf{1}_{ \{X_{\infty}^{x}=N\}}\prod_{x\in\{x_{1},\ldots,x_{n}\}\setminus I}\mathbf{1}_{ \{X_{\infty}^{x}=0\}}\right]\] \[=\sum_{\ell=0}^{n}\rho_{R}^{\ell}\rho_{L}^{n-\ell}\mathbb{E}_{\{x_{1}, \ldots,x_{n}\}}^{stirr}\ \left[\sum_{I\subset\{x_{1},\ldots,x_{n}\}\ :\ |I|=\ell}\left(\prod_{x\in I}\mathbf{1}_{\{X_{\infty}^{x}=N\}}\prod_{x\in\{x_{1 },\ldots,x_{n}\}\setminus I}\mathbf{1}_{\{X_{\infty}^{x}=0\}}\right)\right].\] Noticing that \[\sum_{I\subset\{x_{1},\ldots,x_{n}\}\ :\ |I|=\ell}\left(\prod_{x\in I}\mathbf{1}_ {\{X_{\infty}^{x}=N\}}\prod_{x\in\{x_{1},\ldots,x_{n}\}\setminus I}\mathbf{1} _{\{X_{\infty}^{x}=0\}}\right)=\mathbf{1}_{\{\sum_{\ell=1}^{n}\mathbf{1}_{\{X _{\infty}^{x}=N\}}=\ell\}}\] is a symmetric function and using (2.4), we have that \[\mathbb{E}_{\{x_{1},\ldots,x_{n}\}}^{stirr}\ \left[\sum_{I \subset\{x_{1},\ldots,x_{n}\}\ :\ |I|=\ell}\left(\prod_{x\in I}\mathbf{1}_{\{X_{\infty}^{x}=N\}}\prod_{x\in\{x_{ 1},\ldots,x_{n}\}\setminus I}\mathbf{1}_{\{X_{\infty}^{x}=0\}}\right)\right]\\ =\widehat{\mathbb{P}}_{\{x_{1},\ldots,x_{n}\}}\ \left(\xi_{\infty}(N)=\ell\right) \tag{2.8}\] from which (2.7) follows and in particular (2.5) also follows. We can finally obtain Theorem 1.1. Proof of Theorem 1.1.: For any \(I\subset[N-1]\) \[\mathbb{P}_{[N-1]}^{stirr}\left(\cup_{x\in I}\{U_{x}=\rho_{R} \}\cup_{y\in[N-1]\setminus I}\{U_{y}=\rho_{L}\}\right)\] \[=\mathbb{E}_{[N-1]}^{stirr}\left[\prod_{x\in I}\mathbf{1}_{\{X_{ \infty}^{x}=N\}}\prod_{x\in\{x_{1},\ldots,x_{n}\}\setminus I}\mathbf{1}_{\{X _{\infty}^{x}=0\}}\right]\] Figure 3. Schematic description of the proof via the stirring construction of the dual process. \[=\mathbb{E}_{[N-1]}^{stirr}\left[\prod_{x\in I}\mathbf{1}_{\{X_{\infty} ^{x}=N\}}\prod_{x\in[N-1]\setminus I}\big{(}1-\mathbf{1}_{\{X_{\infty}^{x}=N\}} \big{)}\right]\] \[=\sum_{J\subset[N-1]\setminus I}(-1)^{|J|}\mathbb{E}_{I\cup J}^{ stirr}\left[\prod_{x\in I\cup J}\mathbf{1}_{\{X_{\infty}^{x}=N\}}\right]\] where we used (2.3) in the last equality. Noticing that \(\prod_{x\in I\cup J}\mathbf{1}_{\{X_{\infty}^{x}=N\}}\) is a symmetric function, using 2.4, we conclude that \[\mathbb{P}_{[N-1]}^{stirr}\left(\cup_{x\in I}\{U_{x}=\rho_{R}\} \cup_{y\in[N-1]\setminus I}\{U_{y}=\rho_{L}\}\right)\\ =\sum_{J\subset[N-1]\setminus I}(-1)^{|J|}\hat{\mathbb{P}}_{I \cup J}\left(\xi_{\infty}(N)=|I\cup J|\right) \tag{2.9}\] from which we obtain \[\mathbb{E}^{stirr}\left[\otimes_{x\in[N-1]}\mathrm{Ber}(\mathrm{U}_{\mathrm{x }})\right]=\sum_{J\subset\mathcal{P}([N-1]):\ I\subset J}F(I)\bigg{(}\otimes_ {x\in I}\mathrm{Ber}(\rho_{\mathrm{R}})\otimes_{\mathrm{x}\in[\mathbb{N}-1] \setminus I}\mathrm{Ber}(\rho_{\mathrm{L}})\bigg{)},\] with \(F(I)\) given in (1.3) and thus the thesis of Theorem 1.1. ### Proofs of Proposition 1.5 and Proposition 1.6 We conclude the section by proving Proposition 1.5 and Proposition 1.6 which are also achieved by duality. 
Proof of Proposition 1.5.: Arguing as in the Proof of Theorem 1.1, we obtain \[\hat{\mathbb{P}}_{\{x_{1},\dots,x_{n}\}}(\xi_{\infty}(N)=\ell)\] \[=\sum_{I\subset\{x_{1},\dots,x_{n}\}\ :\ |I|=\ell} \mathbb{E}_{\{x_{1},\dots,x_{n}\}}^{stirr}\ \left[\prod_{x\in I}\mathbf{1}_{\{X_{\infty}^{x}=N\}} \prod_{x\in\{x_{1},\dots,x_{n}\}\setminus I}(1-\mathbf{1}_{\{X_{\infty}^{x}=N \}})\right]\] \[=\sum_{I\subset\{x_{1},\dots,x_{n}\}\ :\ |I|=\ell} \sum_{J\subset\{x_{1},\dots,x_{n}\}\setminus I}(-1)^{|J|}\hat{ \mathbb{P}}_{I\cup J}(\xi_{\infty}(N)=|I\cup J|).\] Exchanging the order of the above summations we get \[\hat{\mathbb{P}}_{\{x_{1},\dots,x_{n}\}}(\xi_{\infty}(N)=\ell)\] \[=\sum_{K\subset\{x_{1},\dots,x_{n}\}\ :\ |K|\geq\ell}\hat{\mathbb{P}}_{K}(\xi_{ \infty}(N)=|K|)\left(\sum_{I\subset K\ :\ |I|=\ell}(-1)^{|K|-|I|}\right)\] \[=\sum_{K\subset\{x_{1},\dots,x_{n}\}\ :\ |K|\geq\ell}(-1)^{|K|-\ell} \binom{|K|}{\ell}\hat{\mathbb{P}}_{K}(\xi_{\infty}(N)=|K|)\] concluding the proof. Proof of Proposition 1.6.: For what concerns the centered \(n\) point correlations, equations (1.6) and (1.7) follows by the combination of Theorem 1.4 above (which is proved in Section 3 and Section 4 below with two different approaches) with [13, Theorem 5.1]. For what concerns the non-centered \(n\) point correlations, recall from (2.6) that \[\mathbb{E}_{\mu_{\mathrm{stat}}}\left[\prod_{i=1}^{n}\eta(x_{i})\right]=\sum_{ \ell=0}^{n}\rho_{R}^{\ell}\rho_{L}^{n-\ell}\hat{\mathbb{P}}_{\{x_{1},\ldots,x _{n}\}}[\xi_{\infty}(N)=\ell].\] Employing Proposition 1.5 we obtain \[\mathbb{E}_{\mu_{\mathrm{stat}}}\left[\prod_{i=1}^{n}\eta(x_{i})\right]=\sum_ {\ell=0}^{n}\rho_{R}^{\ell}\rho_{L}^{n-\ell}\sum_{k=\ell}^{n}(-1)^{k-\ell} \binom{k}{\ell}\sum_{1\leq i_{1}<\ldots<i_{k}\leq n}\hat{\mathbb{P}}_{\{x_{i_{ 1}},\ldots,x_{i_{k}}\}}(\xi_{\infty}(N)=k)\] and changing the order of summations we have \[\mathbb{E}_{\mu_{\mathrm{stat}}}\left[\prod_{i=1}^{n}\eta(x_{i}) \right]=\sum_{k=0}^{n}\ \sum_{1\leq i_{1}<\ldots<i_{k}\leq n}\hat{\mathbb{P}}_{\{x_{i_{1}},\ldots,x_{ i_{k}}\}}(\xi_{\infty}(N)=k)\sum_{\ell=0}^{k}(-1)^{k-\ell}\binom{k}{\ell}\rho_{R}^{ \ell}\rho_{L}^{n-\ell}\] \[=\sum_{k=0}^{n}\ \sum_{1\leq i_{1}<\ldots<i_{k}\leq n}\hat{ \mathbb{P}}_{\{x_{i_{1}},\ldots,x_{i_{k}}\}}(\xi_{\infty}(N)=k)\rho_{L}^{n-k} \sum_{\ell=0}^{k}\binom{k}{\ell}\rho_{R}^{\ell}(-\rho_{L})^{k-\ell}\] \[=\sum_{k=0}^{n}\rho_{L}^{n-k}(\rho_{R}-\rho_{L})^{k}\sum_{1\leq i _{1}<\ldots<i_{k}\leq n}\hat{\mathbb{P}}_{\{x_{i_{1}},\ldots,x_{i_{k}}\}}(\xi_ {\infty}(N)=k)\] and employing Theorem 1.4 above we conclude the proof. ## 3. 2-point correlations and the absorption probabilities via induction In this section, our objective is to calculate the two-point stationary correlations for the boundary driven SSEP on a segment where all the conductances are set to one, except for the conductances that connect site \(1\) with the left reservoir and site \(N-1\) with the right reservoir. Thus we consider the \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) model and we allow the possibility to rescale the intensity of the conductances connected to reservoirs as done in hydrodynamic limits (see [19]). Because by duality, this amount to compute the absorption probabilities of two dual interacting particles, we start by computing such quantities in Subsection 3.1 below. Based on the resulting expression, we then make an educated guess for the absorption probability of an arbitrary number of particles, which we subsequently prove to be true by induction in Subsection 3.2 below. 
Additionally, in Section 4 below, we present an alternative approach to compute the absorption probabilities. This method relies on a probabilistic coupling technique that eliminates the need to make assumptions regarding the specific form of these probabilities. ### 2-point correlations via three martingales Consider the open SSEP on the homogeneous segment \(([N-1],E,\)\((\omega_{x,y})_{\{x,y\}\in E})\), with \(\{x,y\}\in E\) if and only if \(|x-y|=1\), \(\omega_{x,x+1}=1\) for each \(x\in\{1,\ldots,N-2\}\in E\), and set \(\omega_{L},\omega_{R}>0\). We prove the following expression for the two point centered correlation previously derived in [31] and [12] via other methods. **Proposition 3.1**.: _For \(x<y\)_ \[\mathrm{Cor}(\mathrm{x},\mathrm{y}):= \hat{\mathbb{E}}_{\mu_{\mathrm{stat}}}[\eta(x)\eta(y)]-\hat{ \mathbb{E}}_{\mu_{\mathrm{stat}}}[\eta(x)]\hat{\mathbb{E}}_{\mu_{\mathrm{stat} }}[\eta(y)]\] \[=-(\rho_{R}-\rho_{L})^{2}\frac{\left(\frac{1}{\omega_{L}}+x-1 \right)\left(\frac{1}{\omega_{R}}+N-1-y\right)}{\left(\frac{1}{\omega_{L}}+ \frac{1}{\omega_{R}}+N-2\right)^{2}\left(\frac{1}{\omega_{L}}+\frac{1}{\omega _{R}}+N-3\right)}.\] We achieve the above result by computing the absorption probabilities of two dual particles evolving on the extended segment \([N-1]\cup\{0,N\}\), where \(\{0,N\}\) are the two absorbing sites. In particular, in order to compute asymptotic absorption probabilities, it is enough to consider the skeleton chain of the dual process \((\xi_{t})_{t\geq 0}\) starting from \(\xi_{0}=\{x,y\}\). Denote by \((X_{n}^{x},X_{n}^{y})\) the position of two dual particles starting from \(x<y\), where swapping is not allowed, after \(n\) steps (i.e. \(n\) is the number of jumps). Notice that \(X_{n}^{x}=X_{n}^{y}\) if and only if \(X_{n}^{x}=X_{n}^{y}\in\{0,N\}\), i.e. both particles are eventually absorbed in the same site. Recall that in a segment weighted by symmetric conductances \(\omega_{x,x+1}\), harmonic functions at \(z\) are given by \[\sum_{i=0}^{z-1}\frac{1}{\omega_{i,i+1}}+\text{constant}.\] Thus, in our setting we set \[\begin{cases}h(0):=0\\ h(x):=\frac{1}{\omega_{L}}+x-1,\quad x\in\{1,\ldots,N-1\}\\ h(N):=\frac{1}{\omega_{L}}+\frac{1}{\omega_{R}}+N-2\end{cases} \tag{3.1}\] and the absorption probability on the point \(N\) of a particle starting from \(x\in[N]_{0}\) is then given by \[\hat{\mathbb{P}}_{\{x\}}[\xi_{\infty}(N)=1]=\frac{h(x)}{h(N)}.\] In order to compute the absorption probabilities we take advantage of three martingales with respect to the natural filtration generated by \(\{X_{n}^{x},X_{n}^{y}\}_{n\in\mathbb{N}}\) which are given in Lemma below. **Lemma 3.2**.: _Let \(x<y\) and define the following process_ \[D_{0}=0\] \[D_{n}:=\sum_{k=0}^{n-1}\mathbf{1}_{\{X_{k}^{y}-X_{k}^{x}=1\}} \mathbf{1}_{\{X_{k}^{y},X_{k}^{x}\notin\{0,N\}\}}\\ \times\left(\mathbf{1}_{\{X_{k}^{y},X_{k}^{x}\notin\{1,N-1\}\}}+ \frac{2}{\omega_{L}+1}\mathbf{1}_{\{X_{k}^{x}=1\}}+\frac{2}{\omega_{R}+1} \mathbf{1}_{\{X_{k}^{y}=N-1\}}\right). \tag{3.2}\] _Then the processes_ \[(h(X_{n}^{x})+h(X_{n}^{y}))_{n\geq 0}, \tag{3.3}\] \[(h(X_{n}^{y})-h(X_{n}^{x})-D_{n})_{n\geq 0} \tag{3.4}\] _and_ \[(h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n})_{n\geq 0} \tag{3.5}\] _are martingales with respect to the filtration \(\mathcal{F}_{n}:=\sigma\{X_{k}^{x},X_{k}^{y},k\in\{0,\ldots,n\}\}\)._ Proof.: We provide only the proof of the fact that \((h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n})_{n\geq 0}\) is a martingale since for the other two processes the conclusion follows by similar arguments. 
Denote by \(\hat{\mathbb{E}}_{(x,y)}\) the expectation of the skeleton chain \(\{X_{n}^{x},X_{n}^{y}\}_{n\in\mathbb{N}}\). We need to show that \[\hat{\mathbb{E}}_{(x,y)}\left[h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n}| \mathcal{F}_{n-1}\right]=h(X_{n-1}^{x})h(X_{n-1}^{y})+\frac{1}{2}D_{n-1}. \tag{3.6}\] Let us now write \[1 =(1-\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{0,N\}\}})\] \[+\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{0,N\}\}}\left( \mathbf{1}_{\{X_{n-1}^{y}-X_{n-1}^{x}\neq 1\}}\right.\] \[\left.+\frac{1}{\omega_{L}+1}\mathbf{1}_{\{X_{n-1}^{x}=1\}}+ \frac{1}{\omega_{R}+1}\mathbf{1}_{\{X_{n-1}^{y}=N-1\}}\right)\right). \tag{3.7}\] First notice that if at time \(n-1\) both particles are absorbed, then no extra jumps occur. If only one particle is absorbed at time \(n-1\) then \(D_{n-1}=D_{n}\), the particle not absorbed is an independent random walk and because \(h\) is harmonic, we conclude \[\hat{\mathbb{E}}_{(x,y)}\left[(1-\mathbf{1}_{\{X_{n-1}^{y},X_{n-1 }^{x}\notin\{0,N\}\}})(h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n})|\mathcal{F}_{ n-1}\right]\\ =(1-\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{0,N\}\}})(h(X_{n- 1}^{x})h(X_{n-1}^{y})+\frac{1}{2}D_{n-1}). \tag{3.8}\] Consider now \[\hat{\mathbb{E}}_{(x,y)}\left[\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{0, N\}\}}\mathbf{1}_{\{X_{n-1}^{y}-X_{n-1}^{x}\neq 1\}}(h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n })|\mathcal{F}_{n-1}\right].\] On the event \(\mathbf{1}_{\{X_{n-1}^{y}-X_{n-1}^{x}\neq 1\}}\) we have \(D_{n}=D_{n-1}\) and since particles are not nearest neighbor, they behave as independent random walks an thu \[\hat{\mathbb{E}}_{(x,y)}\left[\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x} \notin\{0,N\}\}}\mathbf{1}_{\{X_{n-1}^{y}-X_{n-1}^{x}=1\}}(h(X_{n}^{x})h(X_{n}^{ y})+\frac{1}{2}D_{n})|\mathcal{F}_{n-1}\right]\\ =\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{0,N\}\}}\mathbf{1}_ {\{X_{n-1}^{y}-X_{n-1}^{x}\neq 1\}}(h(X_{n-1}^{x})h(X_{n-1}^{y})+\frac{1}{2}D_{n-1}). 
\tag{3.9}\] Denote by \[A:=\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{0,N\}\}}\mathbf{1}_{\{X_{n-1}^ {y}-X_{n-1}^{x}=1\}}\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{1,N-1\}\}}\] and notice that in this case \[\hat{\mathbb{E}}_{(x,y)}\left[AD_{n}|\mathcal{F}_{n-1}\right]=A(D_{n-1}+1).\] Moreover \[\hat{\mathbb{E}}_{(x,y)}\left[Ah(X_{n}^{x})h(X_{n}^{y})|\mathcal{F }_{n-1}\right]=A\hat{\mathbb{E}}_{(X_{n-1}^{x},X_{n-1}^{y})}\left[h(X_{n}^{x}) h(X_{n}^{y})\right]\] \[=A\left(\frac{1}{2}h(X_{n-1}^{x}-1)h(X_{n-1}^{y})+\frac{1}{2}h(X_ {n-1}^{x})h(X_{n-1}^{y}+1)\right)\] \[=Ah(X_{n-1}^{x})h(X_{n-1}^{y})-\frac{1}{2}A(h(X_{n-1}^{y})-h(X_{n -1}^{x}))\] \[=Ah(X_{n-1}^{x})h(X_{n-1}^{y})-\frac{1}{2}A\] from which we obtain \[\hat{\mathbb{E}}_{(x,y)}\left[A(h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n})| \mathcal{F}_{n-1}\right]=A(h(X_{n-1}^{x})h(X_{n-1}^{y})+\frac{1}{2}D_{n-1}).\] Denote by \[B:=\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{0,N\}\}}\mathbf{1}_{\{X_{n-1} ^{y}-X_{n-1}^{x}=1\}}\mathbf{1}_{\{X_{n-1}^{x}=1\}}\] and notice that in this case \[\hat{\mathbb{E}}_{(x,y)}\left[BD_{n}|\mathcal{F}_{n-1}\right]=B(D_{n-1}+\frac{ 2}{\omega_{L}+1}).\] Moreover \[\hat{\mathbb{E}}_{(x,y)}\left[Bh(X_{n}^{x})h(X_{n}^{y})|\mathcal{F }_{n-1}\right]=B\hat{\mathbb{E}}_{(X_{n-1}^{x},X_{n-1}^{y})}\left[h(X_{n}^{x}) h(X_{n}^{y})\right]\] \[=B\left(\frac{\omega_{L}}{1+\omega_{L}}h(X_{n-1}^{x}-1)h(X_{n-1}^{ y})+\frac{1}{1+\omega_{L}}h(X_{n-1}^{x})h(X_{n-1}^{y}+1)\right)\] \[=0+B\frac{1}{1+\omega_{L}}\left(h(X_{n-1}^{x})h(X_{n-1}^{y})+h(X_ {n-1}^{x})\right)\] \[=Bh(X_{n-1}^{x})h(X_{n-1}^{y})-\frac{B}{1+\omega_{L}}(h(X_{n-1}^{ y})-h(X_{n-1}^{x}))\] \[=Bh(X_{n-1}^{x})h(X_{n-1}^{y})-\frac{B}{1+\omega_{L}}\] from which we obtain \[\hat{\mathbb{E}}_{(x,y)}\left[A(h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n})| \mathcal{F}_{n-1}\right]=A(h(X_{n-1}^{x})h(X_{n-1}^{y})+\frac{1}{2}D_{n-1}).\] Denote by \[B:=\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{0,N\}\}}\mathbf{1}_{\{X_{n-1} ^{y}-X_{n-1}^{x}=1\}}\mathbf{1}_{\{X_{n-1}^{y}=1\}}\] and notice that in this case \[\hat{\mathbb{E}}_{(x,y)}\left[BD_{n}|\mathcal{F}_{n-1}\right]=B(D_{n-1}+\frac{ 2}{\omega_{L}+1}).\] Moreover \[\hat{\mathbb{E}}_{(x,y)}\left[Bh(X_{n}^{x})h(X_{n}^{y})|\mathcal{F }_{n-1}\right]=B\hat{\mathbb{E}}_{(X_{n-1}^{x},X_{n-1}^{y})}\left[h(X_{n}^{x}) h(X_{n}^{y})\right]\] \[=B\left(\frac{\omega_{L}}{1+\omega_{L}}h(X_{n-1}^{x}-1)h(X_{n-1}^ {y})+\frac{1}{1+\omega_{L}}h(X_{n-1}^{x})h(X_{n-1}^{y}+1)\right)\] \[=0+B\frac{1}{1+\omega_{L}}\left(h(X_{n-1}^{x})h(X_{n-1}^{y})+h(X_ {n-1}^{x})\right)\] \[=Bh(X_{n-1}^{x})h(X_{n-1}^{y})-\frac{B}{1+\omega_{L}}(h(X_{n-1}^{ y})-h(X_{n-1}^{x}))\] \[=Bh(X_{n-1}^{x})h(X_{n-1}^{y})-\frac{B}{1+\omega_{L}}\] from which we obtain \[\hat{\mathbb{E}}_{(x,y)}\left[A(h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n})| \mathcal{F}_{n-1}\right]=A(h(X_{n-1}^{x})h(X_{n-1}^{y})+\frac{1}{2}D_{n-1}).\] Denote by \[B:=\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{0,N\}\}}\mathbf{1}_{\{X_{n-1}^{ y}-X_{n-1}^{x}=1\}}\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{y}\notin\{0,N\}\}}\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{y}\notin\{0,N\}\}}\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{y}\notin\{0,N\}\}}\] and notice that in this case \[\hat{\mathbb{E}}_{(x,y)}\left[BD_{n}|\mathcal{F}_{n-1}\right]=B(D_{n-1}+\frac{ 2}{\omega_{L}+1}).\] Moreover \[\hat{\mathbb{E}}_{(x,y)}\left[BD_{n}|\mathcal{F}_{n-1}\right]=B(D_{n-1}+\frac{ 2}{\omega_{L}+1}). 
\[\hat{\mathbb{E}}_{(x,y)}\left[B(h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n})| \mathcal{F}_{n-1}\right]=B(h(X_{n-1}^{x})h(X_{n-1}^{y})+\frac{1}{2}D_{n-1}).\] Denote by \[C:=\mathbf{1}_{\{X_{n-1}^{y},X_{n-1}^{x}\notin\{0,N\}\}}\mathbf{1}_{\{X_{n-1}^ {y}-X_{n-1}^{x}=1\}}\mathbf{1}_{\{X_{n-1}^{y}=N-1\}}\] and notice that in this case \[\hat{\mathbb{E}}_{(x,y)}\left[CD_{n}|\mathcal{F}_{n-1}\right]=C(D_{n-1}+\frac{ 2}{\omega_{R}+1}).\] Moreover \[\hat{\mathbb{E}}_{(x,y)}\left[Ch(X_{n}^{x})h(X_{n}^{y})|\mathcal{ F}_{n-1}\right]=C\hat{\mathbb{E}}_{(X_{n-1}^{x},X_{n-1}^{y})}\left[h(X_{n}^{x})h(X _{n}^{y})\right]\] \[=C\left(\frac{1}{1+\omega_{R}}h(X_{n-1}^{x}-1)h(X_{n-1}^{y})+ \frac{\omega_{R}}{1+\omega_{R}}h(X_{n-1}^{x})h(X_{n-1}^{y}+1)\right)\] \[=\frac{C}{1+\omega_{R}}\left[\left(h(X_{n-1}^{x})h(X_{n-1}^{y})-h (X_{n-1}^{y})\right)+\omega_{R}\left(h(X_{n-1}^{x})h(X_{n-1}^{y})+\frac{1}{ \omega_{R}}h(X_{n-1}^{x})\right)\right]\] \[=Ch(X_{n-1}^{x})h(X_{n-1}^{y})-\frac{C}{1+\omega_{R}}(h(X_{n-1}^{ y})-h(X_{n-1}^{x}))\] \[=Ch(X_{n-1}^{x})h(X_{n-1}^{y})-\frac{C}{1+\omega_{R}}\] from which we obtain \[\hat{\mathbb{E}}_{(x,y)}\left[C(h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n})| \mathcal{F}_{n-1}\right]=C(h(X_{n-1}^{x})h(X_{n-1}^{y})+\frac{1}{2}D_{n-1}).\] Putting everything together and recalling (3.7) we obtain (3.6), and thus that the process \((h(X_{n}^{x})h(X_{n}^{y})+\frac{1}{2}D_{n})_{n\geq 0}\) is a martingale. We can thus prove Proposition 3.1. Proof of Proposition 3.1.: By employing the martingales of Lemma 3.2 above and sending \(n\) to infinity we obtain (by Doob's optional sampling theorem), for \(x<y\), \[\begin{cases}h(x)+h(y)=2h(N)\hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=2]+h(N) \hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=1]\\ h(y)-h(x)=h(N)\hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=1]-\hat{\mathbb{E}}_{ (x,y)}[D_{\infty}]\\ h(x)h(y)=h(N)^{2}\hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=1]+\frac{1}{2}\hat {\mathbb{E}}_{(x,y)}[D_{\infty}]\end{cases} \tag{3.10}\] and thus \[\hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=2]=\frac{h(x)(h(y)-1)}{h(N)(h(N)-1 )}, \tag{3.11}\] \[\hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=1]=\frac{h(x)+h(y)}{h(N)}-2\frac{h( x)(h(y)-1)}{h(N)(h(N)-1)} \tag{3.12}\] and \[\hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=0] =1-\hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=2]-\hat{\mathbb{P}}_{ \{x,y\}}[\xi_{\infty}(N)=1]\] \[=1-\frac{h(x)+h(y)}{h(N)}+\frac{h(x)(h(y)-1)}{h(N)(h(N)-1)}. \tag{3.13}\] Writing \[\hat{\mathbb{E}}_{\mu_{\rm stat}}[\eta(x)\eta(y)]=\rho_{L}^{2} \hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=0]\\ +\rho_{L}\rho_{R}\hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=1]+ \rho_{R}^{2}\hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=2]\] and \[\hat{\mathbb{E}}_{\mu_{\rm stat}}[\eta(x)]\hat{\mathbb{E}}_{\mu_{ \rm stat}}[\eta(y)]\\ =\rho_{R}^{2}\hat{\mathbb{P}}_{\{x\}}[\xi_{\infty}(N)=1]\hat{ \mathbb{P}}_{\{y\}}[\xi_{\infty}(N)=1]\\ +\rho_{R}\rho_{L}(\hat{\mathbb{P}}_{\{x\}}[\xi_{\infty}(N)=0] \hat{\mathbb{P}}_{\{y\}}[\xi_{\infty}(N)=1]+\hat{\mathbb{P}}_{\{x\}}[\xi_{ \infty}(N)=1]\hat{\mathbb{P}}_{\{x\}}[\xi_{\infty}(N)=0])\\ +\rho_{L}^{2}\hat{\mathbb{P}}_{\{x\}}[\xi_{\infty}(N)=0]\hat{ \mathbb{P}}_{\{x\}}[\xi_{\infty}(N)=0]\] and recalling that \(\hat{\mathbb{P}}_{\{x\}}[\xi_{\infty}(N)=1]=\frac{h(x)}{h(N)}\), we finally obtain, using (3.11), (3.12) and (3.13), the desired identity. 
### Absorption probabilities via induction In the proof above we showed that for \(x<y\), \[\hat{\mathbb{P}}_{\{x,y\}}[\xi_{\infty}(N)=2] =\frac{h(x)(h(y)-1)}{h(N)(h(N)-1)} \tag{3.14}\] \[=\frac{\frac{1}{\omega_{L}}+x-1}{\frac{1}{\omega_{L}}+\frac{1}{ \omega_{R}}+(N-1)-1}\frac{\frac{1}{\omega_{L}}+y-2}{\frac{1}{\omega_{L}}+\frac {1}{\omega_{R}}+(N-1)-2}. \tag{3.15}\] From the formula above we can make a guess for the absorption probability that starting with \(k\) particles all of them are absorbed at \(N\) and then prove that our guess is correct by induction. **Lemma 3.3**.: _Recall the function \(h\) given in (3.1). Then, given \(1\leq x_{1}<\ldots<x_{k}\leq N-1\),_ \[\hat{\mathbb{P}}_{\{x_{1},\ldots,x_{k}\}}[\xi_{\infty}(N)=k]=\prod_{i=1}^{k} \frac{h(x_{i})-(i-1)}{h(N)-(i-1)}. \tag{3.16}\] Proof.: First notice that the formula matches the boundary conditions of the absorption probability. Second, assume that the formula holds for \(k-1\) and let us show that \[\hat{\mathcal{L}}\left(\prod_{i=1}^{k}\frac{h(x_{i})-(i-1)}{h(N)-(i-1)}\right) =0.\] Define \[g_{k}(x_{1},\ldots,x_{k}):=\prod_{i=1}^{k}\frac{h(x_{i})-(i-1)}{h(N)-(i-1)}.\] If \(x_{k}>x_{k-1}+1\) then we can apply the dual generator in the following way \[\hat{\mathcal{L}}g_{k}(x_{1},\ldots,x_{k})\] \[=\left(\hat{\mathcal{L}}g_{k-1}(x_{1},\ldots,x_{k-1})\right)\frac{h (x_{k})-(k-1)}{h(N)-(k-1)}\] \[+g_{k-1}(x_{1},\ldots,x_{k-1})\hat{\mathcal{L}}\left(\frac{h(x_{k })-(k-1)}{h(N)-(k-1)}\right)=0\] where the last equality follows by the induction assumption and the fact that \(h\) is harmonic for \(\hat{\mathcal{L}}\). If \(x_{k}=x_{k-1}+1\) and \(x_{k+1}\neq N-1\) then we can apply the generator in the following way \[\hat{\mathcal{L}}\left(\prod_{i=1}^{k}\frac{h(x_{i})-(i-1)}{h(N)- (i-1)}\right)\] \[=\left(\hat{\mathcal{L}}g_{k-1}(x_{1},\ldots,x_{k-1})\right)\frac {h(x_{k})-(k-1)}{h(N)-(k-1)}\] \[-g_{k-2}(x_{1},\ldots,x_{k-2})\left(\frac{h(x_{k-1}+1)-(k-2)}{h( N)-(k-2)}-\frac{h(x_{k-1})-(k-2)}{h(N)-(k-2)}\right)\frac{h(x_{k})-(k-1)}{h(N)- (k-1)}\] \[+g_{k-1}(x_{1},\ldots,x_{k-1})\left(\frac{h(x_{k}+1)-(k-1)}{h(N)- (k-1)}-\frac{h(x_{k})-(k-1)}{h(N)-(k-1)}\right)=0\] where the first term on the right hand side is zero by induction and the second term, which appears because in the first term of the right hand side we applied the generator as if the \(k+1\) particle was not present in the system, cancel the third term. Similarly, if \(x_{k}=x_{k-1}+1\) and \(x_{k+1}=N-1\) the difference is that the third term on the right hand side of the computation above is multiplied by \(\omega_{R}\) which however cancel with \(h(x_{k}+1)-h(x_{k})=\frac{1}{\omega_{R}}\) and thus again \(\hat{\mathcal{L}}\left(\prod_{i=1}^{k}\frac{h(x_{i})-(i-1)}{h(N)-(i-1)} \right)=0\) concluding the proof. ## 4. n-point correlations via the Ninja particle method The matrix ansatz method of [10] allow to derive a recursive relation for the correlations of the boundary driven SSEP (see [11, (A.7)]). As pointed out in [7, Section 7.1], from such relation satisfied by the correlations, an analogous relation for the absorption probabilities in the dual system follows. However, so far a probabilistic approach alternative to the matrix ansatz method has not been proposed. 
In this section we provide a probabilistic route to compute the absorption probabilities of the dual process by building a coupling between two dual boundary driven SSEP, one evolving on \([N]_{0}\) and the other one on \([N-1]_{0}\) from which we deduce a recursive relation for the absorption probabilities in the two systems. In the following we consider the homogeneous segment, i.e. \(\{x,y\}\in E\) if and only if \(|x-y|=1\), \(\omega_{x,x+1}=1\) for each \(x\in\{1,\ldots,N-2\}\in E\) and we put \(\omega_{L}=\omega_{R}=1\) as well. We denote by \((\xi_{t}^{[M]})_{t\geq 0}\) the dual boundary driven SSEP on the homogeneous segment \([M]_{0}\) where \(\{0,M\}\) are absorbing sites and by \(\hat{\mathbb{P}}_{\{x_{1},\ldots,x_{k}\}}^{[M]_{0}}\) the law of \((\xi_{t}^{[M]})_{t\geq 0}\) starting from \(\sum_{i=1}^{k}\delta_{x_{i}}\). Moreover we denote by \(\hat{\mathcal{L}}^{[M]_{0}}\) the generator of \((\xi_{t}^{[M]})_{t\geq 0}\). The main result of the section is the following theorem. **Theorem 4.1**.: _For all \(0\leq x_{1}<\ldots<x_{k+1}\leq N\) the following relation holds_ \[\hat{\mathbb{P}}^{[N]_{0}}_{\{x_{1},\ldots,x_{k+1}\}}(\xi^{[N]_{0}}_{\infty}(N)= k+1)=\left(\frac{x_{k+1}-k}{N}\right)\ \hat{\mathbb{P}}^{[N-1]_{0}}_{\{x_{1},\ldots,x_{k}\}}(\xi^{[N-1]_{0}}_{\infty}(N )=k). \tag{4.1}\] Recalling that \(\hat{\mathbb{P}}^{[N]_{0}}_{\{x\}}(\xi^{[N]_{0}}_{\infty}(N)=1)=\frac{x}{N}\), as a direct consequence of the above theorem, we obtain the explicit absorption probabilities. **Corollary 4.2**.: _Let \(0<x_{1}<\ldots<x_{n}<N\), then_ \[\hat{\mathbb{P}}^{[N]_{0}}_{\{x_{1},\ldots,x_{k}\}}(\xi^{[N]_{0}}_{\infty}(N) =k)=\prod_{i=1}^{k}\frac{x_{i}-(i-1)}{N-(i-1)}. \tag{4.2}\] In the next section we present the coupling we mentioned above. This technique relies on the introduction of a special particle, denoted by _Ninja_, which has some special features reminiscent of the behavior of second class particles (see, e.g., [24, pag. 218]) and will be responsible for the coupling of the two processes on different graphs. ### The _Ninja_-process: a game of labels Consider the homogeneous segment \([N]_{0}=[N-1]\cup\{0,N\}\) where \(\{0,N\}\) are absorbing sites. We place \(k+1\) particles initially at distinct positions \(x_{1},\ldots,x_{k+1}\in[N]_{0}\) with \[1\leq x_{1}<\ldots<x_{k}\leq N-1\] and \[x_{k+1}\notin\{x_{1},\ldots,x_{k}\}\] and we label by \((i)\) the particle starting from \(x_{i}\) for all \(i\in\{1,\ldots,k\}\) and by _Ninja_, the particle starting at \(x_{k+1}\) (see Figure 4). We do not allow the following two cases: \[\exists i\in\{1,\ldots,k\}\ s.t.\ x_{i}=N-1\ \text{and}\ x_{k+1}=N\] and \[\exists i\in\{1,\ldots,k\}\ s.t.\ x_{i}=1\ \text{and}\ x_{k+1}=0.\] For our purposes we will be interested in the case \(x_{k+1}>x_{i}\) for all \(i\in\{1,\ldots,k\}\), but the process that we are going to define does not require such condition. We are now going to describe in words the dynamics of the process \[(X^{(1)}_{t},\ldots,X^{(k)}_{t},\textit{Ninja}_{t})_{t\geq 0}\] that we call _Ninja_-process with law denoted by \(\mathbb{P}^{\textit{Ninja}}_{x_{1},\ldots,x_{k+1}}\). The particles will move in such a way that the configuration process obtained by the labelled particles evolve exactly as the usual dual of the boundary driven SSEP on \([N]_{0}\) but we play with the labels of the particles and more precisely with the _Ninja_ label in order to perform the coupling. 
More precisely: **Case 1**: (The _Ninja_ is alone) When no particle is in a location nearest neighbor to the _Ninja_ and the _Ninja_\(\notin\{0,N\}\) each particle, including the _Ninja_, behaves as simple symmetric random walks jumping at rate one, subject to the exclusion rule and with \(\{0,N\}\) acting as absorbing sites. In this case, the _Ninja_ does not have nearest neighbor particles and it jumps at rate one either to its left or to its right. **Case 2**: (The _Ninja_ interacts) Consider now the case in which the _Ninja_ particle is not at \(\{0,N\}\) and the \(i\)-th particle, with \(i\in\{1,\ldots,k\}\), is nearest neighbor of the _Ninja_ and not in \(\{0,N\}\). Then all the other particles that are not nearest neighbor of the _Ninja_ behave as simple symmetric random walks jumping at rate one, subject to the exclusion rule and with \(\{0,N\}\) acting as absorbing sites. On the other hand the couple \((x_{i},\mbox{{Ninja}})\) jumps at rate one to \[(x_{i}+2(\mbox{{Ninja}}-x_{i}),x_{i})\] if the site \(x_{i}+2(\mbox{{Ninja}}-x_{i})\) is empty and at rate one to \[(x_{i}-(\mbox{{Ninja}}-x_{i}),\mbox{{Ninja}})\] if \(x_{i}-(\mbox{{Ninja}}-x_{i})\) is empty (see Figures 5 and 6). Similarly if there exists another index \(j\neq i\) such that \(|x_{j}-\mbox{{Ninja}}|=1\) and \(x_{j}\notin\{0,N\}\), then the couple \((x_{j},\mbox{{Ninja}})\) behaves analogously to the couple \((x_{i},\mbox{{Ninja}})\). Figure 4. Figure 5. **Case 3**: (The _Ninja_ returns) In the Ninja particle system, it is not allowed the situation in which the _Ninja_ is at site \(x\in\{N,0\}\) and one of the other particles is at \(|x-1|\). If the _Ninja_ is at \(x\in\{N,0\}\), it can escape from the absorbing site \(x\in\{N,0\}\) when one of the other particles tries to jump at \(|x-1|\). Indeed suppose that \(x_{i}=|x-2|\), then all other particles behave as simple symmetric random walks jumping at rate one, subject to the exclusion rule and with \(\{0,N\}\) acting as absorbing sites, while the couple \((x_{i},\textit{Ninja})=(|x-2|,x)\) jumps at rate \(1\) to \[\left(|x-2|-\frac{x-|x-2|}{2},x\right)\] if \(|x-2|-\frac{x-|x-2|}{2}\) is empty and at rate \(1\) to \[\left(x,x-\frac{x-|x-2|}{2}\right)\] (see Figure 7). We now describe the dynamics of this particle system that we call _Ninja_-process via its generator \[\hat{\mathcal{L}}^{\textit{Ninja}}=\hat{\mathcal{L}}^{\textit{Ninja}}_{1}+ \hat{\mathcal{L}}^{\textit{Ninja}}_{2}+\hat{\mathcal{L}}^{\textit{Ninja}}_{3}\] acting on functions \(f:[N]^{k+1}\rightarrow\mathbb{R}\). \(\hat{\mathcal{L}}^{\textit{Ninja}}_{1}\) describes the dynamics in **Case 1** and it is given by \[\hat{\mathcal{L}}^{\textit{Ninja}}_{1}f(x_{1},\ldots,x_{k+1})= \left(\prod_{i=1}^{k}\mathbf{1}_{\{|x_{i}-x_{k+1}|\neq 1\}}\right) \mathbf{1}_{\{x_{k+1}\notin\{0,N\}\}}\] \[\times\left(\sum_{i=1}^{k+1}\mathbf{1}_{\{x_{i}\notin\{0,N\}\}} \left(1-\sum_{j=1}^{k+1}\mathbf{1}_{\{x_{j}=x_{i}\pm 1\}}\right)\right)\] \[\qquad\times(f(x_{1},\ldots,x_{i-1},x_{i}\pm 1,x_{i+1},\ldots,x_{k+ 1})-f(x_{1},\ldots,x_{k+1})))\,.\] Figure 6. 
\(\hat{\mathcal{L}}_{2}^{\text{Ninja}}\) describes the dynamics in **Case 2** (see Figure 5 and 6) and it is given by \[\hat{\mathcal{L}}_{2}^{\text{Ninja}}f(x_{1},\ldots,x_{k+1})=\left(1 -\prod_{i=1}^{k}\mathbf{1}_{\{|x_{i}-x_{k+1}|\neq 1\}}\right)\mathbf{1}_{\{x_{k+1} \notin\{0,N\}\}}\] \[\times\left(\sum_{i=1}^{k}\mathbf{1}_{\{x_{i}\notin\{0,N\}\}} \left[\left(1-\sum_{j=1}^{k+1}\mathbf{1}_{\{x_{j}=x_{i}\pm 1\}}\right) \right)\right.\] \[\qquad\left.\times(f(x_{1},\ldots,x_{i-1},x_{i}\pm 1,x_{i+1}, \ldots,x_{k+1})-f(x_{1},\ldots,x_{k+1}))\right.\] \[\left.+\mathbf{1}_{\{|x_{i}-x_{k+1}|=1\}}(1-\sum_{j=1}^{k+1} \mathbf{1}_{\{x_{i}+2(x_{k+1}-x_{i})=x_{j}\}})\right.\] \[\left.\times(f(x_{1},\ldots,x_{i-1},x_{i}+2(x_{k+1}-x_{i}),x_{i+1 },\ldots,x_{i})-f(x_{1},\ldots,x_{k+1}))\right].\] \(\hat{\mathcal{L}}_{3}^{\text{Ninja}}\) describes the dynamics in **Case 3** (see Figure 7) and it is given by \[\hat{\mathcal{L}}_{3}^{\text{Ninja}}f(x_{1},\ldots,x_{k})=\mathbf{ 1}_{\{x_{k+1}\in\{0,N\}\}}\] \[\times\left(\sum_{i=1}^{k}\mathbf{1}_{\{x_{i}\notin\{0,N\}\}} \left[\mathbf{1}_{\{|x_{i}-x_{k+1}|\neq 2\}}\left(1-\sum_{j=1}^{k}\mathbf{1}_{\{x_{j}=x _{i}\pm 1\}}\right)\right)\right.\] \[\qquad\left.\times(f(x_{1},\ldots,x_{i-1},x_{i}\pm 1,x_{i+1}, \ldots,x_{k+1})-f(x_{1},\ldots,x_{k+1}))\right.\] \[\left.+\mathbf{1}_{\{|x_{i}-x_{k+1}|=2\}}(1-\sum_{j=1}^{k+1} \mathbf{1}_{\{x_{i}-(x_{k+1}-x_{i})/2=x_{j}\}})\right.\] \[\left.\times(f(x_{1},\ldots,x_{i-1},x_{i}-(x_{k+1}-x_{i})/2,x_{i+ 1},\ldots,x_{k+1})-f(x_{1},\ldots,x_{k+1}))\right.\] Figure 7. \[\left(1-\prod_{i=1}^{k}\mathbf{1}_{\{|x_{i}-x_{k+1}|\neq 1\}}\right)\mathbf{1}_{\{x _{k+1}\notin\{0,N\}\}}=1,\] and \[\mathbf{1}_{\{|x_{i}-x_{k+1}|=1\}}(1-\sum_{j=1}^{k+1}\mathbf{1}_{\{x_{i}+2(x_{k +1}-x_{i})=x_{j}\}})=1,\] with, e.g. \(x_{k+1}=x_{i}+1\), i.e. we are in **Case 2** (see Figure 5 and 6), because the transition \[(x_{1},\ldots,x_{k+1})\rightarrow(x_{1},\ldots,x_{i-1},x_{i}+2(x_{k+1}-x_{i}),x_{i+1},\ldots,x_{i})\] corresponds to the configuration transition \[\sum_{j=1}^{k+1}\delta_{x_{j}}\to\sum_{j=1}^{k+1}\delta_{x_{j}}-\delta_{x_{k+1}}+ \delta_{x_{k+1}+1}\] then \[\hat{\mathcal{L}}_{2}^{\text{\it Ninja}}G(x_{1},\dots,x_{k+1})=\hat{\mathcal{L}} ^{[N]_{0}}G\left(\sum_{i=1}^{k+1}\delta_{x_{i}}\right).\] If \((x_{1},\dots,x_{k+1})\) is such that \[\mathbf{1}_{\{x_{k+1}\in\{0,N\}\}}\mathbf{1}_{\{x_{i}\notin\{0,N\}\}}\mathbf{1 }_{\{|x_{i}-x_{k+1}|=2\}},\] with, e.g., \(x_{k+1}=N\) and thus \(x_{i}=N-2\), i.e. we are in **Case 3** (see Figure 7), because the transition \[(x_{1},\dots,x_{k+1})\to(x_{1},\dots,x_{i-1},x_{k+1},x_{i+1},\dots,x_{i}+(x_{k +1}-x_{i})/2)\] corresponds to the configuration transition \[\sum_{j=1}^{k+1}\delta_{x_{j}}\to\sum_{j=1}^{k+1}\delta_{x_{j}}-\delta_{N-2}+ \delta_{N-1}\] then \[\hat{\mathcal{L}}_{3}^{\text{\it Ninja}}G(x_{1},\dots,x_{k+1})=\hat{\mathcal{L }}^{[N]_{0}}G\left(\sum_{i=1}^{k+1}\delta_{x_{i}}\right).\] Because for all \(\{x_{1},\dots,x_{k+1}\}\in[N]_{0}\) such that \(\sum_{i=1}^{k+1}\delta_{x_{i}}(x)\in\{0,1\}\) for all \(x\in[N-1]\) \[\left(\prod_{i=1}^{k}\mathbf{1}_{\{|x_{i}-x_{k+1}|\neq 1\}} \right)\mathbf{1}_{\{x_{k+1}\notin\{0,N\}\}}\\ +\left(1-\prod_{i=1}^{k}\mathbf{1}_{\{|x_{i}-x_{k+1}|\neq 1\}} \right)\mathbf{1}_{\{x_{k+1}\notin\{0,N\}\}}+\mathbf{1}_{\{x_{k+1}\in\{0,N\} \}}=1 \tag{4.3}\] the proof is concluded. 
The second one, provides a map from the _Ninja_-process to the dual of the boundary driven SSEP evolving on the reduced graph \([N-1]_{0}\) with \(\{0,N-1\}\) acting as absorbing sites, namely to \((\xi_{t}^{[N-1]_{0}})_{t\geq 0}\). **Proposition 4.4**.: _For \(x,y\in[N]_{0}\) set_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.4}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.5}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.6}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.7}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.8}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.9}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.10}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.11}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.12}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.13}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.14}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.15}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.16}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.17}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.18}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ 
x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.19}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.20}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and define_ \[\pi(x,y):=\begin{cases}N-1&\text{if }\,y=x=N\\ x-\mathbf{1}_{\{y<n\}}&\text{otherwise.}\end{cases} \tag{4.21}\] _Let \((X_{t}^{(1)},\dots,X_{t}^{(k)},\text{N \[\mathcal{N}_{t}^{[N-1]_{0}}:=\sum_{i=1}^{k}\delta_{\pi(X_{t}^{(i)},\textit{Ninj}_{ \boldsymbol{a}_{t}})}.\] _Then, if \(\xi_{0}^{[N-1]_{0}}=\sum_{i=1}^{k}\delta_{\pi(X_{0}^{(i)},\textit{Ninj}_{ \boldsymbol{a}_{0}})}\),_ \[(\mathcal{N}_{t}^{[N-1]_{0}})_{t\geq 0}=(\xi_{t}^{[N-1]_{0}})_{t\geq 0}\] _in distribution._ Proof.: Similarly to the proof of Proposition 4.3, the result follows from the fact the for all \(G:[N]_{0}^{k}\to\mathbb{R}\) permutation invariant, i.e. \[G(x_{1},\ldots,x_{k})=G\left(\sum_{i=1}^{k}\delta_{x_{i}}\right),\] we have, for all \(x_{1},\ldots,x_{k}\) \[\hat{\mathcal{L}}^{\textit{Ninj}_{\boldsymbol{a}}}G(\pi(x_{1},x_{k+1}),\ldots,\pi(x_{k},x_{k+1}))=\hat{\mathcal{L}}^{[N]_{0}}G\left(\sum_{i=1}^{k}\delta_{ x_{i}}\right)\] for all \(x_{k+1}\in[N]_{0}\) such that, if \(x_{k+1}\notin\{0,N\}\), \(\sum_{i=1}^{k}\delta_{x_{i}}(x_{k+1})=0\). We do not provide all the details and we just point out that in **Case 2** (see Figure 5 and 6) if \(x_{k+1}=x_{i}+1\) the transition \[(x_{1},\ldots,x_{k+1})\to(x_{1},\ldots,x_{i-1},x_{i}+2(x_{k+1}-x_{i}),x_{i+1},\ldots,x_{i})\] in the _Ninja_-process, if allowed, corresponds to the transition \[\sum_{i=1}^{k}\delta_{\pi(x_{i},x_{k+1})}\to\sum_{i=1}^{k}\delta_{\pi(x_{i},x_ {k+1})}-\delta_{x_{i}}+\delta_{x_{i}+1}\] in the \((\mathcal{N}_{t}^{[N-1]_{0}})_{t\geq 0}\) process. In **Case 3** (see Figure 7) if \(x_{k}+1=N\) and \(x_{i}=N-2\), i.e. we are in **Case 3**, \[(x_{1},\ldots,x_{k+1})\to(x_{1},\ldots,x_{i-1},x_{k+1},x_{i+1},\ldots,x_{i}+(x _{k+1}-x_{i})/2)\] in the _Ninja_-process, corresponds to the transition \[\sum_{i=1}^{k}\delta_{\pi(x_{i},x_{k+1})}\to\sum_{i=1}^{k}\delta_{\pi(x_{i},x _{k+1})}-\delta_{N-2}+\delta_{N-1}\] in the \((\mathcal{N}_{t}^{[N-1]_{0}})_{t\geq 0}\) process and the particle at \(N-1\) will stay there forever. Both the above transitions are usual transitions in the dual BD-SSEP \((\xi_{t}^{[N-1]_{0}})_{t\geq 0}\). We conclude this section by collecting in the corollary below some direct consequences of Propositions 4.3 and 4.4 which will be used in the proof of Theorem 4.1. The result below follows from the observation that the function \[f(x_{1},\ldots,x_{k}):=\boldsymbol{1}_{\{\left(\sum_{i=1}^{k}\boldsymbol{1}_{ \{x_{i}=N\}}\right)=k\}}\] is permutation invariant. **Corollary 4.5**.: 1. _For all_ \(x_{1},\ldots,x_{k+1}\in[N]_{0}\) _with_ \[1\leq x_{1}<\ldots<x_{k}\leq N\] _and_ \[x_{k+1}\notin\{x_{1},\ldots,x_{k}\}\] \[\mathbb{P}^{\text{Ninja}}_{x_{1},\ldots,x_{k},x_{k+1}}(\{X^{(1)}_{ \infty}=N,\ldots,X^{(k)}_{\infty}=N,\text{Ninja}_{\infty}=N\})\\ =\hat{\mathbb{P}}^{[N]_{0}}_{\{x_{1},\ldots,x_{k+1}\}}(\xi^{[N]_{ 0}}_{\infty}(N)=k+1).\] (4.5) 2. 
_For each_ \(1\leq x_{1}<\ldots<x_{k}\leq N-1\)__ \[\mathbb{P}^{\text{Ninja}}_{x_{1},\ldots,x_{k},x_{k+1}}(\{X^{(1)}_{ \infty}=N,\ldots,X^{(k)}_{\infty}=N\})\\ =\mathbb{P}^{\text{Ninja}}_{x_{1},\ldots,x_{k},y_{k+1}}(\{X^{(1) }_{\infty}=N,\ldots,X^{(k)}_{\infty}=N\})=\hat{\mathbb{P}}^{[N-1]_{0}}_{\{x_{ 1},\ldots,x_{k}\}}(\xi^{[N-1]_{0}}_{\infty}(N-1)=k).\] (4.6) _for all_ \(x_{k+1},y_{k+1}\in[N]_{0}\setminus\{x_{1},\ldots,x_{k}\}\)_._ ### Proof of Theorem 4.1 We now have almost all the elements to prove Theorem 4.1. Indeed by Corollary 4.5 we have that, for all \(x_{k+1}\notin\{x_{1},\ldots,x_{k}\}\), \[\hat{\mathbb{P}}^{[N-1]_{0}}_{\{x_{1},\ldots,x_{k}\}}(\xi^{[N-1]_ {0}}_{\infty}(N-1)=k)=\\ =\mathbb{P}^{\text{Ninja}}_{x_{1},\ldots,x_{k},x_{k+1}}(\{X^{(1) }_{\infty}=N,\ldots,X^{(k)}_{\infty}=N,\text{Ninja}_{\infty}=N\})\\ +\ \mathbb{P}^{\text{Ninja}}_{x_{1},\ldots,x_{k},x_{k+1}}(\{X^{(1) }_{\infty}=N,\ldots,X^{(k)}_{\infty}=N,\text{Ninja}_{\infty}=0\})\] and denoting by \(E\) be the event that all the \(k\) particles are absorbed at \(N\), i.e. \[E:=\{X^{x_{i}}_{\infty}=N\;\forall i\in\{1,\ldots,k\}\} \tag{4.7}\] we obtain \[\hat{\mathbb{P}}^{[N-1]_{0}}_{\{x_{1},\ldots,x_{k}\}}(\xi^{[N-1] _{0}}_{\infty}(N-1)=k)=\hat{\mathbb{P}}^{[N]_{0}}_{\{x_{1},\ldots,x_{k+1}\}}( \xi^{[N]_{0}}_{\infty}(N)=k+1)\\ +\mathbb{P}^{\text{Ninja}}_{x_{1},\ldots,x_{k},x_{k+1}}(\text{ Ninja}_{\infty}=0|E)\mathbb{P}^{[N-1]_{0}}_{\{x_{1},\ldots,x_{k}\}}(\xi^{[N-1]_{0}}_ {\infty}(N)=k).\] The conclusion of the proof follows from the next proposition. **Proposition 4.6**.: _Let \((X^{(1)}_{t},\ldots,X^{(k)}_{t},\text{Ninja}_{t})_{t\geq 0}\) be the Ninja-process and recall the event \(E\) given in (4.7). Then, for all \(x_{k+1}\notin\{x_{1},\ldots,x_{k}\}\),_ \[\mathbb{P}^{\text{Ninja}}_{x_{1},\ldots,x_{k},x_{k+1}}(\text{ Ninja}_{\infty}=0|E)=1-\frac{x_{k+1}-k}{N}. \tag{4.8}\] Proof.: We consider the skeleton chain \[(X_{n}^{(1)},\ldots,X_{n}^{(k)},\mathit{Ninja}_{n})_{n\in\mathbb{N}}\] since we are interested in computing absorption probabilities only and we define by \[(\bar{X}_{n}^{(1)},\ldots,\bar{X}_{n}^{(k)},\overline{\mathit{Ninja}}_{n})_{n\in \mathbb{N}}\] the Ninja process conditioned on the event \(E\). In order to compute the conditional probability on the left hand side of (4.8), we introduce the following auxiliary stochastic process: \[\bar{M}_{n}:=N-\overline{\mathit{Ninja}}_{n}+\sum_{\ell=1}^{k}\mathbf{1}_{\{ \bar{X}_{n}^{(\ell)}<\overline{\mathit{Ninja}}_{n}\}}.\] First notice that, \(\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mathit{Ninja}}\)-a.s. \[\lim_{n\to\infty}\bar{M}_{n}=N\ \mathbf{1}_{\{\overline{\mathit{Ninja}}_{ \infty}=0\}}.\] Because \(\bar{M}_{n}\leq N+k\) for each \(n\in\mathbb{N}\), the dominated convergence theorem guarantees that \[\lim_{n\to\infty}\mathbb{E}_{x_{1},\ldots,x_{k},x_{k+1}}^{ \mathit{Ninja}}[\bar{M}_{n}] =N\ \mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mathit{Ninja}}( \overline{\mathit{Ninja}}_{\infty}=0)\] \[=N\ \mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mathit{Ninja}}( \mathit{Ninja}_{\infty}=0|E). \tag{4.9}\] We conclude by showing that for all \(n\in\mathbb{N}\) \[\mathbb{E}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mathit{Ninja}}[\bar{M}_{n}]=\bar{M} _{0}=N-x_{k+1}+k\] from which the thesis follows. It is enough to show that for all \(x_{1},\ldots,x_{k+1}\) \[\mathbb{E}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mathit{Ninja}}[\bar{M}_{1}]=N-x_{k+ 1}+k\] since then, by the Markov property, we will show that the equality holds for all \(n\in\mathbb{N}\). 
For this purpose, we consider three different scenarios. The first one is when there exists exactly one \(i\) such that \(|x_{i}-x_{k+1}|=1\) or when \(x_{i}=2\) and \(x_{k+1}=0\) or when \(x_{i}=N-2\) and \(x_{k+1}=N\). In this case, if a particle with label \(j\neq i\) jumps, the process will not change its value since each term in the sum composing \(\bar{M}_{1}\) remains the same as in \(\bar{M}_{0}\). If the \(i\)-th particle is at the left of the Ninja (the \(k+1\) particle), i.e. \(x_{k+1}-x_{i}=1\), then if it jumps to the left, no terms changes in \(\bar{M}_{1}\), while if it jumps to the right interacting with the ninja then \[\overline{\mathit{Ninja}}_{1}-\overline{\mathit{Ninja}}_{0}=-1\] and \[\mathbf{1}_{\{\bar{X}_{1}^{x_{i}}<\overline{\mathit{Ninja}}_{1}\}}-\mathbf{1 }_{\{\bar{X}_{0}^{x_{i}}<\overline{\mathit{Ninja}}_{0}\}}=-1\] and thus \(\bar{M}_{0}=\bar{M}_{1}\) for any possible transition. Similarly, one can deduce the same conclusions when \(x_{i}-x_{k+1}=1\), or when \(x_{i}=2\) and \(x_{k+1}=0\) or when \(x_{i}=N-2\) and \(x_{k+1}=N\). The second one consists in the case where there exists \(i\) and \(j\) such that \(x_{k+1}-x_{i}=1\) and \(x_{j}-x_{k+1}=1\); then the Ninja cannot move and all the other particles cannot jump across the Ninja and thus \(\bar{M}_{1}=\bar{M}_{0}\). Finally the third scenario consists in the case \(\min\{|x_{i}-x_{k+1}|,\;i\in\{1,\ldots,k\}\}\geq 2\) with \(x_{k+1}\notin\{0,N\}\). In this case, no particle can jump across the _Ninja_ in one step and thus \[\sum_{i=1}^{k}\mathbf{1}_{\{\bar{X}_{1}^{(i)}<\overline{\mbox{\it Ninja}}_{1} \}}=\sum_{i=1}^{k}\mathbf{1}_{\{\bar{X}_{0}^{(i)}<\overline{\mbox{\it Ninja}}_ {0}\}}.\] However, in this case the Ninja can move both to the left and to the right. We now show that in this third case, despite the conditioning to the event \(E\), \[\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}(\overline{\mbox{ \it Ninja}}_{1}=x_{k+1}+1)=\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}( \overline{\mbox{\it Ninja}}_{1}=x_{k+1}-1)\] from which we conclude that \[\mathbb{E}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}[\bar{M}_{1}]=\bar{ M}_{0}.\] Recall that by definition of the _Ninja_-process: the non-conditioned process satisfies \[\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}(\mbox{\it Ninja} _{1}=x_{k+1}+1)=\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}( \mbox{\it Ninja}_{1}=x_{k+1}-1)\] since in this case, the _Ninja_ is performing a simple symmetric random walk. 
By Bayes theorem and the Markov property we have \[\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}( \overline{\mbox{\it Ninja}}_{1}=x_{k+1}+1)=\mathbb{P}_{x_{1},\ldots,x_{k},x_{ k+1}}^{\mbox{\it Ninja}}(\mbox{\it Ninja}_{1}=x_{k+1}+1|E)\] \[=\frac{\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja} }(E|\mbox{\it Ninja}_{1}=x_{k+1}+1)}{\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^ {\mbox{\it Ninja}}(E)}\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}} (\mbox{\it Ninja}_{1}=x_{k+1}+1)\] \[=\frac{\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}+1}^{\mbox{\it Ninja} }(E)}{\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}(E)}\mathbb{ P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}(\mbox{\it Ninja}_{1}=x_{k+1}+1).\] By Corollary 4.6, we have that \[\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}+1}^{\mbox{\it Ninja}}(E)=\mathbb{P}_{ x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}(E)\] concluding that \[\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}(\overline{\mbox{ \it Ninja}}_{1}=x_{k+1}+1)=\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}} (\mbox{\it Ninja}_{1}=x_{k+1}+1).\] The same arguments gives \[\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}(\overline{\mbox{ \it Ninja}}_{1}=x_{k+1}-1)=\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}} (\mbox{\it Ninja}_{1}=x_{k+1}-1)\] and thus \[\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}(\overline{\mbox{ \it Ninja}}_{1}=x_{k+1}+1)=P_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}( \overline{\mbox{\it Ninja}}_{1}=x_{k+1}-1).\] Finally, denoting by \((\bar{\mathcal{F}}_{n})_{n\in\mathbb{N}}\) the natural filtration generated by the conditioned process \((\bar{X}_{n}^{(1)},\ldots,\bar{X}_{n}^{(k)},\overline{\mbox{\it Ninja}}_{n} )_{n\in\mathbb{N}}\), we obtain, using the Markov property \[\mathbb{E}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}[\bar{M}_{n}]= \mathbb{E}_{x_{1},\ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}[\mathbb{E}_{x_{1}, \ldots,x_{k},x_{k+1}}^{\mbox{\it Ninja}}[\bar{M}_{n}|\bar{\mathcal{F}}_{n-1}]]\] \[=\mathbb{E}_{x_{1},\ldots,x_{k},x_{k+1}}^{\text{Ninja}}[\mathbb{E}_{X_{n-1}^{(1)},\ldots,X_{n-1}^{(k)},\text{Ninja}_{n-1}^{x_{k+1}}}^{x_{k+1}}[\bar{M}_{1}]].\] The above term is equal to \[\sum_{y_{1},\ldots,y_{k+1}}\mathbb{P}_{x_{1},\ldots,x_{k},x_{k+1} }^{\text{Ninja}}(\bar{X}_{n-1}^{(1)}=y_{1},\ldots,\bar{X}_{n-1}^{(k)}=y_{k}, \overline{\text{Ninja}}_{n-1}=y_{k+1})\mathbb{E}_{y_{1},\ldots,y_{k},y_{k+1} }^{\text{Ninja}}[\bar{M}_{1}]\] \[=\bar{M}_{0}\sum_{y_{1},\ldots,y_{k+1}}\mathbb{P}_{x_{1},\ldots,x _{k},x_{k+1}}^{\text{Ninja}}(\bar{X}_{n-1}^{(1)}=y_{1},\ldots,\bar{X}_{n-1}^{( k)}=y_{k},\overline{\text{Ninja}}_{n-1}=y_{k+1})=\bar{M}_{0}\] and the proof is concluded. **Acknowledgments**.: S.F. acknowledges financial support from the Engineering and Physical Sciences Research Council of the United Kingdom through the EPSRC Early Career Fellowship EP/V027824/1. S.F. and A.G.C. thank the Hausdorff Institute for Mathematics (Bonn) for its hospitality during the Junior Trimester Program _Stochastic modelling in life sciences_ funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2047/1 - 390685813. If the authors' staying at HIM has been so pleasant, productive and full of nice workshops and seminars is in great extent thanks to Silke Steinert-Berndt and Tomasz Dobrzeniecki: the authors are extremely grateful to them. The authors thank Simona Villa for making the pictures and P. Goncalves, M. Jara, F. 
Sau and G. Schutz for useful discussions and comments. S.F. thanks P. Goncalves for her kind hospitality at the Instituto Superior Tecnico in Lisbon during June 2023. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovative programme (grant agreement n. 715734).
2310.17325
C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder
Representation learning assumes that real-world data is generated by a few semantically meaningful generative factors (i.e., sources of variation) and aims to discover them in the latent space. These factors are expected to be causally disentangled, meaning that distinct factors are encoded into separate latent variables, and changes in one factor will not affect the values of the others. Compared to statistical independence, causal disentanglement allows more controllable data generation, improved robustness, and better generalization. However, most existing work assumes unconfoundedness in the discovery process, that there are no common causes to the generative factors and thus obtain only statistical independence. In this paper, we recognize the importance of modeling confounders in discovering causal generative factors. Unfortunately, such factors are not identifiable without proper inductive bias. We fill the gap by introducing a framework entitled Confounded-Disentanglement (C-Disentanglement), the first framework that explicitly introduces the inductive bias of confounder via labels from domain expertise. In addition, we accordingly propose an approach to sufficiently identify the causally disentangled factors under any inductive bias of the confounder. We conduct extensive experiments on both synthetic and real-world datasets. Our method demonstrates competitive results compared to various SOTA baselines in obtaining causally disentangled features and downstream tasks under domain shifts.
Xiaoyu Liu, Jiaxin Yuan, Bang An, Yuancheng Xu, Yifan Yang, Furong Huang
2023-10-26T11:44:42Z
http://arxiv.org/abs/2310.17325v1
# C-Disentanglement: Discovering ###### Abstract Representation learning assumes that real-world data is generated by a few semantically meaningful generative factors (i.e., sources of variation) and aims to discover them in the latent space. These factors are expected to be causally disentangled, meaning that distinct factors are encoded into separate latent variables, and changes in one factor will not affect the values of the others. Compared to statistical independence, causal disentanglement allows more controllable data generation, improved robustness, and better generalization. However, most existing works assume unconfoundedness (i.e., there are no common causes to the generative factors) in the discovery process, and thus obtain only statistical independence. In this paper, we recognize the importance of modeling confounders in discovering causal generative factors. Unfortunately, such factors are not identifiable without proper inductive bias. We fill the gap by introducing a framework named **C**onfounded-**D**isentanglement** (C-Disentanglement), the first framework that explicitly introduces the inductive bias of confounder via labels/knowledge from domain expertise. In addition, we accordingly propose an approach to sufficiently identify the causally-disentangled factors under any inductive bias of the confounder. We conduct extensive experiments on both synthetic and real-world datasets. Our method demonstrates competitive results compared to various SOTA baselines in obtaining causally disentangled features and downstream tasks under domain shifts. code available here ## 1 Introduction Causally disentangled representation learning methods endeavour to identify and manipulate the underlying explanatory causes of variation (i.e., generative factors) within observational data through obtaining _causally disentangled representations_[24, 8]. Such a representation encodes these factors separately in different random variables in the latent space, where changes in one variable do not causally affect the others. Pursuing causal independence1 makes it possible to identify the ground truth generative factors that are not statistically independent, such as the color, shape and size in a fruit dataset as shown in Figure 1, which is more realistic and allows more controlled data generation, improved robustness, and better generalization in out-of-distribution problems. Figure 1: Causally independent generative factors (color, shape, size) may appear statistically correlated in observational data. Despite great success, existing methods suffer from the identifiability issue in discovering semantically meaningful generative factors. The main reason is that most of them equate disentangling in the latent space with enforcing statistical independence [9; 11; 6] in the latent space, or requiring no mutual information[8]. In other words, they explicitly or implicitly assume that the observational dataset is _unconfounded_ in the generative process, which is not realistic in practice. A confounder \(\mathbf{C}\) is a common cause of multiple variables. It could create a correlation among causally disentangled generative factors in the observational distribution. In the previous example of a fruit dataset, the type of fruit is a confounder, creating a correlation between size and shape (apples are usually red and round objects), thus size and shape can never be captured by a learning objective that requires statistically independent latent representations. 
Assuming unconfoundedness in such a dataset leads to discrepancies in characterizing the statistical relationships of the generative factors, and makes them _non-identifiable_. Considering the limitation of the unconfoundedness assumption, a few recent studies take confounders into account [24, 23, 21] in the problem formulation. Specifically, [23, 21] proposed evaluation metrics for the confounded causal generative process. [24] introduced an unsupervised learning scheme that regulates the learning process with the Independence-Of-Support Score (IOSS) under an unobserved confounder, following the theory that if a latent representation is causally disentangled, then it must have independent support. However, it has been shown to be _almost impossible_ to obtain disentangled representations through purely unsupervised learning without proper inductive biases [10, 16]. While a few semi-supervised or weakly supervised methods [16, 17] have been developed to seek statistical independence, none of the above-mentioned methods considers how to provide an inductive bias for the confounder when learning causally disentangled representations.

In this paper, we recognize the importance of providing an inductive bias for the confounder so that the ground truth generative factors can be identified, and we fill the gap by introducing the inductive bias via knowledge from domain expertise. Specifically, we propose a framework called **C**onfounded-**D**isentanglement (_C-Disentanglement_). C-Disentanglement is, to the best of our knowledge, the first framework that discusses the identifiability issue of generative factors with respect to the inductive bias of the confounder, and it thus opens up the possibility of discovering the ground truth causally disentangled generative factors that are correlated in the observational dataset. Importantly, C-Disentanglement is a general framework that includes existing methods (in which unconfoundedness is assumed) as special cases, allowing customized inductive biases of confounders according to diverse user needs. Under this framework, we develop an algorithm to discover the causally disentangled generative factors in the latent space with inductive bias \(\mathbf{C}\), where \(\mathbf{C}\) is a label set. Instead of enforcing global statistical independence among variables on the observational dataset, we partition the dataset into subsets according to realizations of \(\mathbf{C}\), and regulate these variables within each subset. Our proposed discovery methodology is general and can be applied to various disentanglement mechanisms. As a prototype and proof of concept, we implement our method, cdVAE, in the context of a VAE [12]. Specifically, we formulate a mixture-of-Gaussians model in which each Gaussian component, conditioned on a specific value of \(\mathbf{C}\), represents the distribution of a latent variable that estimates the underlying generative factor. Intuitively, in the previous example of the fruit dataset, if we fix the type of fruit (e.g., apples) and check the correlation between color and shape, we find that changing the color within apples does not affect the shape distribution; this structure can therefore be captured by regulating statistical independence conditioned on \(\mathbf{C}\) (a fixed fruit type). Our methodology is a novel learning paradigm that can discover latent factors through weak-supervision labels (i.e., \(\mathbf{C}\)).
We experiment with several tasks, from image reconstruction and generation to classification under the out-of-distribution scenario, on both real-world and synthetic datasets. We demonstrate that our method outperforms various baselines in discovering generative factors in the latent space.

**Summary of contributions.** **(1)** We recognize the identifiability issue of discovering generative factors in the latent space. We accordingly introduce a framework, named Confounded Disentanglement (C-Disentanglement). It is the first framework that discusses how an inductive bias for the confounder can be explicitly provided via labels/knowledge from domain expertise. **(2)** We propose an algorithm, cdVAE, that discovers causally disentangled generative factors in the latent space. The algorithm sheds light on the easy injection of inductive bias into existing methods. **(3)** We conduct extensive experiments and ablation studies across various datasets and tasks. Empirical results verify that cdVAE outperforms existing methods in terms of inferring causally disentangled latent representations, and also show cdVAE's superiority in downstream tasks under OOD generalization.

## 2 Preliminaries

In this section, we introduce basic concepts in causal inference and then show how to evaluate the causal relationships among variables in the latent space.

**Causal graph through DAG.** The causal relationships among variables can be reflected by a Directed Acyclic Graph (DAG). Each (potentially high-dimensional) variable is denoted as a node. The directed edges indicate the causal relationships and always point from parents to children.

**Intervention and do-operator.** _Intervention_ is one of the fundamental concepts in causal inference. When we intervene on a variable, we set its value and cut all incoming causal arrows, since its value is thereby determined only by the intervention [20]. The intervention is mathematically represented by the _do-operator_ \(do(\cdot)\). Let \(Z_{1}\) and \(Z_{2}\) be two variables; \(P(Z_{2}|do(Z_{1}=z_{1}))\) characterizes an interventional distribution and reflects how a change of \(Z_{1}\) affects the distribution of \(Z_{2}\). The do-operation and the interventional distribution should be estimated on an interventional dataset. In practice, however, the true distribution of the data is unavailable and only an observational subset is at hand. As a result, we estimate the interventional distribution from the observational set following the _do-calculus_ introduced by [19]. A detailed introduction can be found in Appendix A.

**Confounders bring in spurious correlation.** Although the detailed definitions vary across the literature, a confounder usually refers to a common cause (i.e., a common parent in the causal graph) of multiple variables, and it brings in a certain level of correlation among these variables. Consequently, to estimate the causal effect of one variable on another from the observational dataset, we have to eliminate the correlation introduced by the confounders. For example, when analyzing the causal relationship between the number of heat strokes and the rate of ice cream consumption, we may find a correlation. However, the temperature is a confounder in this situation. If we eliminate this spurious correlation by conditioning on the temperature, we may find that, at a fixed temperature, heat strokes are not correlated with the rate of ice cream consumption.
**Evaluation of interventional distribution.** We specifically consider the case where there are variables \(Z_{1}\) and \(Z_{2}\), and we analyze how the existence of parental nodes affects the estimation of \(P(Z_{2}|do(Z_{1}=z_{1}))\) on the observational distribution.

**Proposition 2.1**.: _Let \(Z_{1}\) and \(Z_{2}\) be two random variables, and let \(\mathbf{C}^{*}\) be the ground truth confounder set. If \(\mathbf{C}\) is a superset of or is equivalent to \(\mathbf{C}^{*}\), i.e., \(\mathbf{C}^{*}\subseteq\mathbf{C}\), with \(c\) being a realization of \(\mathbf{C}\), we have_ \[P(Z_{2}|do(Z_{1}))=\sum_{c\in\mathbf{C}}P(Z_{2}|Z_{1},\mathbf{C}=c)P(\mathbf{C}=c) \tag{1}\] _if no \(C\in\mathbf{C}\) is a descendant of \(\mathbf{Z}\)._

The proof can be found in Appendix B.1. Intuitively speaking, Proposition 2.1 states that, to accurately estimate the causal relationship between variables, we have to eliminate the spurious correlation by conditioning on confounders. In addition, conditioning on additional variables will not affect the estimation if they are not descendants of \(\mathbf{Z}\).
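To make Proposition 2.1 concrete, the following minimal sketch (a toy construction of our own, not code from the paper) contrasts the naive conditional estimate with the adjusted estimate of Equation (1) on a discrete observational sample:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100_000

# Toy confounded process: a binary confounder C drives both Z1 and Z2,
# while Z1 has no causal effect on Z2 (Z2 is independent of Z1 given C).
C = rng.binomial(1, 0.5, n)
Z1 = rng.binomial(1, np.where(C == 1, 0.8, 0.2))  # P(Z1 = 1 | C)
Z2 = rng.binomial(1, np.where(C == 1, 0.7, 0.1))  # P(Z2 = 1 | C)
df = pd.DataFrame({"C": C, "Z1": Z1, "Z2": Z2})

# Naive conditional estimate P(Z2 = 1 | Z1 = 1): confounded, ~0.58 here.
naive = df.loc[df.Z1 == 1, "Z2"].mean()

# Adjustment formula of Eq. (1): sum_c P(Z2 = 1 | Z1 = 1, C = c) P(C = c),
# ~0.40 here, recovering P(Z2 = 1 | do(Z1 = 1)) = P(Z2 = 1).
adjusted = sum(
    df.loc[(df.Z1 == 1) & (df.C == c), "Z2"].mean() * (df.C == c).mean()
    for c in (0, 1)
)
print(f"naive:    {naive:.3f}")
print(f"adjusted: {adjusted:.3f}")
```

The gap between the two estimates is exactly the spurious correlation introduced by the confounder; conditioning on \(\mathbf{C}\) removes it.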
**Causal disentanglement or statistical independence.** If we had access to a series of ground-truth features that generated a fruit dataset (e.g., color, shape, texture), we could encode each of the features into a variable and concatenate them for a disentangled representation. However, these features are usually correlated in the observational dataset, depending on the type of fruit. If the problem is not considered from the causal perspective, traditional methods aiming for statistically independent representations, such as \(\beta\)-VAE [9], may fail to recover features that are correlated in the data yet disentangled in the generative process. In fact, because of the existence of confounders, causal disentanglement does not entail statistical independence, nor vice versa. We therefore formulate the task of inferring causally disentangled features from observational data under a certain level of confoundedness.

## 3 Confounded causal disentanglement via inductive bias

### Problem formulation

We formally frame the task of discovering a set of generative factors from the observational dataset from a causal perspective, as shown in Figure 2. Let \(\mathbf{X}\) denote the observational data. The confounded causal generative process [23, 21] assumes that \(\mathbf{X}\) is generated from \(K\) ground-truth causes of variation \(\mathbf{G}=[G_{1},G_{2},...,G_{K}]\) (i.e., \(\mathbf{G}\to\mathbf{X}\)) that do not cause each other. These generative factors are generally not available, and they are confounded by some unobserved confounding variables \(\mathbf{C}^{*}\). The task of interest is to discover those generative factors in the latent space (denoted as \(\mathbf{Z}=[Z_{1},Z_{2},...,Z_{D}]\)) that best approximate \(\mathbf{G}\) from \(\mathbf{X}\).

**Causal disentanglement among latent generative factors.** Generative factors, encoded in the latent representational space, are causally disentangled, meaning that intervening on the value of one factor does not affect the distribution of the others. This is formally defined as:

**Definition 3.1** (Causal Disentanglement on Data \(\mathbf{X}\) ([19, 23, 24])).: A representation is disentangled if, for \(i\in\{1,...,D\}\), \[P(Z_{i}|\text{do}(Z_{-i}=z_{-i}),\mathbf{X})=P(Z_{i}|\mathbf{X}),\quad\forall z_{-i}. \tag{2}\] where \(-i=\{1,2,...,D\}\setminus\{i\}\) indicates the set of all indices except for \(i\).

**Challenge of unobserved \(\mathbf{C}^{*}\).** The generative factors \(\mathbf{G}\) are not identifiable without a proper inductive bias for \(\mathbf{C}^{*}\) [16]. Previous works on discovering the generative factors either obtain disentanglement by enforcing statistical independence on latent variables [9, 15, 6], or require that latent variables do not capture information about each other [8]. Such a setting is equivalent to assuming unconfoundedness of the generative factors. It ignores the possibility that correlated latent variables can also be causally disentangled in the observational distribution, and is hence too restrictive an assumption. Fortunately, even though the ground truth confounders are unobserved, domain expertise may inform a "reasonable" or "likely" inductive bias for the confounder from an accessible label set, denoted as \(\mathbf{C}\). This \(\mathbf{C}\) is used to account for all correlation among \(\mathbf{Z}\). Note that we do not assume the accessible label set \(\mathbf{C}\) equals \(\mathbf{C}^{*}\) (but hope that it is close to \(\mathbf{C}^{*}\)). The relationship between \(\mathbf{C}^{*}\) and the label set \(\mathbf{C}\) must fall into one of the following scenarios, even though their exact relationship cannot be measured.

**Case 1**: The label set contains no information about the confounders, i.e., \(\mathbf{C}=\emptyset\).

**Case 2**: The label set contains partial information about the confounders, i.e., \(\mathbf{C}\subset\mathbf{C}^{*}\).

**Case 3**: The label set contains all information about the confounders, i.e., \(\mathbf{C}=\mathbf{C}^{*}\).

One may argue that \(\mathbf{C}\) may contain information irrelevant to the ground truth confounders. We show in Appendix B that, in the confounded generative process described in this paper, irrelevant information in \(\mathbf{C}\) does not affect the evaluation of the interventional distribution and can therefore be ignored. Without loss of generality, we only take into consideration how much of the information in \(\mathbf{C}^{*}\) is captured. According to Proposition 2.1, in Case 3 we can estimate \(P(Z_{i}|do(Z_{-i}=z_{-i}),\mathbf{X})\) on the observational set with the inductive bias from \(\mathbf{C}\). In the remaining cases, however, the equation does not hold. Therefore, we introduce an operator \(do^{c}(\cdot)\) to estimate the interventional distribution on the observational set under the inductive bias \(\mathbf{C}\). The framework that applies \(do^{c}(\cdot)\) is named C-Disentanglement, short for _Confounded Disentanglement_.

Figure 2: The left figure shows the ground truth generative process while the right figure demonstrates the learning task.

### C-Disentanglement as the learning objective

We formally introduce C-Disentanglement and \(do^{c}\) as follows:

**Definition 3.2** (C-Disentanglement and \(do^{c}\)).: Let \(\mathbf{X}\) be the observational data, \(\mathbf{Z}=[Z_{1},Z_{2},...,Z_{D}]\) be a concatenation of \(D\) random variables, and \(\mathbf{C}\) be a label set selected from domain expertise to provide an inductive bias for confounders of the observational data. We define the \(do^{c}\) operation as: \[P(Z_{i}|do^{c}(Z_{-i}=z_{-i}),\mathbf{X})=\sum_{c\in\mathbf{C}}P(Z_{i}|\mathbf{X},Z_{-i}=z_{-i},\mathbf{C}=c)P(\mathbf{C}=c) \tag{3}\] \(\forall i\in 1,2,3,...,D\), where \(c\) ranges over the realizations of \(\mathbf{C}\). \(\mathbf{Z}\) obtains C-Disentanglement on \(\mathbf{X}\) given \(\mathbf{C}\) if \[P(Z_{i}|do^{c}(Z_{-i}=z_{-i}),\mathbf{X})=P(Z_{i}|\mathbf{X}).
\tag{4}\]

**\(\mathbf{C}\) as an approximation of \(\mathbf{C}^{*}\).** Whether the ground truth generative factors can be discovered depends on whether their statistical relationship on the observational distribution is correctly characterized, and this requires knowledge of \(\mathbf{C}^{*}\). However, due to the unobservable nature of the confounders, we use \(\mathbf{C}\) as an approximation of \(\mathbf{C}^{*}\), as in Equation (4). The identifiability of the ground truth generative factors then depends on the relationship between \(\mathbf{C}\) and \(\mathbf{C}^{*}\). We thus examine the three possibilities listed in Section 3.1 together with a case study, as in Figure 3.

In **Case 1**, \(\mathbf{C}\) is an empty set, as shown in Fig. 3(a). This is equivalent to assuming that the generative process is unconfounded, and Equation (3) degrades to statistical independence on the observational set: \[P(Z_{i}|do^{c}(Z_{-i}=z_{-i}),\mathbf{X})=P(Z_{i}|Z_{-i}=z_{-i},\mathbf{X})=P(Z_{i}|\mathbf{X}). \tag{5}\] This contradicts the fact that these factors are actually correlated, and such an assumption makes the ground truth generative factors unidentifiable when there exists an underlying confounder. In **Case 3**, as shown in Fig. 3(c) and Fig. 3(d), \(\mathbf{C}\supseteq\mathbf{C}^{*}\) and \(do\) is equivalent to \(do^{c}\); thus the ground truth generative factors can be identified. In **Case 2**, partial information about the confounders is given, as shown in Fig. 3(b). Heuristically speaking, this relaxes the constraint of statistical independence on the observational set and therefore enlarges the solution set, providing an increased possibility for the ground truth factors to be recovered. We empirically show in Section 5 that even partial information about the confounders improves accuracy, outperforming existing methods in various tasks, even under distribution shifts.

Figure 3: **Case study: a fruit dataset.** Suppose a fruit dataset is generated by _color, shape and size_. These factors are correlated depending on the type of fruit and the place of origin. One way to estimate the causal effect among latent variables, proposed as C-Disentanglement, is to condition on realizations of the confounders \(\mathbf{C}\), which partitions the data into subsets that are inspected individually. Panels (a)-(d) demonstrate partitions under different \(\mathbf{C}\). In Fig. 3(a), when \(\mathbf{C}=\emptyset\), color, shape and size cannot be identified, as the requirement of statistical independence contradicts the fact that they are correlated in the observational set. In Fig. 3(b), only partial information is given. It still helps, as it can identify that size and shape are causally independent although entangled with color. In Fig. 3(c) and Fig. 3(d), the ground truth generative factors can be discovered.
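As a numerical illustration of Equation (4) (again a synthetic sketch rather than the paper's code), two latent variables can be strongly correlated on the pooled observational set and yet be statistically independent within every subset \(\mathbf{C}=c\), mirroring the fruit example of Figure 3:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60_000

# C = fruit type. Within each type the two latent scores ("color", "shape")
# are independent; their per-type means shift together, which induces a
# strong correlation on the pooled observational set.
C = rng.integers(0, 3, n)
means = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]])  # per-type means
Z = means[C] + rng.normal(size=(n, 2))                   # diagonal noise per type

pooled_corr = np.corrcoef(Z[:, 0], Z[:, 1])[0, 1]
within_corr = [np.corrcoef(Z[C == c, 0], Z[C == c, 1])[0, 1] for c in range(3)]

print(f"pooled corr(Z1, Z2): {pooled_corr:.2f}")           # ~0.73
print(f"within-subset corr:  {np.round(within_corr, 2)}")  # ~0 for every C = c
```

A learning objective demanding independence on the pooled set (Case 1) would reject such a representation even though it is causally disentangled; conditioning on \(\mathbf{C}\) (Case 3) accepts it.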
## 4 cdVAE: identifying causally disentangled factors in the latent space

In this section, we provide an algorithm, cdVAE (confounded disentangled VAE), to identify the causally disentangled latent generative factors under confounder \(\mathbf{C}\) in the context of a VAE.

### Learning objective

Given \(\mathbf{X}\) as the dataset, we hope to find a deterministic function \(f\), parameterized by \(\theta\), where \(f:\mathcal{Z}\rightarrow\mathcal{X}\), such that (1) \(P(X)=\int f(\mathbf{Z};\theta)P(\mathbf{Z})dz\) is maximized and (2) each \(Z_{i}\) encodes causally disentangled generative factors, with \(\mathbf{Z}=[Z_{1},Z_{2},...,Z_{D}]\) as a concatenation of random variables in the latent space.

More concretely, \(f(z;\theta)\) is characterized by a probability distribution \(P(\mathbf{X}|\mathbf{Z};\theta)\), and \(P(\mathbf{Z})\) is a prior distribution from which \(\mathbf{Z}\) can be easily sampled. For each \(Z_{i}\in\mathbf{Z}\), we require \[P(Z_{i}|do^{c}(Z_{-i}),\mathbf{X})=P(Z_{i}|\mathbf{X}),\quad\forall c\in\mathbf{C}. \tag{6}\] Applying the variational Bayesian method [12], the learning objective is to optimize the evidence lower bound (ELBO) while satisfying the constraints on causal disentanglement: \[\max_{\theta,\phi} \mathbb{E}_{\mathbf{Z}\sim Q}[\log P(\mathbf{X}|\mathbf{Z};\theta)]-D[Q(\mathbf{Z}|\mathbf{X};\phi)||P(\mathbf{Z})] \tag{7}\] \[s.t.\quad P(Z_{i}|do^{c}(Z_{-i}),\mathbf{X})-P(Z_{i}|\mathbf{X})=0\quad\forall i\in 1,2,...,D \tag{8}\] For simplicity, we omit all model parameters \(\theta,\phi\) in writing.

### Learning strategy

We start from the estimation of the causal disentanglement constraint shown in Equation (6). Because the probability distribution is hard to calculate directly, inspired by [23], we resort to its first-order moment as an approximation. Specifically, we estimate the \(L_{1}\) distance between the expectations of \(P(Z_{i}|do^{c}(Z_{-i}),\mathbf{X})\) and \(P(Z_{i}|\mathbf{X})\) as follows: \[l_{c}=\sum_{i=1}^{D}d[\mathbb{E}(Z_{i}|do^{c}(Z_{-i}),\mathbf{X}),\quad\mathbb{E}(Z_{i}|\mathbf{X})]. \tag{9}\] We show in the following theorem that Equation (9) vanishes if the learned latent variable \(\mathbf{Z}\), conditioned on each realization of \(\mathbf{C}\), follows a Gaussian distribution with a diagonal covariance matrix. The proof can be found in Appendix B.2.

**Theorem 4.1**.: _Suppose that the latent variable \(\mathbf{Z}\) on dataset \(\mathbf{X}\) given \(\mathbf{C}=c\) follows the Gaussian distribution \(\mathcal{N}(\mu^{c}(\mathbf{X}),\Sigma^{c}(\mathbf{X}))\). Specifically,_ \[P(\mathbf{Z}|\mathbf{C}=c,\mathbf{X})=(2\pi)^{-D/2}\det(\Sigma^{c})^{-1/2}\;\exp\left(-\frac{1}{2}(\mathbf{Z}-\mu^{c})^{\mathsf{T}}(\Sigma^{c})^{-1}(\mathbf{Z}-\mu^{c})\right),\] _where \(\mathbf{Z}\in\mathbb{R}^{D}\). If \(\Sigma^{c}(\mathbf{X})\) is diagonal for all \(c\), we have_ \[l_{c}=\sum_{i=1}^{D}d(\mathbb{E}(Z_{i}|do^{c}(Z_{-i}),\mathbf{X}),\quad\mathbb{E}(Z_{i}|\mathbf{X}))=0. \tag{10}\]

From Theorem 4.1, we see that for each \(\mathbf{C}=c\), enforcing the latent variable \(\mathbf{Z}\) to be statistically independent minimizes \(l_{c}\). Taking the whole \(\mathbf{C}\) set into consideration, \(P(\mathbf{Z}|\mathbf{X})\) follows a mixture of Gaussians, where each component is inferred from observational data under a specific realization of the confounder \(\mathbf{C}\): \[P(\mathbf{Z}|\mathbf{X})=\sum_{c\in C}\pi_{c}\mathcal{N}(\mu^{c}(\mathbf{X}),\Sigma^{c}(\mathbf{X})). \tag{11}\] The mixing coefficient \(\pi_{c}=P(\mathbf{C}=c|\mathbf{X})\) gives the probability of occurrence of \(\mathbf{C}=c\). Nevertheless, such a hard assignment of coefficients varies with the observed dataset and cannot accommodate scenarios in which the label set \(\mathbf{C}\) is unavailable.

Figure 4: **Learning paradigm of cdVAE.** The input data \(\mathbf{X}\) is partitioned by realizations of the confounder \(\mathbf{C}\). We infer from each partition a Gaussian distribution and form a mixture-of-Gaussians model to characterize the distribution of \(\mathbf{Z}\) on the observational distribution. We assume soft assignment of samples and further infer \(\pi^{c}(\mathbf{X})\) that resembles \(\mathbf{C}\).
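The following is a minimal PyTorch sketch of the conditional-Gaussian latent model behind Theorem 4.1 and Equation (11) (a simplified illustration under our own architectural assumptions, not the released cdVAE implementation): the encoder emits one diagonal Gaussian per realization of \(\mathbf{C}\), a classification head produces the mixing weights, and the KL term pulls each per-class covariance toward the identity while leaving the class means, and hence the marginal correlations of \(\mathbf{Z}\), unconstrained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDEncoder(nn.Module):
    """Per-class diagonal Gaussians q(Z | X, C=c) plus mixing weights pi_c(X)."""

    def __init__(self, x_dim: int, z_dim: int, n_classes: int, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, n_classes * z_dim)       # mu^c(X) for every c
        self.log_var = nn.Linear(hidden, n_classes * z_dim)  # diagonal Sigma^c(X)
        self.pi_logits = nn.Linear(hidden, n_classes)        # soft assignment pi_c(X)
        self.n_classes, self.z_dim = n_classes, z_dim

    def forward(self, x):
        h = self.backbone(x)
        mu = self.mu(h).view(-1, self.n_classes, self.z_dim)
        log_var = self.log_var(h).view(-1, self.n_classes, self.z_dim)
        return mu, log_var, self.pi_logits(h)

def kl_to_class_prior(log_var):
    # KL[N(mu^c, Sigma^c) || N(mu^c, I)] with diagonal Sigma^c. Only the
    # variances are pulled to 1; the marginal of Z (a mixture over c) may
    # stay correlated, unlike under the global N(0, I) prior of a plain VAE.
    return 0.5 * (log_var.exp() - 1.0 - log_var).sum(-1)

# Usage: one training step's latent sampling for observed labels c.
enc = CDEncoder(x_dim=784, z_dim=10, n_classes=4)
x, c = torch.randn(32, 784), torch.randint(0, 4, (32,))
mu, log_var, logits = enc(x)
idx = torch.arange(x.shape[0])
mu_c, log_var_c = mu[idx, c], log_var[idx, c]                # component C = c
z = mu_c + (0.5 * log_var_c).exp() * torch.randn_like(mu_c)  # reparameterization
loss_kl = kl_to_class_prior(log_var_c).mean()
loss_cls = F.cross_entropy(logits, c)
# Total loss would add a reconstruction term from a decoder, omitted here.
```

Selecting the mixture component by the observed label is a hard-assignment simplification; the soft assignment discussed next would instead weight the components by \(\pi_{c}(\mathbf{X})\).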
In this paper, we parameterize \(\pi_{c}\) as a Gaussian distribution for a soft assignment of samples: \(\pi_{c}\sim\mathcal{N}(\mu^{c}(\mathbf{X}),\Sigma^{c}(\mathbf{X}))\). The parameters \(\mu^{c}(\mathbf{X})\) and \(\Sigma^{c}(\mathbf{X})\) are learned to minimize the discrepancy with \(P(\mathbf{C}=c|\mathbf{X})\). In VAEs, the prior of the latent space \(P(\mathbf{Z})\) is assumed to follow a Gaussian distribution with mean zero and identity variance: \(\mathbf{Z}\sim\mathcal{N}(\mathbf{0},I)\). To avoid enforcing statistical independence on the overall latent space we learn, we only assume the prior of \(\mathbf{Z}\) to be a Gaussian distribution with unit variance within each subset of \(\mathbf{C}\). Concretely, suppose that the latent variable \(\mathbf{Z}\) for \(\mathbf{X}\) under \(\mathbf{C}=c\) follows a Gaussian distribution \(\mathcal{N}(\mu^{c}(\mathbf{X}),\Sigma^{c}(\mathbf{X}))\); the KL divergence in (7) then regularizes this distribution toward \(\mathcal{N}(\mu^{c}(\mathbf{X}),I)\). By the Lagrangian multiplier method, the new loss function is \[\mathcal{L}=\underbrace{-\mathbb{E}[\log P(\mathbf{X}|\mathbf{Z})]}_{\mathcal{L}_{rec}}\underbrace{-\mathbb{E}[\log P(\mathbf{C}|\pi_{c}(\mathbf{X}))]}_{\mathcal{L}_{cls}}+D_{KL}[P(\mathbf{Z}|\mathbf{X},\mathbf{C})||P(\mathbf{Z}|\mathbf{C})]. \tag{12}\] The algorithm pseudocode can be found in Appendix C.

## 5 Experiments

In this section, we experimentally compare cdVAE with various baselines on synthetic and real-world datasets, and study the properties of cdVAE through ablation studies. We demonstrate that cdVAE allows causally disentangled but statistically correlated features to be discovered in the latent space, which better approximates the ground truth generative factors, and that it outperforms SOTA baselines under distribution shifts. Concretely, we aim to answer the following questions regarding the proposed model:

* **Q1**: How does it perform compared to the existing methods in the latent space?
* **Q2**: How does it perform in downstream tasks such as classification under distribution shifts?

### Basic setup

**Datasets.** We evaluate cdVAE on three datasets: the synthetic datasets 3dshape [3] and Candle [21], and the real-world dataset CelebA [15]. 3dshape is a dataset of 3D shapes generated from 6 ground-truth independent latent factors: floor color, wall color, object color, scale, shape, and orientation. There are 480,000 images from all possible combinations of these factors, and each combination is present exactly once. Candle is a dataset generated using Blender, a free and open-source 3D CG suite that allows for manipulating the background and adding foreground elements that inherit the natural light of the background. It has floor hue, wall hue, object hue, scale, shape, and orientation as latent factors. We use 3dshape and Candle as synthetic datasets. To mimic the real-world scenario where perfectly disentangled ground-truth generative factors do not exist or are hard to identify, we manually create a certain level of correlation in these datasets by first making rules among certain attributes and then sampling accordingly.
**Baselines and tasks.** We compare the performance of cdVAE on the tasks of image generation and classification under distribution shift with the following architectures: (1) VAE-based methods (vanilla VAE [12], \(\beta\)-VAE [9], FactorVAE [11]) that equate disentanglement with statistical independence; (2) an existing causal regulation method, CAUSAL-REP [24]; (3) cVAE [12], as it also uses the label information; and (4) GMVAE [7], as it also adopts a mixture-of-Gaussians model in a variational autoencoder framework. Note that in this paper, we do not compare cdVAE with CausalVAE [27], despite the latter also obtaining "disentanglement" in the latent space. The main reason is that the problem setting and the learning objectives are fundamentally different. CausalVAE aims to disentangle known ground truth generative factors in the latent space by binding these factors one by one to latent variables, and the causal dependency is not considered. The task of interest in this paper is to discover causally disentangled factors in the latent space when the ground truth is unavailable; thus the two methods are not comparable.

**Evaluation metrics.** The experimental results are evaluated on (1) end-to-end evaluation metrics: the accuracy of classification under distribution shifts and the reconstruction loss in the image generation task; (2) how well the learned latent generative factors recover the ground truth ones: Maximal Information Coefficient (MIC) and Total Information Coefficient (TIC) [13]; and (3) existing disentanglement scores, from a causal perspective: IRS [23], UC/CG [21], IOSS [24], and from a statistical perspective: D-score [8]. The higher these evaluation metrics, the better the model, except for IOSS (the lower, the better). A detailed introduction of these metrics can be found in Appendix C. To be more concrete, IRS evaluates the general interventional robustness of learned representations, while UC and CG are extended from IRS. UC (Unconfoundedness) evaluates how well distinct generative factors are captured by distinct sets of latent dimensions with no overlap. CG (Counterfactual Generativeness) measures how robust a certain latent variable is against the others. IOSS measures whether learned random variables have independent supports, a necessary condition inferred from the definition of causal disentanglement. The D-score measures disentanglement in the non-causal sense.
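To give a flavour of the support-based criterion, here is a crude numerical proxy for the idea behind IOSS (our own illustration; the actual metric is defined in [24]): if a representation has independent support, samples should cover the full product of the marginal supports.

```python
import numpy as np

rng = np.random.default_rng(3)

def support_coverage(z, bins=20):
    # Fraction of occupied cells in a grid spanning the marginal supports.
    # Near 1: joint support ~ product of marginal supports (independent
    # support); far below 1: the support itself is entangled.
    h, _, _ = np.histogram2d(z[:, 0], z[:, 1], bins=bins)
    return (h > 0).mean()

square = rng.uniform(-1, 1, size=(50_000, 2))  # independent support
t = rng.uniform(-1, 1, size=(50_000, 1))
strip = np.hstack([t, t]) + 0.05 * rng.normal(size=(50_000, 2))  # diagonal strip

print(f"square support coverage: {support_coverage(square):.2f}")  # ~1.0
print(f"strip support coverage:  {support_coverage(strip):.2f}")   # far below 1
```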
### Experimental results

**(Q1) cdVAE significantly outperforms various baselines in end-to-end measurement.** We compare our cdVAE with baseline models in the image generation task on the CelebA and Candle datasets. In the CelebA dataset, we set \(\mathbf{C}\) to be whether there are eyeglasses in the image, i.e., \(|\mathbf{C}|=2\). In the Candle dataset, we have \(|\mathbf{C}|=4\), with two object hues and two wall hues combined. More details can be found in Appendix C. As shown in Table 1, cdVAE is evaluated by several groups of metrics. The reconstruction error measures how well the images are reconstructed in an end-to-end fashion, while the others measure how disentangled the latent factors are: the D-score is from the non-causal perspective while UC, CG and IOSS are from the causal perspective. For the CelebA dataset, we do not measure the UC and CG scores, as they require ground truth generative factors or access to the true generative process. Our method outperforms all baselines on these metrics except for the D-score. Note the D-score is only an effective measurement when there are no confounders in the observational dataset, as it requires each latent variable not to contain information about the others. Under confounded datasets such as CelebA and Candle, the ground truth generative factors are correlated, resulting, as expected, in a low D-score for our model.

**(Q2.1) cdVAE is more robust under distribution shifts.** We conduct the task of shape classification on the 3dshape dataset with distribution shift and use the classification accuracy as a metric for out-of-distribution generalization [28]. Specifically, in the source distribution, we sample a certain percentage of images in which the object hue is correlated with the object shape (e.g., red objects are cubes). The rest of the images are evenly generated by disentangled factors, while in the target domain, all images are generated by disentangled factors. The proportion of highly correlated data is denoted by _shift severity_. For example, shift severity = 0.4 means that \(40\%\) of the training images are sampled under the preset correlation between object hue and object shape. We train cdVAE, \(\beta\)-VAE, and CAUSAL-REP using images from the source set, with decoders replaced by classifiers. The trained classifier is then tested on the target set. We report the classification accuracy on the target set and the performance drop in Figure 5.
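To make the notion of shift severity concrete, a toy version of this source/target sampling could look as follows (our own construction; the actual attribute rules used for 3dshape may differ):

```python
import numpy as np

def sample_split(n, shift_severity, rng):
    """Draw (object hue, object shape) pairs; a `shift_severity` fraction of
    samples follows a preset hue -> shape rule, the rest are disentangled."""
    hue = rng.integers(0, 10, n)
    shape = rng.integers(0, 4, n)
    n_corr = int(shift_severity * n)
    shape[:n_corr] = hue[:n_corr] % 4  # correlated part: shape set by hue
    return hue, shape

rng = np.random.default_rng(4)
src_hue, src_shape = sample_split(10_000, 0.4, rng)  # source, shift severity 0.4
tgt_hue, tgt_shape = sample_split(10_000, 0.0, rng)  # target, fully disentangled
```

A classifier that latches onto the hue-shape shortcut in the correlated part of the source set will lose accuracy on the target set, which is what Figure 5 measures.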
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**CelebA**} & \multicolumn{5}{c}{**Candle**} \\ \cline{2-9} **Methods** & **Recon \(\downarrow\)** & **D \(\uparrow\) (non-causal)** & **IOSS \(\downarrow\)** & **Recon \(\downarrow\)** & **D \(\uparrow\) (non-causal)** & **UC \(\uparrow\)** & **CG \(\uparrow\)** & **IOSS \(\downarrow\)** \\ \hline VAE & 0.33 & 0.11 & 0.78 & .024 & 0.14 & 0.10 & 0.18 & 0.69 \\ \(\beta\)-VAE & 0.27 & 0.15 & 0.74 & .017 & **0.18** & 0.11 & 0.24 & 0.54 \\ FactorVAE & 0.25 & **0.17** & 0.68 & .014 & 0.15 & 0.13 & 0.26 & 0.51 \\ \hline CAUSAL-REP & 0.29 & 0.16 & 0.34 & .012 & **0.18** & 0.20 & 0.32 & 0.31 \\ \hline cVAE & 0.32 & 0.14 & 0.64 & .020 & 0.14 & 0.12 & 0.21 & 0.62 \\ GMVAE & 0.30 & 0.13 & 0.71 & .018 & 0.12 & 0.09 & 0.16 & 0.71 \\ \hline cdVAE & **0.18** & 0.12 & **0.21** & **.008** & 0.11 & **0.35** & **0.54** & **0.16** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison with baselines on the image generation task on CelebA and Candle.** The reconstruction error indicates the end-to-end performance of the image generation task. The D-score measures from a non-causal perspective and requires that the generation process be unconfounded; we expect that a good method recovering causally disentangled factors obtains poor (i.e., low) D-scores. IOSS, UC and CG are causal metrics that measure the level of disentanglement of a representation.

We observe that cdVAE is more robust than the other baselines under distribution shift, as it has the lowest performance drop and the highest target-set accuracy.

**An intuitive explanation of how C-Disentanglement improves OOD generalization.** Suppose we have a fruit dataset and the goal is to classify fruits. In the training set, a large number of apples are red and round. If we assume unconfoundedness and require statistical independence, a latent variable may end up encoding whether an object is round-and-red. As a result, a green apple in the test set may fail to be identified as an apple. Under the framework of C-Disentanglement, however, color and shape are disentangled in the latent space and contribute to the classification separately, giving the green apple a higher probability of also being identified as an apple.

**(Q2.2) cdVAE better discovers the ground truth generative factors.** In the task of shape classification, we further examine how well the learned representations approximate the ground truth generative labels and to what extent they are causally disentangled. Table 2 shows that cdVAE outperforms all baselines in approximating the generative factors and in disentanglement.

### Ablation studies

We further conduct ablation studies to show that providing an inductive bias indeed improves the discovery of generative factors in the latent space. Concretely, we compare cdVAE with conditional VAE (cVAE) [12] and GMVAE [7]. Compared with the vanilla VAE, the proposed method uses label information to provide an inductive bias for confounders when partitioning the observational dataset. cVAE has the label information compared with the vanilla VAE, and GMVAE is a VAE modelled by a mixture of Gaussians. As shown in Table 1, cdVAE has universally better results, showing the necessity of introducing an inductive bias for the factors discovered in the latent space. We also investigate how the choice of \(\mathbf{C}\) affects the model performance and how to adapt C-Disentanglement to existing methods aiming for statistical independence, as shown in Appendix C.

\begin{table} \begin{tabular}{l c c c} \hline \hline **Methods** & **MIC \(\uparrow\)** & **TIC \(\uparrow\)** & **IRS \(\uparrow\)** \\ \hline VAE & 21.9 & 12.1 & 0.82 \\ \(\beta\)-VAE & 22.1 & 12.4 & 0.85 \\ FactorVAE & 24.3 & 15.6 & 0.89 \\ \hline CAUSAL-REP & 26.8 & 16.1 & 0.88 \\ \hline cVAE & 22.4 & 12.4 & 0.84 \\ GMVAE & 23.2 & 12.8 & 0.81 \\ \hline cdVAE & **31.9** & **20.2** & **0.89** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of how well the ground truth generative factors are recovered (MIC/TIC) and how disentangled they are (IRS), for classification on the 3dshape dataset with shift severity = 0.5.

Figure 5: Comparison of cdVAE with \(\beta\)-VAE and CAUSAL-REP on classification under distribution shift. T represents accuracy on the target data; S-T represents the performance drop when the classifier trained on the source data is directly tested on the target data.

## 6 Related work

**Disentangled Representations.** The pursuit of disentangled representations dates back to the surge of representation learning and has always been closely associated with the generative process in modern machine learning, following the intuition that each dimension should encode a different feature. [6] attempts to control the underlying factors by maximizing the mutual information between the images and the latent representations.
[8] proposes a quantitative metric based on information theory, evaluating the disentanglement, completeness, and informativeness by fitting linear models and measuring the deviation from the ideal mapping. [9, 11, 5, 18] encourage statistical independence by penalizing the Kullback-Leibler (KL) divergence term in the VAE objective. However, the non-causal definitions of disentanglement fail to consider the cases where features correlated in the observational dataset can be disentangled in the generative process. Such a challenge is better approached by a line of research taking the causal perspective.

**Causal Generative Process.** Causal methods are widely used for eliminating spurious features in various domains and improving the interpretability of model behaviours [25, 26, 14]. It was not until [23] that they were introduced for a strict characterization of the generative process. [23] first provided a rigorous definition of a causal generative process, defining a disentangled causal representation by the non-existence of causal relationships between two variables, i.e., the intervention on one variable does not alter the distribution of the others. The authors further introduce _interventional robustness_ as an evaluation metric and show its advantage on multiple benchmarks. [21] follows the path of [23] and further proposes two evaluation metrics and the Candle dataset. The confoundedness assumption allows for correlation in the latent space without tampering with the disentanglement of the data-generating process. Despite these effective evaluation tools, a missing piece remains: how to infer a set of causally disentangled features. Using the proposed evaluation metrics as regularizers, a model implicitly assumes unconfoundedness and falls back to seeking statistical independence in the latent space. The problem of the unrealistic unconfoundedness assumption was identified by [24]. They assume that confounders exist but are unobservable. They further propose an evaluation metric that accounts for the existence of confounders: causally disentangled latent variables must have independent support, measured by the IOSS score. Similar to the evaluation metrics introduced in [23, 21], IOSS is also a necessary condition for causal disentanglement. More importantly, as in previous work focusing on obtaining statistical independence, such a regularization suffers from the identifiability issue.

**Weak Supervision for Inductive Bias.** The identifiability issue in unsupervised disentangled representation learning was first identified in [16]. Specifically, they show theoretically that such a learning task is impossible without inductive biases on both the models and the data. Naturally, a series of weakly supervised or semi-supervised methods [4, 1, 2] were proposed, with a learning objective of statistical independence or alignment. In this paper, we take the confoundedness assumption a step further, assuming that the confounders become observable given a proper inductive bias, so that the latent representation can be better identified. We similarly adopt partial labels of the dataset as the supervision signal. We treat the labels as a source of possible confounders and allow correlated but causally disentangled latent generative factors to be learned.

## 7 Conclusion and discussions

In this paper, we recognize the importance of bringing confounders, with a proper inductive bias, into the discovery of causally disentangled latent factors.
Specifically, we propose a framework, C-Disentanglement, that introduces the inductive bias from domain knowledge, and a method, cdVAE, to identify the generative factors. Although cdVAE helps with discovering generative factors under confounders, the learning process relies on domain knowledge and the choice of \(\mathbf{C}\), and \(\mathbf{C}\) has to be discrete/categorical with a finite number of realizations. How to generalize to continuous \(\mathbf{C}\) or select an effective \(\mathbf{C}\) from the available knowledge set is beyond the scope of this paper and will be investigated in future studies. We hope this work brings attention to the importance of the inductive bias of confounders and inspires future work in this direction. In addition, better capture of the generative factors could enable a more controlled generative process and a positive social impact.
2302.00448
A formalisation of Gallagher's ergodic theorem
Gallagher's ergodic theorem is a result in metric number theory. It states that the approximation of real numbers by rational numbers obeys a striking 'all or nothing' behaviour. We discuss a formalisation of this result in the Lean theorem prover. As well as being notable in its own right, the result is a key preliminary, required for Koukoulopoulos and Maynard's stunning recent proof of the Duffin-Schaeffer conjecture.
Oliver Nash
2023-02-01T13:53:22Z
http://arxiv.org/abs/2302.00448v1
# A formalisation of Gallagher's ergodic theorem

###### Abstract

Gallagher's ergodic theorem is a result in metric number theory. It states that the approximation of real numbers by rational numbers obeys a striking 'all or nothing' behaviour. We discuss a formalisation of this result in the Lean theorem prover. As well as being notable in its own right, the result is a key preliminary, required for Koukoulopoulos and Maynard's stunning recent proof of the Duffin-Schaeffer conjecture.
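For reference, the theorem admits the following standard statement (a paraphrase in our own notation, not the paper's formal Lean development): for any function \(\psi\colon\mathbb{N}\to\mathbb{R}_{\geq 0}\), consider the set of \(\psi\)-well-approximable points

\[W(\psi)=\left\{x\in[0,1]\;:\;\Big|x-\frac{a}{n}\Big|<\psi(n)\text{ for infinitely many reduced fractions }\frac{a}{n}\right\}.\]

Gallagher's theorem asserts that \(\lambda(W(\psi))\in\{0,1\}\), where \(\lambda\) denotes Lebesgue measure: no choice of \(\psi\) yields a well-approximable set of intermediate measure. It is this dichotomy that Koukoulopoulos and Maynard combine with a positive-measure argument to resolve the Duffin-Schaeffer conjecture.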
## 2 Basic concepts

We outline some basic concepts to fix notation and to assist non-experts.

### Almost equal sets

Given a measurable space with measure \(\mu\), when there is no possibility of ambiguity about the measure, we shall use the notation:
\[s=_{a.e.}t, \tag{1}\]
to say two subsets \(s\), \(t\) are almost equal with respect to \(\mu\). We recall that this is equivalent to the following pair of measure-zero conditions:
\[\mu(s\setminus t)=0\quad\mbox{and}\quad\mu(t\setminus s)=0. \tag{2}\]

### Obeying a condition infinitely often

Gallagher's theorem concerns a set of points obeying a condition 'infinitely often'. In general, given a sequence of subsets \(s_{0},s_{1},\ldots\) of some background type \(X\) the notation \(\exists\ \cdots\ i.o.\) is defined as1:

Footnote 1: This is standard notation appearing throughout the informal literature.

\[\{x:X\ |\ \exists\ n\in\mathbb{N},x\in s_{n}\ i.o.\}=\{x:X\ |\ \mbox{the set }\{n\in\mathbb{N}\ |\ x\in s_{n}\}\mbox{ is infinite}\}. \tag{3}\]
In fact there is another expression for this set; it is easy to see that:
\[\{x:X\ |\ \exists\ n\in\mathbb{N},x\in s_{n}\ i.o.\}=\limsup_{n}s_{n}=\bigcap_{n\in\mathbb{N}}\,\bigcup_{m\geq n}s_{m}. \tag{4}\]
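As a concrete illustration of (3) and (4), take \(X=\mathbb{R}\) and \(s_{n}=[0,1/n]\). A point \(x>0\) lies in only finitely many of the \(s_{n}\), so:
\[\limsup_{n}s_{n}=\bigcap_{n\in\mathbb{N}}\,\bigcup_{m\geq n}[0,1/m]=\bigcap_{n\in\mathbb{N}}[0,1/n]=\{0\},\]
and indeed \(0\) is the only point belonging to the \(s_{n}\) infinitely often.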
### Thickenings

Given \(\delta\in\mathbb{R}\) and a subset \(s\) of a metric space, we write \(\operatorname{Th}(\delta,s)\) for the open \(\delta\)-thickening of \(s\), i.e., the set of points lying at distance strictly less than \(\delta\) from \(s\); this is mathlib's thickening.

### The circle

Mathlib contains both the multiplicative circle, realised as the unit complex numbers, and the additive circle \(\mathbb{R}/\mathbb{Z}\). The latter is defined for a general period \(p\):

```
def add_circle {K : Type*} [linear_ordered_add_comm_group K]
  [topological_space K] [order_topology K] (p : K) :=
K ⧸ zmultiples p
```

When \(p=1\) this is exactly \(\mathbb{R}/\mathbb{Z}\) but we allow a general value of \(p\) to support other applications3.

Footnote 3: For example mathlib uses \(p=2\pi\) to define angles.

As the names suggest, circle carries an instance of mathlib's group class and add_circle carries an instance of add_group. It is interesting that mathlib's additive-multiplicative design pattern so conveniently allows both models to coexist.
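To make the quotient arithmetic concrete, here is a small worked computation in \(\mathbb{S}=\mathbb{R}/\mathbb{Z}\) (the case \(p=1\)):
\[[3/4]+[1/2]=[5/4]=[1/4],\qquad 5\cdot[2/5]=[2]=[0],\]
so \([2/5]\) has order dividing \(5\); since \(k\cdot[2/5]=[2k/5]\neq[0]\) for \(1\leq k\leq 4\), its order is exactly \(5\), in line with the characterisation (8) below.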
Substantial API for add_circle was then developed, notably an instance of the class normed_add_comm_group satisfying the identity (7) and a characterisation of the finite-order points using rational numbers:
\[\{y\in\mathbb{S}\ |\ o(y)=n\}=\{[q]\in\mathbb{S}\ |\ q\in\mathbb{Q},\operatorname{denom}(q)=n\}, \tag{8}\]
where the notation \(o(y)=n\) means that \(y\) has order \(n\). Using all of our new language, the statement of Gallagher's theorem becomes:

**Theorem 2** (Gallagher's theorem). _Let \(\delta_{1},\delta_{2},\dots\) be a sequence of real numbers and let:_
\[W=\limsup_{n>0}\operatorname{Th}(\delta_{n},\{y\in\mathbb{S}\ |\ o(y)=n\}).\]
_Then \(W=_{a.e.}\emptyset\) or \(W=_{a.e.}\mathbb{S}\)._

Using (7) and (8), theorem 2 is trivially equivalent to theorem 1.

### Metric number theory

Metric number theory is the study of arithmetic properties of the real numbers (and related spaces) which hold 'almost everywhere' with respect to the Lebesgue measure. The arithmetic property in the case of Gallagher's theorem is approximation by rational numbers. To illustrate, consider the set \(\mathbb{I}\) of real numbers which have infinitely-many quadratically-close rational approximations:
\[\mathbb{I}=\{x\in\mathbb{R}\ |\ \exists\ q\in\mathbb{Q},|x-q|<1/\operatorname{denom}(q)^{2}\ i.o.\}=\limsup_{n>0}\operatorname{Th}(1/n^{2},\{q\in\mathbb{Q}\ |\ \operatorname{denom}(q)=n\}).\]
It has been known at least since the early \(19^{\text{th}}\) Century that \(\mathbb{I}\) is just the set of irrational numbers4:

Footnote 4: Thanks to Michael Geisser and Michael Stoll, mathlib knows this fact.

\[\mathbb{I}=\mathbb{R}\setminus\mathbb{Q}. \tag{9}\]
Considering this result from the point of view of metric number theory, we notice that since \(\mathbb{Q}\) has Lebesgue measure zero, \(\mathbb{I}\) is almost equal to \(\mathbb{R}\). Thus the metric number theorist would be content to summarise (9) by saying that:
\[\mathbb{I}=_{a.e.}\mathbb{R},\]
without worrying about exactly which numbers \(\mathbb{I}\) contains. The benefit of the metric number theorist's point of view is that a great many questions have answers of this shape. Gallagher's theorem is an especially-beautiful example of this phenomenon.

## 3 Doubling measures and Lebesgue's density theorem

Lebesgue's density theorem is a foundational result in measure theory, required for the proof of Gallagher's theorem. Although we only needed to apply it to the circle, the density theorem holds quite generally and so we took some trouble to formalise it subject to quite weak assumptions5.

Footnote 5: We were lucky that Sébastien Gouézel had recently added an extremely general theory of Vitali families which made this possible.

### Doubling measures

A convenient class of measures for which the density theorem holds is the class of doubling measures.

**Definition 3**. _Let \(X\) be a measurable metric space carrying a measure \(\mu\)._
_We say \(\mu\) is a doubling measure if there exists \(C\geq 0\) and \(\delta>0\) such that for all \(0<\epsilon\leq\delta\) and \(x\in X\):_
\[\mu(B(x,2\epsilon))\leq C\mu(B(x,\epsilon)),\]
_where \(B(x,r)\) denotes the closed ball of radius \(r\) about \(x\)._

The corresponding formal definition, which the author added to mathlib for the purposes of formalising the density theorem, is:

```
class is_doubling_measure {α : Type*} [metric_space α] [measurable_space α]
  (μ : measure α) :=
(exists_measure_closed_ball_le_mul [] :
  ∃ (C : ℝ≥0), ∀ᶠ ε in 𝓝[>] 0, ∀ x,
    μ (closed_ball x (2 * ε)) ≤ C * μ (closed_ball x ε))
```
Listing 4: Definition of doubling measures

The parameter \(\delta\) is not explicitly mentioned in the code above because we use mathlib's standard notation for the concept of a predicate holding eventually along a filter. For our application, we needed to apply the density theorem to the Haar measure [13] on the circle. Of course this turns out to be the familiar arc-length measure and so the volume of a closed ball of radius \(\epsilon\) is given by:
\[\mu(B(x,\epsilon))=\min(1,2\epsilon).\]
Taking \(C=2\) we thus see that the Haar measure on the circle is doubling. We registered this fact using a typeclass instance as follows:

```
instance : is_doubling_measure (volume : measure (add_circle T)) :=
```
Listing 5: The circle's doubling measure

The unit circle corresponds to taking \(T=1\), but the code allows any \(T>0\). Thanks to this instance, Lean knows that any result proved for doubling measures automatically holds for the Haar measure on the circle.

### The density theorem

The version of the density theorem which we formalised is:

**Theorem 4**. _Let \(X\) be a measurable metric space carrying a measure \(\mu\). Suppose that \(X\) has second-countable topology and that \(\mu\) is doubling and locally finite. Let \(S\subseteq X\) and \(K\in\mathbb{R}\); then for almost all \(x\in S\), given any sequence of points \(w_{0},w_{1},\ldots\) and distances \(\delta_{0},\delta_{1},\ldots\), if:_
* _\(\delta_{j}\to 0\) as \(j\to\infty\) and,_
* _\(x\in B(w_{j},K\delta_{j})\) for large enough \(j\),_
_then:_
\[\frac{\mu(S\cap B(w_{j},\delta_{j}))}{\mu(B(w_{j},\delta_{j}))}\to 1,\]
_as \(j\to\infty\)._

Even in the special case \(K=1\) and \(w_{0}=w_{1}=\cdots=x\), the result is quite powerful6. A point \(x\) satisfying the property appearing in the theorem statement is known as a point of density \(1\). Using this language, Lebesgue's density theorem asserts that almost all points of a set have density \(1\). In particular if \(\mu(S)>0\) then there must exist a point of density \(1\). As an example, if \(X=\mathbb{R}\) and \(S\) is the closed interval \([0,1]\), the set of points of density \(1\) is the open interval \((0,1)\).

Footnote 6: Indeed this is probably the most common version one finds in the literature. In fact the formal version which we added to mathlib is very slightly more general since it allows \(w\) and \(\delta\) to be maps from any space carrying a filter.
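A quick computation shows why the endpoints are excluded in the example above. At \(x=0\), taking \(w_{j}=0\) and \(\delta_{j}=1/j\):
\[\frac{\mu(S\cap B(0,\delta_{j}))}{\mu(B(0,\delta_{j}))}=\frac{\mu([0,1/j])}{\mu([-1/j,1/j])}=\frac{1/j}{2/j}=\frac{1}{2},\]
so \(0\) is a point of density \(1/2\) rather than \(1\), even though \(0\in S\); this is consistent with the theorem since \(\{0,1\}\) has measure zero.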
After a preparatory variables statement:

```
variables {α : Type*} [metric_space α] [measurable_space α] (μ : measure α)
  [is_doubling_measure μ] [second_countable_topology α] [borel_space α]
  [is_locally_finite_measure μ]
```
Listing 6: Variables for the density theorem

it looks like this:

```
lemma is_doubling_measure.ae_tendsto_measure_inter_div (S : set α) (K : ℝ) :
  ∀ᵐ x ∂μ.restrict S, ∀ {ι : Type*} {l : filter ι} (w : ι → α) (δ : ι → ℝ)
    (δlim : tendsto δ l (𝓝[>] 0))
    (xmem : ∀ᶠ j in l, x ∈ closed_ball (w j) (K * δ j)),
    tendsto (λ j, μ (S ∩ closed_ball (w j) (δ j)) / μ (closed_ball (w j) (δ j))) l (𝓝 1) :=
```
Listing 7: Lebesgue's density theorem for doubling measures

The method of proof is essentially to develop sufficient API for is_doubling_measure to show that such measure spaces carry certain natural families of subsets called Vitali families and then to invoke the lemma vitali_family.ae_tendsto_measure_inter_div added by Gouézel as part of an independent project [9].

## 4 Cassels's lemma

A key ingredient in the proof of Gallagher's theorem is the following result due to Cassels.

**Lemma 5**. _Let \(X\) be a measurable metric space carrying a measure \(\mu\). Suppose that \(X\) has second-countable topology and that \(\mu\) is doubling and locally finite. Let \(s_{0},s_{1},\ldots\) be a sequence of subsets of \(X\) and \(r_{0},r_{1},\ldots\) be a sequence of real numbers such that \(r_{n}\to 0\) as \(n\to\infty\). For any \(M>0\) let:_
\[W_{M}=\limsup\operatorname{Th}(Mr_{n},s_{n}),\]
_then:_
\[W_{M}=_{a.e.}W_{1},\]
_i.e., up to sets of measure zero, \(W_{M}\) does not depend on \(M\)._

This essentially appears as lemma 9 in [4] in the special case that:
* (a) \(X\) is the open interval \((0,1)\),
* (b) \(\mu\) is the Lebesgue measure,
* (c) \(s_{n}\) is a sequence of points rather than a sequence of subsets.

Reusing the variables from listing 6, the formal version of lemma 5 which we added to mathlib looks like this:

```
theorem blimsup_thickening_mul_ae_eq (p : ℕ → Prop) (s : ℕ → set α) {M : ℝ}
  (hM : 0 < M) (r : ℕ → ℝ) (hr : tendsto r at_top (𝓝 0)) :
  (blimsup (λ i, thickening (M * r i) (s i)) at_top p : set α) =ᵐ[μ]
  (blimsup (λ i, thickening (r i) (s i)) at_top p : set α) :=
```
Listing 8: Cassels's lemma

Several remarks are in order:
* The syntax s =ᵐ[μ] t is mathlib's notation for sets (or functions) \(s\), \(t\) being almost equal with respect to a measure \(\mu\). It is the formal equivalent of the popular informal notation (1).
* The type ascriptions : set α appear because of an unresolved _typeclass diamond_ in mathlib's library of lattice theory. The issue is that the type set α is definitionally equal to α → Prop. Since Prop is a complete boolean algebra it follows that α → Prop is a complete boolean algebra.
Unfortunately the definition of the complete boolean algebra structure on set α, though mathematically equal, is not definitionally equal to that on α → Prop. Strictly speaking, because set α is a type synonym, this is a permissible diamond but it would still be useful to resolve it.7

Footnote 7: The diamond is recorded in mathlib issue 16932. In fact it is only the Inf and Sup fields in the complete boolean algebra structures that differ definitionally so this should be fairly easy to resolve.

* Listing 8 is stated in terms of blimsup, i.e., a \(\limsup\) bounded by a predicate \(p\). As discussed in section 2.2, this allows us to avoid having to deal with subtypes. We will see that this is convenient when applying this lemma in the proof of Gallagher's theorem.
* The key ingredient in the proof of Cassels's lemma is Lebesgue's density theorem 4. In view of (2), Cassels's lemma requires us to establish a pair of measure-zero conditions. According to whether \(M<1\) or \(M>1\), exactly one of these two conditions is trivial for the two sets appearing in the statement of Cassels's lemma 5. To prove the non-trivial measure-zero condition, one argues by contradiction by assuming the measure is strictly positive, applying the density theorem to obtain a point of density 1, and showing that this is impossible for a doubling measure. The only non-trivial dependency is Lebesgue's density theorem.
* Although the modifications required for the generalisation of this lemma from its original form in [4] are straightforward, the generalisation (c) from points to subsets (equivalently from balls to thickenings) is extremely useful formally. In the application of this lemma required for Gallagher's theorem, \(s_{n}\) is the set of points of order \(n\) in the circle. In the informal literature, the version of lemma 5 for sequences of points can be applied because the circle has only finitely-many points of each finite order and so one can enumerate all points of finite order as a single sequence of points. This would be messy formally.

## 5 Ergodic theory

Ergodic theory is the study of measure-preserving maps. Given measure spaces \((X,\mu_{X})\) and \((Y,\mu_{Y})\), a measurable map \(f:X\to Y\) is measure-preserving if:
\[\mu_{X}(f^{-1}(s))=\mu_{Y}(s),\]
for any measurable set \(s\subseteq Y\). For example, given any \(c\in\mathbb{R}\), taking Lebesgue measure on both domain and codomain, the translation \(x\mapsto c+x\) is always measure-preserving whereas the dilation \(x\mapsto cx\) is measure-preserving only if \(c=\pm 1\). Fortunately mathlib already contained an excellent theory of measure-preserving maps.

### Ergodic maps, general theory

Within ergodic theory, special attention is paid to ergodic maps.

**Definition 6**. _Let \((X,\mu)\) be a measure space and \(f:X\to X\) be measure-preserving._
_We say \(f\) is ergodic if for any measurable set \(s\subseteq X\):_
\[f^{-1}(s)=s\implies s\text{ is almost equal to }\emptyset\text{ or }X.\]

Ergodicity is a key concept in the proof of Gallagher's theorem and so we added the following definitions to mathlib:

```
structure pre_ergodic (μ : measure α . volume_tac) : Prop :=
(ae_empty_or_univ : ∀ ⦃s⦄, measurable_set s → f ⁻¹' s = s →
  s =ᵐ[μ] (∅ : set α) ∨ s =ᵐ[μ] univ)

structure ergodic (μ : measure α . volume_tac) extends
  measure_preserving f μ, pre_ergodic f μ : Prop
```

The reason for the intermediate definition pre_ergodic is to support the definition of quasi-ergodic maps which we also defined, but which do not concern us here. We then developed some basic API for ergodic maps including the key result:

**Lemma 7**. _Let \(X\) be a measurable space with measure \(\mu\) such that \(\mu(X)<\infty\). Suppose that \(f:X\to X\) is ergodic, \(s\subseteq X\) is measurable, and the image \(f(s)\) is almost contained in \(s\); then \(s\) is almost equal to \(\emptyset\) or \(X\)._

This result is elementary but not quite trivial and appears formally as follows:

```
lemma ae_empty_or_univ_of_image_ae_le [is_finite_measure μ]
  (hf : ergodic f μ) (hs : measurable_set s) (hs' : f '' s ≤ᵐ[μ] s) :
  s =ᵐ[μ] (∅ : set X) ∨ s =ᵐ[μ] univ :=
```

This is not the first time that ergodic maps have been formalised in a theorem prover and so we have kept the above account very brief. Indeed the Archive of Formal Proofs for Isabelle/HOL contains an impressive body of results about ergodic theory due to Sébastien Gouézel with contributions from Manuel Eberl, available at the Ergodic Theory entry. This entry contains many results about general ergodic theory that have not yet been added to mathlib. On the other hand, we needed to know that certain specific maps on the circle are ergodic and our formalisations of these results do appear to be the first of their kind. We discuss these next.

### Ergodic maps on the circle

In order to prove Gallagher's theorem, we needed the following result:

**Theorem 8**. _Given \(n\in\mathbb{N}\), the map:_
\[\mathbb{S}\to\mathbb{S},\qquad y\mapsto ny\]
_is measure-preserving if \(n\geq 1\) and is ergodic if \(n\geq 2\)._

The fact that \(y\mapsto ny\) is measure-preserving follows from general uniqueness results for Haar measures. In fact the result holds for any compact, Abelian, divisible topological group. Thanks to mathlib's extensive theory of Haar measure [13], it was easy to add a proof of this. We encourage readers who are encountering this fact for the first time to examine figure 1 and appreciate why this result holds for \(\mathbb{S}\) despite failing for \(\mathbb{R}\). The proof that \(y\mapsto ny\) is ergodic is harder. We proved it as a corollary of the following lemma. We sketch a proof to give a sense of what is involved; it is not essential that the reader follow the details: the main point is that we needed to use Lebesgue's density theorem.

**Lemma 9**. _Let \(s\subseteq\mathbb{S}\) be measurable and \(u_{0},u_{1},\ldots\) be a sequence of finite-order points in \(\mathbb{S}\) such that:_
* _\(u_{i}+s\) is almost equal to \(s\) for all \(i\),_
* _the order \(o(u_{i})\to\infty\) as \(i\to\infty\)._
_Then \(s\) is almost equal to \(\emptyset\) or \(\mathbb{S}\)._

Proof. The result is fairly intuitive: \(s\) is almost equal to \(u_{i}+s\) iff it is composed of a collection of \(o(u_{i})\) components, evenly-spaced throughout the circle, up to a set of measure zero.
Since this holds for all \(i\) and \(o(u_{i})\to\infty\), such components must either fill out the circle or be entirely absent, up to a set of measure zero.

Figure 1: The map \(f:y\mapsto 2y\) is measure-preserving.

The way to turn the above intuitive argument into a rigorous proof is to use Lebesgue's density theorem 4. We must show that if \(s\) is not almost empty then \(\mu(s)=1\). Lebesgue tells us that if \(s\) is not almost empty it must contain some point \(d\) of density 1. Using \(d\), we construct the sequence of closed balls \(B_{i}\) centred on \(d\) such that \(\mu(B_{i})=1/o(u_{i})\). Because \(u_{i}+s\) is almost \(s\),
\[\mu(s\cap B_{i})=\mu(s)/o(u_{i})=\mu(B_{i})\mu(s).\]
However since \(d\) has density 1, we know that:
\[\mu(s\cap B_{i})/\mu(B_{i})\to 1.\]
These two results force us to conclude that \(\mu(s)=1\).

The formal version is very slightly more general and appears in mathlib as follows:

```
lemma add_circle.ae_empty_or_univ_of_forall_vadd_ae_eq_self
  {s : set (add_circle T)} (hs : null_measurable_set s volume)
  {ι : Type*} {l : filter ι} [l.ne_bot] {u : ι → add_circle T}
  (hu₁ : ∀ i, (u i +ᵥ s : set _) =ᵐ[volume] s)
  (hu₂ : tendsto (add_order_of ∘ u) l at_top) :
  s =ᵐ[volume] (∅ : set $ add_circle T) ∨ s =ᵐ[volume] univ :=
```

Theorem 8 follows from lemma 9 because any set \(s\) satisfying \(f^{-1}(s)=s\) for \(f:y\mapsto ny\) satisfies \(u_{i}+s=s\) for the sequence:
\[u_{i}=[1/n^{i}]\in\mathbb{S}.\]
Note that we need \(n\geq 2\) in order to have \(o(u_{i})=n^{i}\to\infty\). The formal statement appears in mathlib as follows:

```
lemma ergodic_nsmul {n : ℕ} (hn : 1 < n) :
  ergodic (λ y : add_circle T, n • y) :=
```

In fact we needed the following mild generalisation of theorem 8:

**Theorem 10**. _Given \(n\in\mathbb{N}\) and \(x\in\mathbb{S}\), the map:_
\[\mathbb{S}\to\mathbb{S},\qquad y\mapsto ny+x\]
_is measure-preserving if \(n\geq 1\) and is ergodic if \(n\geq 2\)._

This follows easily from theorem 8 because if we define the measure-preserving equivalence:
\[e:\mathbb{S}\to\mathbb{S},\qquad y\mapsto\frac{x}{n-1}+y,\]
then a quick calculation reveals:
\[e\circ g\circ e^{-1}=f,\]
where \(f:y\mapsto ny\) and \(g:y\mapsto ny+x\). As a result, theorem 10 follows from theorem 8 via:

```
lemma ergodic_conjugate_iff {e : α ≃ᵐ β} (h : measure_preserving e μ μ') :
  ergodic (e ∘ f ∘ e.symm) μ' ↔ ergodic f μ :=
```

## 6 Gallagher's theorem

### Points of approximate order

Recall the definition of the set \(W\subseteq\mathbb{S}\) appearing in the statement of theorem 2:
\[W=\limsup_{n>0}\operatorname{Th}(\delta_{n},\{y\in\mathbb{S}\ |\ o(y)=n\}).\]
Key to the proof of theorem 2 is the way in which the sets \(\operatorname{Th}(\delta_{n},\{y\in\mathbb{S}\ |\ o(y)=n\})\) interact with the group structure of \(\mathbb{S}\). We thus made the following definition:

**Definition 11**. _Let \(A\) be a seminormed group, \(n\in\mathbb{N}\) (non-zero), and \(\delta\in\mathbb{R}\). We shall use the notation:_
\[\mathbb{AO}(A,n,\delta)=\operatorname{Th}(\delta,\{y\in A\ |\ o(y)=n\}),\]
_for the set of points that have approximate order \(n\), up to a distance \(\delta\)._

For example, as shown in figure 2, \(\mathbb{AO}(\mathbb{S},n,\delta)\) is a union of \(\varphi(n)\) arcs of diameter \(2\delta\), centred on the points \([m/n]\) with \(0\leq m<n\) and \(m\) coprime to \(n\) (where \(\varphi\) is Euler's totient function).
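This description makes the measure of \(\mathbb{AO}(\mathbb{S},n,\delta)\) easy to estimate: for \(\delta\) small enough that the \(\varphi(n)\) arcs are disjoint,
\[\mu(\mathbb{AO}(\mathbb{S},n,\delta))=2\delta\varphi(n),\qquad\text{e.g.}\quad\mu(\mathbb{AO}(\mathbb{S},5,\delta))=2\delta\cdot\varphi(5)=8\delta,\]
and in general \(\mu(\mathbb{AO}(\mathbb{S},n,\delta))\leq 2\delta\varphi(n)\). Sums of the form \(\sum\varphi(n)\delta_{n}\) therefore control the size of \(W\), as section 7.2 below makes precise.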
The formal counterpart of definition 11 is:

```
@[to_additive] def approx_order_of (A : Type*) [seminormed_group A]
  (n : ℕ) (δ : ℝ) : set A :=
thickening δ {y | order_of y = n}
```

Using this language, the only properties of \(\mathbb{AO}(A,n,\delta)\) that we needed are as follows:

**Lemma 12**. _Let \(A\) be a seminormed commutative group, \(\delta\in\mathbb{R}\), \(a\in A\), and \(m,n\in\mathbb{N}\) (both non-zero). Then8:_

Footnote 8: If \(s\subseteq A\) the notation \(m\cdot s\) means \(\{my\ |\ y\in s\}\).

* _(i)_ \(m\cdot\mathbb{AO}(A,n,\delta)\subseteq\mathbb{AO}(A,n,m\delta)\) _if_ \(m\)_,_ \(n\) _are coprime,_
* _(ii)_ \(m\cdot\mathbb{AO}(A,nm,\delta)\subseteq\mathbb{AO}(A,n,m\delta)\)_,_
* _(iii)_ \(a+\mathbb{AO}(A,n,\delta)\subseteq\mathbb{AO}(A,o(a)n,\delta)\) _if_ \(o(a)\) _and_ \(n\) _are coprime,_
* _(iv)_ \(a+\mathbb{AO}(A,n,\delta)=\mathbb{AO}(A,n,\delta)\) _if_ \(o(a)^{2}\) _divides_ \(n\)_._

In fact property (iv) holds under the weaker assumption that \(r(o(a))o(a)\) divides \(n\) where \(r(l)\) denotes the radical of a natural number \(l\), but we needed only the version stated in the lemma. We made one last definition in support of theorem 2:

Figure 2: The set of points of approximate order 5 in \(\mathbb{S}\), up to a distance \(\delta\approx 0.01\).

**Definition 13**. _Let \(A\) be a seminormed group and \(\delta_{1},\delta_{2},\ldots\) a sequence of real numbers. We shall use the notation:_
\[\mathbb{WA}(A,\delta)=\limsup_{n>0}\mathbb{AO}(A,n,\delta_{n}),\]
_for the set of elements of \(A\) that are well-approximable by points of finite order, relative to \(\delta\)._

Note that \(W=\mathbb{WA}(\mathbb{S},\delta)\) where \(W\) is the set appearing in the statement of theorem 2. The formal counterpart of definition 13 is:

```
@[to_additive] def well_approximable (A : Type*) [seminormed_group A]
  (δ : ℕ → ℝ) : set A :=
blimsup (λ n, approx_order_of A n (δ n)) at_top (λ n, 0 < n)
```

The additive version of this definition is add_well_approximable.

### The main theorem

We are finally in a position to assemble everything and provide a proof of our main result. For the reader's convenience we reproduce the formal statement which appeared above in listing 1:

```
theorem add_circle.add_well_approximable_ae_empty_or_univ
  (δ : ℕ → ℝ) (hδ : tendsto δ at_top (𝓝 0)) :
  (∀ᵐ x, ¬ add_well_approximable (add_circle T) δ x) ∨
  ∀ᵐ x, add_well_approximable (add_circle T) δ x :=
```

**Theorem 14**. _Let \(\delta_{1},\delta_{2},\ldots\) be a sequence of real numbers such that \(\delta_{n}\to 0\) as \(n\to\infty\). Then \(\mathbb{WA}(\mathbb{S},\delta)=_{a.e.}\emptyset\) or \(\mathbb{WA}(\mathbb{S},\delta)=_{a.e.}\mathbb{S}\)._

Proof (sketch). Fix a prime \(p\) and split the index set according to the power of \(p\) dividing \(n\), writing:
\[W=A_{p}\cup B_{p}\cup C_{p}, \tag{10}\]
where \(A_{p}\), \(B_{p}\), \(C_{p}\) are the limsups of \(\mathbb{AO}(\mathbb{S},n,\delta_{n})\) restricted to those \(n\) with \(p\nmid n\), with \(p\mid n\) but \(p^{2}\nmid n\), and with \(p^{2}\mid n\), respectively. The key claims are that:
* (a) the image of \(A_{p}\) under the map \(y\mapsto py\) is almost contained in \(A_{p}\),
* (b) the image of \(B_{p}\) under the map \(y\mapsto py+[1/p]\) is almost contained in \(B_{p}\),
* (c) \(C_{p}\) is invariant under the map \(y\mapsto y+[1/p]\).
To see why (a) holds, consider:
\[p\cdot A_{p}=p\cdot\limsup_{n>0,\,p\nmid n}\mathbb{AO}(\mathbb{S},n,\delta_{n})\subseteq\limsup_{n>0,\,p\nmid n}p\cdot\mathbb{AO}(\mathbb{S},n,\delta_{n})\subseteq\limsup_{n>0,\,p\nmid n}\mathbb{AO}(\mathbb{S},n,p\delta_{n})\quad\text{by lemma 12 part (i)},\]
\[\phantom{p\cdot A_{p}}=_{a.e.}A_{p}\quad\text{by lemma 5}.\]
A very similar argument shows why (b) holds except using parts (ii), (iii) of lemma 12 instead of part (i). Claim (c) is actually the most straightforward and holds by direct application of lemma 12 part (iv).

Now if \(A_{p}\) is not almost empty for some prime \(p\), then because it is almost invariant under an ergodic map, lemma 7 tells us that it must be almost equal to \(\mathbb{S}\). Since \(A_{p}\subseteq W\), \(W\) must also almost equal \(\mathbb{S}\) and we have nothing left to prove. We may thus assume \(A_{p}\) is almost empty for all primes \(p\). By an identical argument, we may also assume \(B_{p}\) is almost empty for all primes \(p\). In view of (10), this means that:
\[W=_{a.e.}C_{p}\quad\text{for all }p.\]
Thus, by (c), \(W\) is almost invariant under the map \(y\mapsto y+[1/p]\) for all primes \(p\). The result then follows by applying lemma 9. Omitting code comments, the formal version of this \(\sim 30\) line informal proof in mathlib requires 101 lines.

## 7 Final words

### Removing the \(\delta_{n}\to 0\) hypothesis

As mentioned in the introduction, the hypothesis that \(\delta_{n}\to 0\) in theorem 14 may be removed. A nice follow-up project would be to supply the proof in this case. By replacing \(\delta_{n}\) with \(\max(\delta_{n},0)\), we may assume \(0\leq\delta_{n}\) for all \(n\). Given this, if \(\delta_{n}\not\to 0\), then in fact:
\[\mathbb{WA}(\mathbb{S},\delta)=\mathbb{S}.\]
Note that this is a true equality of sets; it is not a measure-theoretic result. The main effort would be to establish some classical bounds on the growth of the divisor-count and totient functions. In fact Bloom and Mehta have already formalised some of the required bounds as part of their impressive Unit Fractions Project formalising Bloom's breakthrough [1; 2]. Once the relevant results are migrated to mathlib, removing the \(\delta_{n}\to 0\) hypothesis will become even easier.

### The Duffin-Schaeffer conjecture

Given some sequence of real numbers \(\delta_{1},\delta_{2},\ldots\), Gallagher's theorem tells us that \(\mathbb{WA}(\mathbb{S},\delta)\) is almost equal to either \(\emptyset\) or to \(\mathbb{S}\). The obvious question is how to tell which of these two possibilities actually occurs for the sequence in hand. The Duffin-Schaeffer conjecture, now a theorem thanks to Koukoulopoulos and Maynard, provides a very satisfying answer:
\[\mathbb{WA}(\mathbb{S},\delta)=_{a.e.}\begin{cases}\emptyset&\text{if }\sum\varphi(n)\delta_{n}<\infty,\\ \mathbb{S}&\text{if }\sum\varphi(n)\delta_{n}=\infty,\end{cases}\]
where \(\varphi\) is Euler's totient function. That \(\mathbb{WA}(\mathbb{S},\delta)=_{a.e.}\emptyset\) if \(\sum\varphi(n)\delta_{n}<\infty\) is very easy (it follows from the 'easy' direction of the Borel-Cantelli theorem). The converse is extremely hard. It was first stated in 1941 [6] and was one of the most important open problems in metric number theory for almost 80 years. A formal proof of the converse would be especially satisfying given how elementary the statement of the result is. After Gallagher's theorem, perhaps the next best target is lemma 5.2 in [11].
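For orientation, the criterion is easy to apply to simple sequences (an elementary worked example, not taken from the works cited above). For \(\delta_{n}=1/n^{3}\), since \(\varphi(n)\leq n\) we have:
\[\sum_{n}\varphi(n)\delta_{n}\leq\sum_{n}\frac{1}{n^{2}}<\infty,\]
so \(\mathbb{WA}(\mathbb{S},\delta)=_{a.e.}\emptyset\). For \(\delta_{n}=1/n^{2}\), the classical average order \(\sum_{n\leq N}\varphi(n)\sim(3/\pi^{2})N^{2}\) gives, by partial summation, \(\sum_{n}\varphi(n)/n^{2}=\infty\), so \(\mathbb{WA}(\mathbb{S},\delta)=_{a.e.}\mathbb{S}\); this is the circle analogue of the quadratic approximation result (9).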
### Developing against master

It would have been impossible to complete the work discussed here without the extensive theories of algebra, measure theory, topology, etc. contained within mathlib. As we have said, all of our code was added directly to the master branch of mathlib; most of it is 'library code', not specific to Gallagher's theorem. Although it is harder to develop this way, we believe it is essential in order to permit formalisation of contemporary mathematics. We therefore wish to exhibit this project as further evidence that this workflow can succeed, and we hope to encourage even more people to follow suit.
2308.15778
Liquid-assisted laser nanotexturing of silicon: onset of hydrodynamic processes regulated by laser-induced periodic surface structures
Here, upon systematic studies of femtosecond-laser processing of monocrystalline Si in oxidation-preventing methanol, we showed that the electromagnetic processes dominating at initial steps of the progressive morphology evolution define the onset of the hydrodynamic processes and resulting morphology upon subsequent multi-pulse exposure. In particular, under promoted exposure quasi-regular subwavelength laser-induced periodic surface structures (LIPSSs) were justified to evolve through the template-assisted development of the Rayleigh-Plateau hydrodynamic instability in the molten ridges forming quasi-regular surface patterns with a supra-wavelength periodicity and preferential alignment along polarization direction of the incident light. Subsequent exposure promotes fusion-assisted morphology rearrangement resulting in a spiky surface with a random orientation, yet constant inter-structure distance correlated with initial LIPSS periodicity. Along with the insight onto the physical picture driving the morphology evolution and supra-wavelength nanostructure formation, our experiments also demonstrated that the resulting quasi-regular and random spiky morphology can be tailored by the intensity/polarization distribution of the incident laser beam allowing on-demand surface nanotexturing with diverse hierarchical surface morphologies exhibiting reduced reflectivity in the visible and shortwave IR spectral ranges. Finally, we highlighted the practical attractiveness of the suggested approach for improving near-IR photoresponse and expanding operation spectral range of vertical p-n junction Si photo-detector operating under room temperature and zero-bias conditions via single-step annealing-free laser nanopatterning of its surface.
Yulia Borodaenko, Dmitriy Pavlov, Artem Cherepakhin, Eugeny Mitsai, Andrei Pilnik, Sergey Syubaev, Stanislav O. Gurbatov, Evgeny Modin, Aleksey P. Porfirev, Svetlana N. Khonina, Aleksandr V. Shevlyagin, Evgeny L. Gurevich, Aleksandr A. Kuchmizhak
2023-08-30T06:24:21Z
http://arxiv.org/abs/2308.15778v1
Liquid-assisted laser nanotexturing of silicon: onset of hydrodynamic processes regulated by laser-induced periodic surface structures

###### Abstract

Ultrashort laser exposure can push materials to transient highly nonequilibrium states driving their subsequent morphology self-organization stimulated by excited electromagnetic and hydrodynamic processes. Laser processing parameters regulate these processes, defining the resulting surface nano-morphologies demanded for diverse applications. Here, upon systematic studies of femtosecond-laser processing of monocrystalline Si in oxidation-preventing methanol, we showed that the electromagnetic processes dominating at initial steps of the progressive morphology evolution define the onset of the hydrodynamic processes and resulting morphology upon subsequent multi-pulse exposure. In particular, under promoted exposure quasi-regular subwavelength laser-induced periodic surface structures (LIPSSs) were justified to evolve through the template-assisted development of the Rayleigh-Plateau hydrodynamic instability in the molten ridges, forming quasi-regular surface patterns with a supra-wavelength periodicity and preferential alignment along the polarization direction of the incident light. Subsequent exposure promotes fusion-assisted morphology rearrangement resulting in a spiky surface with a random orientation, yet constant inter-structure distance correlated with the initial LIPSS periodicity. Along with the insight into the physical picture driving the morphology evolution and supra-wavelength nanostructure formation, our experiments also demonstrated that the resulting quasi-regular and random spiky morphology can be tailored by the intensity/polarization distribution of the incident laser beam, allowing on-demand surface nanotexturing with diverse hierarchical surface morphologies exhibiting reduced reflectivity in the visible and shortwave IR spectral ranges. Finally, we highlighted the practical attractiveness of the suggested approach for improving the near-IR photoresponse and expanding the operation spectral range of a vertical p-n junction Si photodetector operating under room-temperature and zero-bias conditions _via_ single-step annealing-free laser nanopatterning of its surface.

## I Introduction

Surface micro- and nanostructuring represents an important technological procedure allowing to improve basic characteristics of critical materials or even endow them with new functionalities. Bright examples of such material improvement can be readily found in almost all aspects of modern engineering and device fabrication, where appropriate patterning allows control of the electrical, optical, biological, tribological and wetting properties of the processed surfaces. Direct laser processing of materials using pulsed radiation represents a straightforward, non-lithographic technique of continuously growing importance for both scientific and industrial communities. Such interest is largely boosted by the development of the laser market offering cheaper, high-speed and stable pulsed sources, more precise beam shaping and scanning systems, as well as by the growth of fundamental understanding of laser-matter interaction _via_ the development of computer-aided simulation tools and analysis techniques based on machine learning [1; 2; 3; 4; 5; 6; 7].
Apart from common direct patterning of the surface with a tightly focused laser beam along a computer-defined trajectory, the formation of surface features _via_ laser-driven self-organization phenomena has gained significant research interest during the past decades owing to the practically attractive simplicity, fabrication scalability and ability of such an approach to create diverse regular or random surface morphologies with characteristic nanoscale footprints [8; 9]. Laser-induced periodic surface structures (LIPSSs; also referred to as ripples), discovered in 1965 [10] shortly after intensification of the studies related to laser-matter interaction, represent a remarkable example of such self-organization driven by pulsed (or even CW [11]) laser radiation on the surface of practically all types of materials [12; 13; 14]. LIPSSs usually appear as a (quasi-)regular grating-type surface morphology aligned perpendicular or parallel to the polarization vector of the incident radiation, depending on the dominating contribution of the effects driving the self-organization. In most cases, these effects have an "electromagnetic" nature, explaining the periodic surface ordering _via_ interference of the incident radiation with the surface/scattered waves, which yields a characteristic LIPSS periodicity \(\Lambda\) close to the laser wavelength \(\lambda\) used. Specific experimental conditions (such as texturing in liquids) or processing regimes driving the laser-irradiated material into a nonequilibrium state and facilitating interference of the counter-propagating surface waves allow to shrink the grating periodicity down to deep subwavelength values (\(\Lambda\ll\lambda\); high spatial frequency LIPSSs). Supra-wavelength surface patterns with the periodicity \(\Lambda>\lambda\), whose formation falls outside the electromagnetic scenario, were also reported on the surfaces of diverse materials, manifesting themselves as a universal phenomenon and unveiling the complexity of the laser-driven self-organization processes that are yet far from complete understanding [15; 16; 17]. Monocrystalline Si (c-Si), a critically important semiconductor widely applied in a variety of optoelectronic and nanophotonic devices [18; 19], was widely studied in the context of LIPSS formation, revealing the ability to produce on its surface both high spatial frequency LIPSSs [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30] as well as quasi-regular supra-wavelength patterns [31; 32; 33; 34; 35; 36; 37; 38]. The latter type of structures was generally explained by hydrodynamic processes [39], yet their ordering along the polarization direction and an evident correlation between the characteristic periodicity and the laser wavelength were established [34], reflecting the lack of complete understanding of the underlying physical processes driving morphology self-organization. In this respect, more systematic studies targeted at unveiling the role of hydrodynamic and electromagnetic processes in the formation of supra-wavelength structures are required to achieve full-scale control over the surface morphology, crucial for obtaining desired optical properties as well as device performance optimization. In this paper, we systematically studied fs-laser processing of c-Si immersed in methanol, revealing that the electromagnetic processes dominating at the initial steps of the progressive morphology evolution define the onset of the hydrodynamic processes and the resulting morphology upon subsequent multi-pulse exposure.
In particular, under promoted exposure, regular subwavelength LIPSSs were shown to evolve through the template-assisted development of the Rayleigh-Plateau (R.-P.) hydrodynamic instability in the molten ridges, creating quasi-regular surface patterns with supra-wavelength periodicity and alignment along the polarization direction of the incident light. Subsequent exposure promotes fusion-assisted morphology rearrangement, resulting in a spiky surface with a random orientation, yet a constant inter-structure distance correlated with the initial LIPSS periodicity. Along with the insight into the physical picture driving the morphology evolution, our experiments also demonstrate that the resulting quasi-regular and random spiky morphology can be tailored by the intensity/polarization distribution of the incident laser beam, allowing to achieve diverse hierarchical surface morphologies with reduced reflectivity in the visible and shortwave IR (SWIR) spectral ranges. In contrast to the well-studied fs-laser texturing of c-Si in air, demonstrating a similar combination of the electromagnetic and hydrodynamic processes driving the morphology evolution, the methanol strongly reduces the efficiency of silicon oxidation as well as allows to achieve a larger density of nanostructures per surface site, defined by the initially subwavelength LIPSS periodicity. Finally, using the developed single-step fabrication strategy, we produced a vertical p-n junction Si photodetector with an active laser-patterned area providing an enhanced near-IR photoresponse and an expanded operation window without annealing post-treatment, under room-temperature conditions and zero bias voltage.

## II Results and discussion

### Formation of spiky Si under static irradiation conditions: onset of hydrodynamic processes in LIPSSs

We started from systematic studies of the fs-laser ablation of Si in methanol under static (or spotted) irradiation conditions, where laser beams with either a Gaussian- or a donut-shape intensity profile and variable polarization distributions (further referred to as Gaussian and cylindrical vector (CV) beams) were applied to expose certain sites of the Si surface (Fig. 1a). For simplicity, to illustrate the key processes underlying the morphology modulation, namely, the transition from the LIPSSs to a randomly arranged spiky morphology, we fixed the laser fluence at \(F=\)0.18 J/cm\({}^{2}\) (that is larger than the single-pulse ablation threshold of Si in liquid [29]) and varied the number of applied pulses per site \(N\). Under such irradiation conditions, any substantial modifications of the surface start at \(N>5\), appearing as the subwavelength LIPSSs that exhibit a periodicity \(\Lambda=\)260 \(\pm\) 30 nm and are arranged perpendicularly to the polarization direction (left panel, Fig. 1b). Consequently, the azimuthally polarized CV beam imprints donut-shaped areas containing radially oriented LIPSSs (left panel, Fig. 1c). The LIPSSs were previously suggested to originate from interference of the incident radiation with surface plasma waves supported at the interface between the surrounding liquid (methanol, in this case) and the photoexcited layer of Si with the dielectric permittivity modified by an increased concentration of free carriers [29].
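A quick back-of-the-envelope comparison, using only values quoted in this paper: the measured period corresponds to \(\Lambda/\lambda\approx 260/515\approx 0.5\), i.e., the structures form at roughly half of the 515 nm processing wavelength (see Methods), consistent with the plasmon-interference scenario at the methanol/photoexcited-Si interface and noticeably smaller than the \(\Lambda\approx 0.8\lambda\) typically reported for LIPSSs produced in air (discussed further below).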
Subsequently incident laser pulses are absorbed by the LIPSS ridges, resulting in their heating and partial melting. Molten ridges of a certain size are expected to become hydrodynamically unstable, which results in modulations of their height with a characteristic period \(\Lambda_{R.-P.}\approx\) 550-700 nm associated with the development of the R.-P. instability [40; 41; 42; 43]. These height/width modulations are clearly seen on the surface morphologies imprinted with both Gaussian and CV beams at \(N>25\) (middle panels, Fig. 1b,c) and are expected to facilitate light absorption of the subsequently incident laser pulses. This results in local fusion of the neighboring protrusions into spiky features. The process continues upon subsequent surface irradiation, eventually leading to the formation of a self-organized nanotextured surface covered with randomly arranged protrusions (further referred to as spiky Si). The sketch provided in Fig. 1d graphically illustrates the main steps behind the transition from the ordered subwavelength LIPSSs to the self-organized textures. To justify the deductions regarding such laser-induced morphology evolution, its effect on the modification of the laser radiation absorption was first analyzed by numerically solving Maxwell equations with a commercial electromagnetic solver (Lumerical, Ansys). As an initial step, we considered smooth Si LIPSSs in the form of nanotrenches with a semi-elliptical cross-section arranged with a periodicity of 260 nm perpendicularly to the polarization direction of the normally incident plane wave, and calculated the in-plane near-surface distribution of the absorbed laser energy \(\propto\)E\({}^{2}\cdot\)Im\(\left[\epsilon\right]\) (E is an EM-field amplitude; \(\epsilon\) is the dielectric permittivity of silicon) in their vicinity. These calculations clearly show preferential absorption of the deposited laser energy by the LIPSS ridges (left panel, Fig. 1e) as well as a rearrangement of the absorbed energy profile once the R.-P. modulations and larger fused nanostructures appear (middle and right panels, Fig. 1e). In addition, both types of structures appeared to enhance laser radiation absorption, promoting material melting and fusion. Let us next justify the onset of the R.-P. instability developing in the molten LIPSS ridges with a characteristic period \(\Lambda_{R.-P.}\approx\) 550-700 nm according to the experimental data (middle panels; Fig. 1b,c). We considered a ridge having at the initial state (i.e., before the instability starts developing) a half-cylindrical shape, as shown in the top panel of Fig. 1f, while at the final state each half-period of the destabilised ridge was approximated with a half-cone. The precise estimation of the Rayleigh-Plateau destabilisation requires information about the contact angle of liquid silicon on a solid silicon substrate [43], which is not known and most probably cannot be precisely identified due to the presence of silicon dioxide [44].

Figure 1: (a) Schematically illustrated procedure of fs-laser nanotexturing of monocrystalline Si in methanol. (b,c) Series of top-view SEM images of the spotted Si surface modifications produced by a linearly polarized Gaussian beam and an azimuthally polarized CV beam at a fixed fluence of \(F=\)0.18 J/cm\({}^{2}\) and a variable number of applied pulses per site \(N\) (indicated near each image). Scale bars in panels (b) and (c) correspond to 0.5 and 2 \(\mu\)m, respectively. One quarter of the circle-shape morphology produced with the CV beam is shown for clarity. (d) Sketch demonstrating the key steps behind the laser-driven morphology transition of the Si surface from the ordered subwavelength LIPSSs to self-organized spikes. (e) In-plane maps of the absorbed laser energy \(\propto\)E\({}^{2}\cdot\)Im\(\left[\epsilon\right]\) in the vicinity of the LIPSS ridges of variable morphologies. The white contour highlights the modeled morphologies (elongated elliptical pitches in the Si wafer and circularly shaped protrusions). Double-side arrows indicate the polarization direction. The obtained maps were averaged over several cross-sections calculated at variable depth below the Si surface. (f) Schematic representation of one half of the period of the instability used for the estimations.

Hence, we assessed the characteristic destabilisation time \(t_{i}\) and the possible range of the characteristic periods \(\Lambda_{R.-P.}\) based on the comparison between the surface energy in the initial and final states. In doing this, we supposed that the volume of the considered ridge does not change during the R.-P. destabilisation, enabling the estimation of the ratio between the initial radius \(R_{0}\) and the maximal radius of the cone in the final state \(R_{1}\). From \(\frac{1}{2}\pi R_{0}^{2}\ell=\frac{1}{2}\frac{1}{3}\pi R_{1}^{2}\ell\) it follows that \(R_{1}=\sqrt{3}R_{0}\). A destabilisation is only possible if it is energetically advantageous for the system. Only such periods of the instability \(\Lambda\) can develop, for which the surface energy decreases upon the shape transformation. No other kinds of energy are considered, since the surface tension dominates on the submicrometer scale. The surface energy of the side surface of a half-cylinder is \(\mathcal{E}_{0}=\pi R_{0}\ell\sigma\) and in the final state the energy of the side surface of the half-cone is \(\mathcal{E}_{1}=\frac{1}{2}\pi R_{1}a\sigma\), where the surface tension of liquid silicon is \(\sigma\approx 0.7\,...\,0.8\,N/m\)[45] and the side wall of the cone is \(a=\sqrt{\ell^{2}+R_{1}^{2}}\). From the conditions that the energy difference must be \(\mathcal{E}_{0}-\mathcal{E}_{1}>0\) and \(\ell=\Lambda_{R.-P.}/2\), it follows that only periods with \(\Lambda_{R.-P.}>6R_{0}\) are possible (indeed, with \(R_{1}=\sqrt{3}R_{0}\) the condition \(\pi R_{0}\ell\sigma>\frac{1}{2}\pi R_{1}a\sigma\) reduces to \(\ell^{2}>\frac{3}{4}\left(\ell^{2}+3R_{0}^{2}\right)\), i.e., \(\ell>3R_{0}\)). This estimation fits well the experimentally observed periods of the instability \(\approx 0.6\,...\,0.7\,\mu m\) at \(R_{0}\approx 0.1\,\mu m\). For the estimation of the time \(t_{i}\) needed for the instability with a given period \(\Lambda_{R.-P.}\) to develop, we supposed that the gain in the energy due to the reduction of the surface limits the work of the friction forces \(A=Fs\), which appears due to the motion of the viscous melt [46]. Here \(s\sim\ell\) is the distance, at which Si atoms are transported upon the instability, and the friction force is estimated with Newton's law as \(F=\eta Av/R_{0}\), where the viscosity of liquid silicon is \(\eta=0.7\,...\,0.9\,mPa\,s\)[47], the average flow velocity of the melt on the surface is \(v\approx s/t_{i}\), and the area of the moving liquid is \(A\approx\pi R_{0}\ell\). With \(\ell=\Lambda_{R.-P.}/2\) this criterion can be rewritten as: \[\frac{\eta\pi\Lambda_{R.-P.}^{3}}{8t_{i}}\lesssim\sigma\pi R_{0}\frac{\Lambda _{R.-P.}}{2}\left(1-\frac{\sqrt{3}}{2}\left(1+12\frac{R_{0}^{2}}{\Lambda_{R.- P.}^{2}}\right)\right). \tag{1}\] The term in the brackets in equation (1) is of the order of one, and the equation can be simplified as \[t_{i}\gtrsim\frac{\eta\Lambda_{R.-P.}^{2}}{\sigma R_{0}}. \tag{2}\]
Substituting the experimentally observed period of \(\Lambda_{R.-P.}\approx 0.6\,...\,0.7\,\mu m\), we obtain that the instability develops within a few nanoseconds, which agrees with the experimental assessments of the characteristic solidification time \(t_{m}\) during which the Si exposed to a fs-laser pulse remains in the molten state, thus allowing the instability to develop [48]. This time can be roughly estimated based on the assumption that the temperature in the molten Si ridge is \(T_{1}\approx 2T_{m}\), while the temperature of the solid substrate in the proximity of the molten ridge is \(T_{2}\approx T_{m}\), where \(T_{m}\) is the Si melting point. We supposed that after the end of the laser pulse the heat conduction is the only mechanism of cooling and used Fick's laws of diffusion, in which the Laplace operator is approximated as \(\nabla^{2}T\approx T_{m}/R_{0}^{2}\) and the time derivative of the temperature as \(\dot{T}\approx T_{m}/t_{m}\). With the thermal diffusivity of liquid silicon \(D_{T}\approx 0.1\,cm^{2}/s\)[49], we estimate \(t_{m}\approx R_{0}^{2}/D_{T}\sim 10^{-9}\,s\), which is of the order of the estimated time required for the development of the instability and agrees with the previously reported data [48]. According to the performed analysis, the development of the instabilities with larger periods is evidently limited by the solidification time \(t_{m}\) as well as by the characteristic LIPSS morphology (periodicity and ridge size; Eq. (2)). This means that the regulation of the laser-processing conditions that define the surface morphology at the initial state (i.e., before the instabilities start to develop) allows controlling the resulting morphology of the self-organized textures, such as the average size and density of the spiky features. At the same time, larger applied fluences as well as destabilization of the LIPSSs through a fusion-assisted formation of higher light-absorbing features will evidently increase the size of the molten pool and its solidification time, making possible the development of the instabilities with larger wavelengths (Fig. 1e).

### Formation of spiky Si under dynamic irradiation conditions: correlation between irradiation conditions, nanomorphology evolution and optical properties

The observed surface morphologies, including regular and destabilized LIPSSs as well as disordered spiky Si, can be expanded over practically relevant areas under dynamic irradiation conditions, namely, by scanning a laser beam of diameter \(d\) across the Si surface at a constant speed \(\nu\) and pulse repetition rate \(\kappa\) (Fig. 2a). In fact, both parameters define the number of pulses \(N=d\kappa\)/\(\nu\) applied per surface site, similarly to the case of the static irradiation conditions discussed in the previous section. Moreover, since the instability development is related to the amount of the molten phase of the material, tailoring the laser fluence \(F\) within a certain range above the ablation threshold also allows tuning the resulting surface morphology. Series of top- and side-view SEM images in Fig. 2b-d illustrate the ability to reproduce the main discussed morphologies by scanning the Si surface with a linearly polarized Gaussian beam along a meander-like trajectory at an inter-line spacing of 2.5 \(\mu\)m, \(\kappa=\)1 kHz, \(\nu=\)50 \(\mu\)m/s and varying \(F\) (Fig. 2b-d).
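As a quick worked example (taking the beam diameter \(d\approx\)5 \(\mu\)m estimated in the Methods section), these scan settings correspond to \[N=\frac{d\kappa}{\nu}=\frac{5\,\mu m\times 10^{3}\,s^{-1}}{50\,\mu m/s}=100\ \text{pulses per site},\] i.e., a dose well within the multi-pulse regime where the hydrodynamic destabilization of the LIPSSs was observed under static exposure (middle and right panels, Fig. 1b,c).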
The obtained SEM images of the surface morphologies were also subjected to Fast Fourier Transform (FFT) analysis, clearly showing peaks related to the regular LIPSSs as well as those associated with the periodic modulations caused by the R.-P. instability (Fig. 2c,d). The FFT analysis shows that the intensity of the latter peaks grows with the applied fluence \(F\), while the protrusions mainly follow the direction perpendicular to the LIPSS orientation. At a laser fluence of \(F\)=0.18 J/cm\({}^{2}\), a completely disordered spiky Si surface can be produced, as evidenced by the FFT analysis (Fig. 2d). The performed analysis also highlights that upon increasing the number of applied pulses the electromagnetic processes underlying the formation of the ordered LIPSSs are gradually replaced by the hydrodynamic ones that evidently dominate at larger \(N\). Noteworthy, a further increase of the number of applied pulses (and/or laser fluence \(F\)) promotes melting and fusion of the neighboring protrusions, resulting in an increase of their geometric size and a corresponding decrease of their density. In fact, an increase of the laser fluence \(F\) from 0.166 (the minimal fluence required to achieve spiky Si at \(\nu\)=50 \(\mu\)m/s) to 0.199 J/cm\({}^{2}\) halves the density of the spiky features (from 2.9 to 1.53 structures per \(\mu\)m\({}^{2}\)). The increase in the average distance between the spiky features with time agrees with the theory of coarsening droplets on a plane predicting the distance growth \(\propto t^{\beta}\) with \(\beta\lesssim 3/8\)[50]. At the same time, at moderate fluence (or \(N\)) the simultaneous contribution of electromagnetic and hydrodynamic processes drives the formation of unique quasi-regular hierarchical morphologies, where LIPSS ridges act as sites for templated dewetting _via_ the development of the R.-P. instability, resulting in the largest density of structures per surface area. Since the LIPSS formation governs the subsequent morphology evolution, it is reasonable to suppose that control over the morphology at the initial step (largely defined by electromagnetically driven self-organization processes) will substantially affect the resulting relief at the steps when the hydrodynamic processes come into action. To support this deduction, we carried out parametric studies on the Si wafer patterning in methanol using a circularly polarized Gaussian-shaped beam under static and dynamic irradiation conditions. Such irradiation conditions suppress the efficiency of the LIPSS formation along a certain direction as well as the deepening of the nanotrenches by subsequently incident pulses. This leads to the formation of shallow LIPSSs with azimuthally varying orientation at \(N\)=10 and \(F\)=0.16 J/cm\({}^{2}\) (Fig. 3a, left panel). Noteworthy, a similar relief was obtained by double-pulse exposure of the Si wafer using a ps-scale time delay between pulses and orthogonal polarization directions [51].

Figure 3: (a) Series of top-view SEM images of the spotted Si surface modifications produced by a circularly polarized Gaussian beam at fixed fluence and a variable number of applied pulses per site \(N\). (b) SEM images of the Si surface morphology processed in methanol by circularly polarized fs-laser pulses at a variable number of applied pulses per site \(N\). Bottom panels provide the FFT analysis of the main image as well as close-up tilt-view SEM images of the corresponding surface morphologies.
A fixed laser fluence \(F\)=0.16 J/cm\({}^{2}\) and pulse repetition rate \(\kappa\)=1 kHz were used for surface patterning. Scale bars in panels (a) and (b) correspond to 1 and 2 \(\mu\)m, respectively.

Figure 2: (a) Schematic illustration of the Si wafer texturing in methanol using dynamic irradiation conditions and various laser beams. (b-d) Evolution of the surface morphology of the Si wafer processed in methanol by Gaussian-shape linearly polarized fs-laser pulses at \(F\)=0.146 J/cm\({}^{2}\) (regular LIPSSs), 0.155 J/cm\({}^{2}\) (destabilized LIPSSs) and 0.177 J/cm\({}^{2}\) (spiky Si). Bottom panels provide the FFT analysis of the main image as well as close-up tilt-view SEM images of the corresponding surface morphologies. A fixed scanning speed \(\nu\)=50 \(\mu\)m/s and pulse repetition rate \(\kappa\)=1 kHz were used for surface patterning. Scale bars in the top-view and tilt-view SEM images in panels (b)-(d) correspond to 2 and 0.5 \(\mu\)m, respectively.

The produced morphology is much less pronounced compared to those obtained with a linearly polarized beam under the same irradiation conditions (\(N\) and \(F\)) (see Fig. 1b). Subsequent laser exposure of this morphology promotes the formation of disordered LIPSSs, whose height modulation can be driven by the interference of the diverging/converging plasmon waves or by R.-P. instabilities with \(\Lambda_{R.-P.}\approx\Lambda\) defined by the reduced size of the ridges. Eventually, further irradiation (\(N>50\)) initiates merging of the neighboring elevations, resulting in the formation of rather smooth surface protrusions, whose size increases _via_ fusion upon subsequent exposure. Similar surface morphologies were also produced with the circularly polarized Gaussian beam under dynamic irradiation conditions at fixed \(F\) and \(\kappa\), varying the scanning speed \(\nu\) to provide a number of applied pulses \(N\) per site ranging from 50 to 200 (Fig. 3b). These morphologies were analyzed with the FFT, revealing their complete directional independence as well as a constant average distance between spiky features that rises with \(N\). In contrast to the surface patterning with the linearly polarized beam, utilization of the circularly polarized one cannot provide regimes where an evident formation of the LIPSSs destabilized by the R.-P. instability can be justified. Additionally, the obtained morphologies exhibit a shallower and smoother profile under irradiation conditions similar to those used with the linearly polarized beam, revealing a clear difference in the onset of hydrodynamic processes affected by the LIPSS formation at the initial steps, which confirms the previously made deductions. Similar Si patterning experiments were also carried out with an azimuthally polarized CV beam under dynamic irradiation conditions, revealing an even broader diversity of the obtained surface morphologies (see Fig. 4). In particular, near-threshold fluence and a moderate number of applied pulses allow producing patterned tracks containing the radially arranged LIPSSs. Once \(F\) (or \(N\)) increases, the observed grating-type morphology of the peripheral areas of the track changes to a densely arranged spiky surface. This feature can be explained by the donut-shape intensity profile of the CV beam, which sequentially exposes the Si surface to the front and back parts of the donut upon scanning along a linear direction.
In the central parts of the linear scans, the polarization distribution appears to be identical for the front and the back parts of the donut, being almost perpendicular to the scanning direction. However, for the peripheral area the front and the back parts of the donut have different polarization directions, as schematically illustrated in Fig. 4a. Evidently, in the peripheral areas of the scan this creates two overlapped LIPSS gratings with almost perpendicular orientations, resulting in a spiky surface with the largest density of the surface features per surface area (Fig. 4b). Further increase of the dose (\(F\), \(N\)) leads to the previously discussed hydrodynamic R.-P. destabilization and fusion-assisted formation of larger surface features. Noteworthy, the unveiled nanotexturing regime with the overlapping LIPSSs can be expanded over larger surface areas upon scanning the surface with the CV beam along the meander-like trajectory and keeping the offset between the linear scans equal to half of the CV beam diameter (approx. 5 \(\mu\)m; Fig. 4e). The resulting surface morphology produced at \(F\)=0.16 J/cm\({}^{2}\) and \(\nu\)=100 \(\mu\)m/s as well as the FFT analysis of this image are illustrated in Fig. 4f, indicating the presence of both the regular and the hydrodynamically destabilized LIPSSs with a random orientation in the formed surface pattern.

Figure 4: (a) Schematic illustration of the CV beam clarifying the origin of the inhomogeneous patterning of the Si surface under dynamic irradiation conditions (scanning along a linear trajectory). (b-d) SEM images of the Si surface patterned in methanol with an azimuthally polarized CV beam at a fixed fluence \(F\)=0.16 J/cm\({}^{2}\) and variable scanning speed \(\nu\)=150, 100, 50 \(\mu\)m/s, respectively. (e) Scheme of the uniform Si surface patterning with a CV beam at an offset between the linear scans equal to half of the beam diameter (approx. 5 \(\mu\)m). (f) SEM image of the surface morphology produced using such a scanning procedure, \(F\)=0.16 J/cm\({}^{2}\) and \(\nu\)=100 \(\mu\)m/s, as well as the FFT analysis of this image. Scale bar corresponds to 2 \(\mu\)m on the main panels and 0.2 \(\mu\)m on the inset.

The produced spatially homogeneous surface morphologies were further analysed through the way they interact with incident unpolarized electromagnetic waves. More specifically, we first compared the average reflectivity of the laser-patterned surfaces produced with a linearly polarized beam in the visible (0.5 - 0.7 \(\mu\)m) and shortwave IR (SWIR; 1 - 3 \(\mu\)m) spectral regions (Fig. 5a). These studies showed that the SWIR reflectivity gradually drops once the morphology evolves from the regular subwavelength LIPSSs to the spiky Si, which reflects the formation of the larger supra-wavelength structures. Noteworthy, in the SWIR spectral range the regular LIPSSs satisfy the criterion of a subwavelength grating, for which at wavelengths \(\lambda_{R}\) larger than \(\Lambda n_{Si}\) (\(n_{Si}\approx\) 3.5 is the refractive index of Si in the SWIR spectral range [52]) this surface morphology can be approximated by an effective medium with a gradually decreasing refractive index when moving from the air interface towards the bulk. In this respect, the removal of the refractive-index jump decreases the Fresnel reflection at the interface. However, optimal anti-reflective performance can be achieved only when the thickness of this effective medium, defined by the height of the surface structures \(h\), is larger than \(\lambda_{R}(4\sqrt{n_{Si}})^{-1}\).
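Plugging in numbers (a quick check on our part, using the values quoted above): the subwavelength-grating condition is met throughout the SWIR range, since \(\Lambda n_{Si}\approx 0.26\,\mu m\times 3.5\approx 0.9\,\mu m\), while the thickness criterion \[h>\frac{\lambda_{R}}{4\sqrt{n_{Si}}}\approx\frac{\lambda_{R}}{7.5}\] requires \(h\gtrsim\)130 nm at \(\lambda_{R}\)=1 \(\mu\)m and already \(h\gtrsim\)200 nm at \(\lambda_{R}\approx\)1.5 \(\mu\)m.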
Considering the average height of the ridges \(\approx\) 200 nm for the regular LIPSSs (inset; Fig. 5a), this explains the increase of the reflectivity at \(\lambda_{R}>\) 1 \(\mu\)m. The onset of hydrodynamic processes allows creating dense supra-wavelength surface features that efficiently trap the IR light through multiple reflections, resulting in an average reflectivity R\({}_{SWIR}\approx\) 6% for spiky Si (\(vs\) only \(\approx\) 11 and 14 % for destabilized and regular LIPSSs, respectively; Fig. 5b). Noteworthy, all the considered types of structures demonstrate rather low reflectivity in the visible spectral range (down to 5% for destabilized LIPSSs; Fig. 5b), which can be associated with their light-trapping performance as well as a certain structural transformation of the near-surface layer, confirmed by combining transmission electron microscopy and Raman microspectroscopy (Fig. 5c,d). In particular, both methods revealed the formation of a thin amorphous Si layer (\(\alpha\)-Si), with its thickness being about 10-25 nm for the regular LIPSSs according to the TEM studies. Moreover, systematic Raman studies showed that the \(\alpha\)-Si amount decreases with the increasing number of laser pulses applied per surface site (decreasing \(\nu\)), as evidenced by the gradually decaying intensity of the corresponding Raman band centered near 480 cm\({}^{-1}\) (Fig. 5d; solid curves) [29; 53].

Figure 5: (a) Representative FTIR reflectance spectra of the pristine and laser-patterned monocrystalline Si. Various types of the surface morphologies (regular #1 and destabilized LIPSSs #2 as well as the spiky Si #3) were fabricated at \(F=\)0.16 J/cm\({}^{2}\) and \(\kappa\)=1 kHz, varying only the scanning speed \(\nu\) controlling the number of applied pulses per site \(N\). Inset shows an SEM image of the FIB cross-sectional cut made perpendicularly to the ridge orientation. (b) Average visible and SWIR reflectivity of various types of the surface morphologies (regular #1 and destabilized LIPSSs #2 as well as the spiky Si #3) produced using linearly (L), circularly (C) and azimuthally (A) polarized beams and variable \(F\) and \(\nu\). (c) High-resolution TEM image of the LIPSS ridges with their well-seen amorphous interface layer. (d) Corresponding averaged Raman spectra of the laser-patterned Si surface sites related to the mentioned morphologies before (solid curves) and after (dashed curves) annealing at 600 \({}^{\circ}\)C and 10\({}^{-5}\) torr for 30 min. (e) Series of SEM images of the Si surface processed in air at a variable number of applied pulses \(N\) and fixed \(F=\)0.16 J/cm\({}^{2}\). Scale bar corresponds to 2 \(\mu\)m.

Importantly, this observation indicates that promoted laser exposure creates a continuously growing amount of molten Si that recrystallizes over a longer time period, as required for the minimization of the amorphous phase. In this respect, the spiky Si morphology exhibits the smallest amount of \(\alpha\)-Si, almost unaffected by annealing of the patterned Si wafers in an oxygen-free (10\({}^{-5}\) torr) atmosphere at 600 \({}^{\circ}\)C for 30 min (dashed curves; Fig. 5d). As can be seen, the effect of the annealing is more evident for the regular and destabilized LIPSSs, where the initial content of \(\alpha\)-Si is larger.
Systematically studied SWIR and visible-range reflectivities of the main types of the surface structures (marked as #1 - #3 for the regular and destabilized LIPSSs as well as the spiky Si) produced using various laser beams with linear (L), circular (C) and azimuthal (A) polarization within a certain range of laser processing parameters (\(F\), \(\nu\)) are summarized in Fig. 5b, revealing the ability to control the optical properties of the Si surface within the demanding spectral range upon proper surface micro- (nano-)texturing. Finally, let us highlight additional benefits of the Si laser texturing in methanol. In fact, most of the studies on fs-laser processing of Si were carried out under ambient environment, revealing the ability to create supra-wavelength structures (also referred to as grooves) under accumulated exposure. These structures manifested themselves as a universal phenomenon, being observed under diverse experimental conditions (such as visible and IR laser wavelengths [35], angled excitation [36], shaped beams [54], GHz bursts [38], vacuum conditions [37], etc.), with their supra-wavelength periodicity generally correlating with the incident wavelength [34]. These structures at the initial step of their formation are believed to originate from the similar hybrid electromagnetic/hydrodynamic scenario unveiled here, yet an alternative explanation based on hydrothermal waves was also reported [55]. Irrespective of the chosen laser wavelength and processing parameters, the reported characteristic distance between the supra-wavelength grooves, which evidently defines the average density of the spiky features in between, is about twice larger compared to the corresponding LIPSS period \(\Lambda\approx\) 0.8 \(\lambda\) upon processing in air. Taking into account the observed LIPSS periodicity of \(\Lambda\approx\) 0.55 \(\lambda\) and the suggested scenario of the morphology evolution _via_ destabilization and fusion of the neighboring LIPSS ridges, a denser arrangement of the spiky features is expected for the Si surface processed in methanol. To provide a direct comparison, we performed laser texturing experiments on the Si wafer using similar processing parameters under ambient environment and static/dynamic irradiation conditions (Fig. 5e). These results confirm the made deductions regarding (i) the similarity of the physical processes driving the morphology reorganization revealed under dynamic exposure, which results in (ii) a notably lower maximal density of the spiky features (less than 1 structure per \(\mu\)m\({}^{2}\); Fig. 5e) in the textures produced in air. Moreover, an increase of the dose results in a continuously growing amount of oxidized Si that can be crucial for certain applications and has to be removed by ultrasonication or HF etching, in sharp contrast to the surface textures produced in methanol requiring no additional cleaning procedures. Comparative energy-dispersive X-ray (EDX) analysis indicates an order of magnitude larger content of oxygen (\(30\pm 7\) wt.% \(vs.\) 3 wt.% at \(N=\)50, \(F=\)0.16 J/cm\({}^{2}\)) in the surface textures patterned under ambient environment, with the content of oxygen in the pristine Si wafer stored in air being 2 wt.%. Additionally, processing in methanol leads to only a small increase of the carbon content, to 6 wt.% (\(vs.\) 5 wt.% for the non-patterned Si wafer), which can be related to a thermal decomposition of the methanol molecules or substitution of the OH-groups on the silicon surface with -OCH\({}_{3}\)[56].
### Laser-patterned Si p-n junction photo-detector

The discussed benefits of the liquid-assisted laser texturing of Si highlight the practical attractiveness of the developed approach for diverse applications. For example, the nanotextured Si surface can be further functionalized with noble-metal films (nanoparticles) to act as a substrate for optical sensing, where the ability to control the morphological/optical properties is highly demanded to optimize the device performance for a certain pump wavelength [30; 57; 58]. Alternatively, metal-free sensing and light-emitting devices empowered by the rich optics of Mie resonances and benefiting from the absence of luminescence quenching effects are also becoming a popular research direction [59; 60; 61; 62; 63]. Silicon also represents an important material applied in solar cells and photodetectors, where proper laser-assisted nanotexturing can not only remove the optical coupling losses associated with an inherently high refractive index of the material, but also provide additional defect-mediated light-absorption channels boosting the device performance [64; 65]. We justified the applicability of the developed liquid-assisted fs-laser texturing of Si for improving the characteristics of a p-n junction Si photodetector. Usually, extending the operation wavelength range and/or enhancing the absolute photoresponse are the most demanded outputs, which are commonly achieved by nanostructuring-induced near-surface (shallow) junction formation [66; 67; 68] and by Si hyperdoping accompanied by surface texturing [69; 70], when dealing with the low photo-detection performance of the pristine (unprocessed) Si-based devices in the UV and NIR/SWIR ranges, respectively. The scheme of the resulting Si photodetector and a photograph of the chip-packaged device with laser-patterned light-sensitive areas are presented in Fig. 6a,b. For the device realization, we chose the destabilized LIPSS morphology providing the lowest reflectivity near 1000 nm as well as a moderate morphology-affected depth, which has to be carefully controlled considering the low thickness (approx. 2 \(\mu\)m) of the top p-Si layer. Excessive texturing of this layer can deteriorate the crystallinity of the p-n junction area (the space-charge region near the interface between the n- and p-Si layers). Measurements of the photoresponse as a function of the incident laser wavelength (Fig. 6c) revealed the improved performance of the laser-textured device (R\(_{spiky}\)) over the pristine one (R\(_{flat}\)). For clarity, the obtained data is also plotted as the photoresponse ratio, r=R\(_{spiky}\)/R\(_{flat}\), showing its fast increase with the incident light wavelength, with a maximal 2-fold enhancement of the photoresponse at 1200 nm. Importantly, the laser patterning makes the Si p-n junction sensitive to sub-bandgap illumination at least down to 1300 nm, with a room-temperature photoresponse of about several mA/W under zero-bias conditions (self-powered mode). The mechanism of such improved NIR photo-sensitivity can be attributed to Urbach states and laser-induced structural defects generated in the morphology-modified and underlying layers during the laser processing [71]. At the same time, a lower photoresponse of the laser-patterned device, as compared to the pristine one, is observed when shifting to shorter wavelengths, due to the pronounced surface recombination of photo-generated carriers near the edges and side walls of the Si protrusions, facilitated by the lower penetration depth of the laser radiation.
In accordance with Fig. 6c, the specific threshold wavelength is 750 nm, where the photoresponse ratio is equal to 1. As the wavelength of the incident light exceeding the threshold value becomes comparable with the characteristic period of the Si features (\(\approx\Lambda_{R.-P.}\)), a higher photoresponse can be detected. This period matching results in a spectral shift of the photoresponse peak value from 750 to 850 nm, with an additional local maximum at \(\approx 1100\) nm, for the flat and spiky Si p-n junction photodetectors, respectively. An additional confirmation of the defect-related enhancement of the NIR photoresponse is provided in Fig. 6d, demonstrating the dependencies of the normalized photoresponse as a function of the incident laser power measured for both the patterned and pristine devices at the 850 and 1250 nm characteristic pump wavelengths. As can be seen, at photon energies above the Si bandgap (850 nm pump), both devices demonstrate power exponents close to 1, yet the data obtained for the laser-patterned device show a slightly sub-linear trend, evidently owing to enhanced surface recombination. At the same time, the photoresponse of the laser-patterned device at sub-bandgap photon energy (at 1250 nm) can be characterized by a relatively sharp, yet clearly sub-linear power dependence with an exponential factor of 0.74, while the pristine Si p-n junction is completely insensitive to such illumination due to the absence of the defect states.

Figure 6: (a) Schematically illustrated vertical p-n junction Si photodetector with a laser-patterned active area. (b) Optical photograph of the fabricated device. (c) Spectral dependence of the photoresponse (markers) of the pristine (R\(_{flat}\)) and laser-patterned (R\(_{spiky}\)) devices. The solid curve demonstrates the spectral dependence of the photoresponse ratio \(r=R_{spiky}/R_{flat}\). (d) Normalized photoresponse as a function of the laser pump power measured for the pristine device at 850 nm pump wavelength (gray markers) as well as for the laser-patterned device at 850 nm (red markers) and 1250 nm (orange markers) pump wavelengths.

## III Conclusions

Here, we demonstrated how the simultaneous action of electromagnetic and hydrodynamic processes driven by ultrashort multi-pulse laser exposure at the monocrystalline Si interface can be used to control the resulting micro- and nanoscale surface morphology, spanning from the regular LIPSSs to quasi-ordered (or randomly arranged) spikes. Our studies clearly showed that the electromagnetic processes (such as polarization-dependent excitation and interference of the surface waves) dominating at the initial steps of the morphology evolution define the onset of the hydrodynamic processes and the resulting morphology upon promoted multi-pulse exposure. This results in the appearance of quasi-regular morphologies having the surface features defined by the hydrodynamic instability and oriented parallel to the polarization vector of the incident radiation. Our studies add an important brick to the fundamental understanding of the interaction of ultrashort laser pulses with silicon, which has long been used as a key benchmark material for such studies. In particular, our results clarify the origin of the formation of the widely reported supra-wavelength surface features following the polarization direction of the incident radiation. From the application point of view, our work highlights the practical benefits of liquid-assisted laser processing of Si. In sharp contrast to the well-studied processing of Si under ambient environment, methanol not only reduces the efficiency of Si oxidation and removes debris from the nanotextured surface without any post-processing steps, but also facilitates the formation of the regular plasmon-mediated subwavelength LIPSSs with the periodicity \(\approx\lambda/2\) as well as of denser quasi-ordered spiky features. Considering the wide applicability of silicon in optoelectronics, all-dielectric photonics and optical sensors, such an advanced maskless and straightforward nanotexturing procedure is expected to be in high demand for future applications. As a good demonstration of the practical applicability of the developed approach, using direct laser patterning without any post-annealing and cleaning steps, we realized a self-powered vertical p-n junction Si photodetector with a twice larger photoresponse at sub-bandgap photon energies and an expanded NIR operation range.

## IV Methods

### Laser patterning

Monocrystalline Si wafers were placed into a quartz cuvette filled with methanol and directly processed with second-harmonic (515 nm) fs-laser (200 fs) pulses generated at a 1 kHz pulse repetition rate by a regeneratively amplified laser system (Pharos, Light Conversion). For certain experiments, the as-generated linearly polarized Gaussian-shape laser beam was shaped into a CV beam by passing through a radial polarization converter (\(S\)-waveplate; [72]). The laser texturing was performed through the static 4-mm thick layer of isopropanol with a Gaussian/CV beam focused onto the sample surface with a dry microscope objective having a numerical aperture (NA) of 0.13. The size of the focal spot \(D_{th}\) in the case of the Gaussian-shape beam was evaluated using the Liu method [73]. More specifically, we plotted the squared diameter of the single-pulse Si surface modification as a function of the incident pulse energy on a logarithmic scale, \(D^{2}\) vs ln[\(E\)]. The beam size \(D_{th}\approx\)5 \(\mu\)m was obtained from the slope of the linear fit of the obtained dependence. In the case of the CV beam generated by the \(S\)-waveplate, we used the Lambert function \(W\) to calculate the inner and outer radii of the focal-plane annular distribution [74]: \[r_{inner}=\sqrt{\frac{m}{2}}\sqrt{-W\left(0,-\frac{\left(e^{-m-2}m^{m}\right)^ {1/m}}{m}\right)}\omega, \tag{3}\] and \[r_{outer}=\sqrt{\frac{m}{2}}\sqrt{-W\left(-1,-\frac{\left(e^{-m-2}m^{m}\right) ^{1/m}}{m}\right)}\omega. \tag{4}\] Then, the ratio of the areas of the Gaussian beam \(S_{gauss}=\pi D_{th}^{2}/4\) and the CV beam \(S_{CV}=\pi\left(r_{outer}^{2}-r_{inner}^{2}\right)\) for the focusing conditions (\(NA\)=0.13) can be assessed to be \(S_{CV}/S_{gauss}=2.2\).

### Characterization

Structural and morphological features of the laser-patterned surfaces were characterized using an SEM device (Ultra 55+, Carl Zeiss) equipped with an EDX modality for chemical analysis. TEM studies (Titan 60-300, Thermo Fisher) were carried out with a thin lamella prepared using a dual-beam (electron/ion) device (Helios 450, Thermo Fisher). The preparation steps include (i) coverage of the site of interest with a protective Pt layer, (ii) FIB cutting and thinning down to electron transparency, and (iii) lamella extraction for TEM imaging. Micro-Raman spectroscopy (Integra Spectra II, NT-MDT) was carried out with a CW laser pump centered at 473 nm.
Linearly polarized laser radiation was focused onto the sample surface with a dry microscope objective having NA=0.28. Raman spectra were averaged over at least 20 surface sites for each laser-patterned area. Fast Fourier Transform (FFT) analysis of the SEM images of the LIPSSs and spiky Si produced at variable irradiation conditions was performed with the ImageJ package. The same software was applied to evaluate the average density of the self-organized structures using the Find Maxima tool. Finite-difference time-domain simulations were carried out to calculate the absorption (E\({}^{2}\cdot\)Im[\(\epsilon\)], where E is an EM-field amplitude and Im[\(\epsilon\)] is the imaginary part of the dielectric permittivity of Si) of the incident laser pulses by the LIPSSs with variable morphologies. In particular, we considered the smooth Si surface having several pits with a semi-elliptical profile (depth of 200 nm and width of 150 nm) arranged at a period of 260 nm, which is fully consistent with the experimental data. Destabilizations of the grating morphology were modeled either as local elliptical-shape protrusions decorating the ridges or as larger protrusions filling the space between neighboring ridges (Fig. 1e). The Si grating was placed in a background medium with a refractive index of 1.33, corresponding to that of methanol [75], and exposed from the top by a Gaussian-shape (1/e-diameter of 3 \(\mu\)m) pulse having the polarization perpendicular with respect to the ridge orientation. The computational volume was limited by perfectly matched layers. The reflectivity of the laser-patterned surface in the visible (0.5 - 0.7 \(\mu\)m) and SWIR (1 - 3 \(\mu\)m) spectral ranges was assessed using an integrating-sphere spectrometer (Cary 5000, Varian) and a Fourier-transform IR spectrometer (Vertex 80, Bruker) coupled to an IR microscope (Hyperion 2000, Bruker), respectively.

### Fabrication and characterization of a Si photo-detector with a vertical p-n junction

A monocrystalline n-type (resistivity of \(0.1\,\mathrm{Ohm}\cdot\mathrm{cm}\)) Si wafer with a lateral size of \(6\times 6\) mm\({}^{2}\) and an epitaxially grown 2 \(\mu\)m thick p-type Si top layer (resistivity of \(10\,\mathrm{Ohm}\cdot\mathrm{cm}\)) was laser-textured following the previously described procedure to create a destabilized LIPSS morphology within certain surface areas (Fig. 6a,b). Then, a \(200\) nm thick Al film was magnetron-sputtered through a shadow mask to create a patched square-loop top electrode. Similarly, a \(2\) nm Sb film covered with a \(100\) nm thick Au layer was deposited to form a filled square bottom electrode. The resulting device was annealed in vacuum (base pressure of \(5\times 10^{-6}\) Torr) for \(30\) min at \(400\) \({}^{\circ}\)C to improve surface adhesion and reduce the contact resistance. The same procedure was used to produce a non-patterned (pristine) reference photodetector. The pristine and patterned Si photodetectors were chip-packaged using ultrasonic welding with Al microwires. The photo-responsivity of the Si p-n junction devices was probed under room temperature and zero bias voltage using spectrally tunable laser radiation generated at an \(80\) MHz pulse repetition rate by a femtosecond optical parametric oscillator (TOPOL, Avesta Project LTD.). The output laser beam was shaped by a lens to yield a pump spot diameter of \(\approx 60\) \(\mu\)m. Different sites of the laser-patterned and pristine devices were probed for statistical averaging of the results.
The incident power at the selected wavelength within the spectral range from \(700\) to \(1300\) nm was measured by a Ge optical power meter (Newport). The photocurrent was directly captured from the illuminated devices by a Keithley sourcemeter. The photoresponse was calculated as the generated photocurrent divided by the power of the incident light. Attenuated laser lines from the TOPOL setup were used to register the photoresponse \(versus\) the incident optical irradiation power for the patterned and pristine Si p-n junction photodetectors.

## Conflict of interest

The authors declare no competing financial interests.

## Acknowledgements

This work was supported by the Russian Science Foundation (Grant no. 21-79-20075).
2305.04689
Improving the Efficiency of MIMO Simulations in ns-3
Channel modeling is a fundamental task for the design and evaluation of wireless technologies and networks, before actual prototyping, commercial product development and real deployments. The recent trends of current and future mobile networks, which include large antenna systems, massive deployments, and high-frequency bands, require complex channel models for the accurate simulation of massive MIMO in mmWave and THz bands. To address the complexity/accuracy trade-off, a spatial channel model has been defined by 3GPP (TR 38.901), which has been shown to be the main bottleneck of current system-level simulations in ns-3. In this paper, we focus on improving the channel modeling efficiency for large-scale MIMO system-level simulations. Extensions are developed in two directions. First, we improve the efficiency of the current 3GPP TR 38.901 implementation code in ns-3, by allowing the use of the Eigen library for more efficient matrix algebra operations, among other optimizations and a more modular code structure. Second, we propose a new performance-oriented MIMO channel model for reduced complexity, as an alternative model suitable for mmWave/THz bands, and calibrate it against the 3GPP TR 38.901 model. Simulation results demonstrate the proper calibration of the newly introduced model for various scenarios and channel conditions, and exhibit an effective reduction of the simulation time (up to 16 times compared to the previous baseline) thanks to the various proposed improvements.
Matteo Pagin, Sandra Lagén, Biljana Bojović, Michele Polese, Michele Zorzi
2023-05-08T13:18:10Z
http://arxiv.org/abs/2305.04689v1
# Improving the Efficiency of MIMO Simulations in ns-3

###### Abstract

Channel modeling is a fundamental task for the design and evaluation of wireless technologies and networks, before actual prototyping, commercial product development and real deployments. The recent trends of current and future mobile networks, which include large antenna systems, massive deployments, and high-frequency bands, require complex channel models for the accurate simulation of massive MIMO (m-MIMO) in millimeter wave (mmWave) and Terahertz (THz) bands. To address the complexity/accuracy trade-off, a spatial channel model has been defined by 3GPP (TR 38.901), which has been shown to be the main bottleneck of current system-level simulations in ns-3. In this paper, we focus on improving the channel modeling efficiency for large-scale MIMO system-level simulations. Extensions are developed in two directions. First, we improve the efficiency of the current 3GPP TR 38.901 implementation code in ns-3, by allowing the use of the Eigen library for more efficient matrix algebra operations, among other optimizations and a more modular code structure. Second, we propose a new performance-oriented MIMO channel model for reduced complexity, as an alternative model suitable for mmWave/THz bands, and calibrate it against the 3GPP TR 38.901 model. Simulation results demonstrate the proper calibration of the newly introduced model for various scenarios and channel conditions, and exhibit an effective reduction of the simulation time (up to 16 times compared to the previous baseline) thanks to the various proposed improvements.

channel model, mmWave, Terahertz, network simulation, ns-3

## 1. Introduction

Mobile networks play a key role in our society and are poised to become ever more important in the coming years. In fact, the International Telecommunications Union (ITU) foresees that in 2030 and beyond wireless broadband will be ubiquitous, and will be required to provide connectivity not only to humans, but also to a plethora of intelligent devices such as wearables, road vehicles, Unmanned Aircraft Systems (UASs) and robots (Han et al., 2015). Moreover, novel use cases such as holographic communications, Extended Reality (XR) and tactile applications will further exacerbate the throughput and latency requirements which were posed by enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low-Latency Communications (URLLC) (Han et al., 2015). To meet these goals, future cellular systems will further evolve 5th generation (5G) networks, which have introduced a flexible, virtualized architecture, the support for mmWave communications and the use of m-MIMO technologies (Beng et al., 2015). Notably, the research community is considering a more central role for mmWaves, a further expansion of the spectrum towards the THz band, and an Artificial Intelligence (AI)-native network design, with the goal of achieving autonomous data-centric orchestration and management of the network (Shen et al., 2015), possibly down to the air interface (Han et al., 2015). The THz and mmWave bands offer large chunks of untapped bandwidth which operators can leverage to meet the Tb/s peak rates that are envisioned by the ITU (Han et al., 2015).
However, this portion of the spectrum is plagued by unfavorable propagation characteristics, comprising a marked free-space propagation loss and susceptibility to blockages (Han et al., 2015; Lagen et al., 2015), which make it challenging to harvest its potential. Although the harsh propagation environment can be partially mitigated by using directional links and densifying network deployments (Shen et al., 2015), the support for mmWave and THz bands entails a major redesign not only of the physical layer, but of the whole cellular protocol stack (Shen et al., 2015). For instance, the intrinsic directionality of the communication requires ad hoc control procedures (Kumar et al., 2017), while the frequent transitions between Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) conditions call for an ad hoc transport layer design, such as novel Transmission Control Protocol (TCP) algorithms (Kumar et al., 2017). In addition, as the network progressively becomes increasingly complex and heterogeneous, the push for spectrum expansion will be coupled with an AI-native design which, thanks to the ongoing virtualization, will not be limited to the radio link level, but will encompass the orchestration of large scale deployments as well (Kumar et al., 2017). Nevertheless, how to design, test and eventually deploy management and orchestration algorithms is an open research challenge (Kumar et al., 2017). First, the training data must accurately capture the interplay of the whole protocol stack with the wireless channel. Furthermore, optimization frameworks such as Deep Reinforcement Learning (DRL) also call for preliminary testing in isolated yet realistic environments, with the goal of minimizing the performance degradation to actual network deployments (Kumar et al., 2017; Li et al., 2018). In these regards, system-level network simulators have a central role to play. Indeed, an end-to-end evaluation of algorithms and protocols becomes paramount when considering frequencies above 6 GHz, given the impact of their peculiar propagation characteristics on the whole protocol stack. At the same time, end-to-end simulators can also serve as both sources of training data for AI models, and testing platforms for preliminary evaluation of Machine Learning (ML) algorithms prior to their deployment in commercial networks. However, the suitability of end-to-end network simulators to these tasks largely depends on the accuracy of the channel model (Kumar et al., 2017) and on the scalability for realistically-sized deployments. In fact, system-level simulators generally abstract the actual link-level transmission via an error model, which maps the Signal-to-Interference-plus-Noise Ratio (SINR) of the wireless link to a packet error probability (Kumar et al., 2017). Eventually, the latter is used to determine whether the packet has been successfully decoded by the receiver. As a consequence, the accuracy of the simulator heavily depends on the reliability of the SINR estimation, especially when considering the mmWave and THz bands. The well-known ns-3 simulator features an implementation of the 3GPP channel model (Bogorici et al., 2016), which, according to the 3GPP, represents the state-of-the-art channel model for drop-based end-to-end simulations of devices operating at frequencies between 0.5 and 100 GHz. Despite its accuracy, the TR 38.901 channel model is particularly demanding from a computational point of view, and thus limits the scalability of the simulated scenarios.
At the same time, the simpler channel models which are found in analytical studies fail to capture the peculiar characteristics of mmWave and THz links. To fill this gap, in this paper we propose optimizations to the ns-3 implementation of the TR 38.901 channel model of (Kumar et al., 2017), both at the codebase and at the design level, which aim to provide wireless researchers with the tools for simulating future dense wireless scenarios in a computationally efficient manner. Specifically, we significantly improve the runtime of simulations involving the 3GPP TR 38.901 channel model (Bogorici et al., 2016) by porting the intensive linear algebra operations to the open-source library Eigen (Giannakis et al., 2017). To this end, we also design and implement a set of common linear algebra APIs, which increase the modularity of the spectrum module with respect to the underlying data structures and algorithms. Then, we propose a simplified channel model, based on (Bogorici et al., 2016), which aims to provide an additional order of magnitude of runtime reduction, at the cost of a slight accuracy penalty. Profiling results show that the support for Eigen, coupled with further TR 38.901 optimizations, leads to a decrease of up to 5 times in the simulation time of typical Multiple Input Multiple Output (MIMO) scenarios. Furthermore, the proposed performance-oriented channel model further improved the runtime of simulations, which now take as little as 6% of the time required by the full TR 38.901 channel model, with a negligible loss in accuracy. The remainder of the paper is organized as follows. Section 2 reports the state of the art on channel models. Sections 3 and 4 describe the contributions of this work, in terms of optimizations to the ns-3 implementation of the TR 38.901 framework and the design of a performance-oriented channel model, respectively. Finally, Section 5 presents benchmarks of the introduced optimizations and discusses the main use cases of these channel models, while Section 6 concludes the paper by mentioning possible future extensions of this work.

## 2. Related Work

Channel modeling is a fundamental task for the design, simulation, and evaluation of current and future wireless networks. It is especially relevant to perform system-level simulations to test new algorithms, procedures, and architectures, before going into real deployment/device implementations. In recent decades, the challenges of understanding the propagation at mmWave and THz frequencies with large antenna arrays and the use of MIMO have further motivated the channel modeling efforts in those frequency ranges (Kumar et al., 2017; Li et al., 2018). As a result, multiple channel measurement campaigns have been performed by the academic and industry communities (Kumar et al., 2017), leading to different families of channel models. The various channel models differ in their degree of simplicity and accuracy. They range from simple models that just consider a propagation loss component combined with Nakagami-m or Rayleigh fading but fail to capture the spatial dimension of the channel and the interactions with beamforming (Kumar et al., 2017), to deterministic models that are very accurate in specific scenarios but are much more complex and require a precise characterization of the environment (Kumar et al., 2017). To address the complexity-accuracy trade-off, the 3GPP has adopted a stochastic channel model for simulations of 5G and beyond networks (Bogorici et al., 2016).
Stochastic channel models are generic, thanks to their stochastic nature, but at the same time can model interactions with multiple-antenna arrays. Specifically, the 3GPP defined in TR 38.901 the spatial channel model for simulations that address frequency ranges from 0.5 GHz to 100 GHz (Bogorici et al., 2016), which is parameterized for various simulation scenarios, including indoor office, indoor factory, urban macro, urban micro, and rural macro. However, for system-level simulations of large-scale systems including multiple nodes and large antenna arrays, the 3GPP spatial channel model still introduces a significant overhead in terms of computational complexity.

Along these lines, a simplified channel model for the system-level simulation of MIMO wireless networks is proposed in (Kumar et al., 2017). Therein, the end-to-end channel gain is obtained as the sum of several loss and gain terms that account for large-scale phenomena such as path loss and shadowing, small-scale fading, and antenna and beamforming gains. Notably, the latter terms represent a fundamental component for studies concerning modern wireless systems. In particular, an accurate characterization of the antenna radiation pattern and of the effect of the presence of multiple radiating elements becomes extremely important when studying mmWave and THz frequencies. Following the model in (Wang et al., 2017), the combined antenna and beamforming gain can be computed according to (Zhou et al., 2018), the path loss and shadowing components can follow the 3GPP model in (Beng et al., 2018), and the small-scale fading can be sampled from various statistical distributions. For the small-scale fading, the authors of (Wang et al., 2017) propose to use a Nakagami-\(m\) distribution, which has been shown to provide a good fit with the 3GPP model, provided that the \(m\) parameter is appropriately chosen. Another option for small-scale fading modeling is the so-called Fluctuating Two-Ray (FTR) fading model presented in (Zhou et al., 2018), which models more accurately the fading that occurs at mmWaves.

The 3GPP TR 38.901 spatial channel model was included in ns-3 thanks to the efforts of Tommaso Zugno in the 2019 Google Summer of Code (Zhou et al., 2018), and later extended to address vehicular scenarios in (Zhou et al., 2018) and industrial scenarios in (Wang et al., 2017). The current spatial channel model implemented in ns-3 is very accurate for simulations in line with 3GPP specifications for a wide range of frequencies, but represents the main bottleneck in terms of computational complexity when considering large-scale simulations with many multi-antenna nodes, especially when these are equipped with large antenna arrays. This is because of the intrinsic complexity of the channel generation procedure prescribed by the 3GPP specifications, and of the need to deal with 3D channel matrix structures. The channel matrix in the ns-3 implementation of the 3GPP spatial channel model is a 3D structure whose dimensions depend on the number of transmit antennas, receive antennas, and clusters. Currently, in ns-3, the 3GPP channel model uses a vector of vectors of vectors, i.e., a triply nested std::vector, to represent 3D arrays such as the channel matrix.

The design of computationally efficient yet accurate channel models has been a topic of interest also in the Wireless Local Area Network (WLAN) space.
The authors of (Garfani et al., 2017) present a frequency-selective channel for WLANs, and use Exponential Effective SNR Mapping (EESM) Link-to-System (L2S) mapping to integrate their model with the ns-3 system-level Wi-Fi implementation. Moreover, they develop a framework which leverages cached statistical channel matrix realizations to directly estimate the effective Signal-to-Noise Ratio (SNR), thus further improving the computational efficiency of the model. Specifically, the latter is modeled as a parameterized log-SGN random variable. They extend their work in (Garfani et al., 2017) by accounting for the channel correlation over time. Moreover, (Garfani et al., 2017) compares statistical channel models for the 60 GHz band with the Quasi Deterministic (QD) Ray Tracer (RT) of (Beng et al., 2018).

In this paper, we summarize the efforts carried out by Matteo Pagin during the 2022 Google Summer of Code to further optimize the ns-3 code in two directions: 1) improving the efficiency of the code by allowing the use of the Eigen library, and 2) proposing a new performance-oriented MIMO channel model for reduced complexity in ns-3 large-scale simulations. First, we have improved the efficiency of the 3GPP spatial channel model in ns-3 by allowing the usage of Eigen to represent matrices, so that when Eigen is available the 3GPP channel matrix is represented as an std::vector of Eigen matrices. This already improves the performance of the current models. Second, we propose an alternative model, based on the FTR channel model (Zhou et al., 2018), in which the channel is characterized by a single scalar instead of 3D matrices, and we have calibrated this model to align with the 3GPP TR 38.901 spatial channel model for various scenarios and channel conditions. This model is especially useful to speed up ns-3 large-scale simulations, when simplicity is prioritized.

## 3. Efficient MIMO modeling with the Eigen library

The use of multiple antennas both at the transmitter and at the receiver, a fundamental feature of modern wireless systems, makes a scalar representation of the channel impulse response insufficient. Instead, MIMO channels are usually represented in the form of a complex matrix \(\mathbf{H}\in\mathbb{C}^{U\times S}\), whose elements depict the channel impulse response between the \(U\) and \(S\) radiating elements of the transmitting and receiving antenna arrays, respectively (Beng et al., 2018). This peculiarity significantly increases the computational complexity of MIMO channel models, compared to Single Input Single Output (SISO) ones, since the complex gain of the channel must be evaluated for each pair of transmit and receive antennas. Notably, previous analyses identified statistical channel models as the main bottleneck of system-level MIMO wireless simulations. In typical m-MIMO 5G scenarios, where the devices feature a high number of antennas, the channel matrix generation and the computation of the beamforming gain represent up to 90% of the simulation time (Zhou et al., 2018).

In light of these limitations, as the first of our contributions, we optimized the implementation of the 3GPP TR 38.901 model in ns-3 introduced in (Zhou et al., 2018). First, we observed that, as of ns-3.37, part of the trigonometric operations of the GetNewChannel method of the ThreeGppChannelModel class are unnecessarily repeated for each pair of transmitting and receiving radiating elements.
This represents a significant inefficiency, since the inputs of these functions, i.e., the angular parameters of the propagation clusters, depend on the cluster index only. Moreover, the standard library sin and cos functions are particularly demanding to evaluate. Therefore, we cached the trigonometric evaluations of these terms prior to the computation of \(\mathbf{H}\)'s coefficients, effectively reducing the complexity of the trigonometric operations from \(\mathcal{O}(U\times S\times N)\) to \(\mathcal{O}(N)\), where \(N\) is the number of propagation clusters.

Then, we focused on improving the algebra manipulations of the channel matrix performed in the ThreeGppSpectrumPropagationLossModel by introducing the support for the open-source library Eigen in ns-3. Eigen is a linear algebra C++ template library that offers fast routines for algebra primitives such as matrix multiplication, decomposition and space transformation (Garfani et al., 2017), and is used by many open-source frameworks such as TensorFlow. We set Eigen as an optional, external ns-3 dependency, with the goal of minimizing future code maintenance efforts, thus mimicking the support for other third-party libraries. To get Eigen, ns-3 users can either rely on package managers, i.e., install the package libeigen3-dev (eigen) on Linux (Mac) systems, or manually install the library by following the official instructions¹. Then, Eigen can be enabled via a custom flag defined in the macros-and-definitions.cmake file, and its presence in the system is shown to the user by exposing whether it has been found or not via the ns3-config-table.cmake file. The latter also defines the preprocessor definition HAVE_EIGEN3, which is used in the ns-3 source files to discern Eigen's availability. Finally, the linking of Eigen with the ns-3 source files is taken care of by the CMake configuration file provided by the library itself, as suggested in the related ns-3 guide.

To avoid requiring Eigen to be installed in the host system, we developed a common set of APIs over both the Eigen- and the Standard Template Library (STL)-based data structures and primitives. Thanks to this choice, the remainder of the spectrum code is completely abstracted with respect to the presence of the library. Given that most of the needed operators cannot be overloaded for STL C++ vectors (for instance, operator()), the common interface for both Eigen- and STL-based vectors and matrices has been implemented by defining ad hoc structs with custom operators. In particular, we defined:

* The complex vector type PhasedArrayModel::ComplexVector. This data structure is defined as an std::vector of std::complex<double> whenever Eigen is not installed, and as an Eigen vector of std::complex<double> otherwise. The set of APIs includes the operators [] and !=, which can be used to access the vector entries and to compare pairs of vectors, respectively. Additionally, we defined the STL-like methods size, norm and resize, which return the vector size and its \(\mathcal{L}^{2}\)-norm, and allow the user to resize the underlying container, respectively. These definitions follow the typical STL notation, which is supported by Eigen as well.
* The complex matrix type MatrixBasedChannelModel::Complex2DVector. In this case, the underlying type is a nested std::vector of std::complex<double> when Eigen is disabled, and an Eigen matrix whose entries are of type std::complex<double> otherwise. In this case, we aligned the notation to the APIs provided by Eigen.
Specifically, the matrix elements can be accessed via the operator (), which takes as arguments the row and column indices of the entry, while the method resize allows users to resize matrices by specifying the number of rows and columns. In turn, these can be accessed via the rows and columns methods, respectively.
* The 3D matrix type MatrixBasedChannelModel::Complex3DVector. This data structure is defined, regardless of Eigen's availability, as an std::vector of MatrixBasedChannelModel::Complex2DVector. In this case, the only method provided is MultiplyMatSubJetFtAndRightVec, which computes a product of the type \(\mathbf{w}_{T}\mathbf{H}\mathbf{w}_{R}^{T}\), where \(\mathbf{H}\in\mathbb{C}^{U\times S}\), \(\mathbf{w}_{T}\in\mathbb{C}^{1\times U}\) and \(\mathbf{w}_{R}\in\mathbb{C}^{1\times S}\). Notably, this computationally demanding evaluation, which is required for computing the beamforming gain in ThreeGppSpectrumPropagationLossModel, leverages Eigen's optimized algorithms whenever the library is installed in the host system.

Finally, we remark that the support for Eigen in the ns-3 codebase can possibly be further extended to improve the efficiency of other linear algebra operations, such as the Singular Value Decomposition (SVD), which is used in the mmwave and nr modules to compute optimal beamformers, and the matrix-by-matrix multiplications needed for relayed channels (Sandar et al., 2016).

## 4. A Performance-Oriented MIMO Statistical Channel Model

The second approach we propose in this paper to reduce the computational complexity is a MIMO channel model for simulating large m-MIMO scenarios, implemented in the class TwoRaySpectrumPropagationLossModel. The goal of this auxiliary model is to offer a faster, albeit slightly less accurate, statistical channel model than the 3GPP TR 38.901 framework of (Sandar et al., 2016) by avoiding the computation of the complete channel matrix. In line with (Bogor et al., 2017), the frequency range of applicability of this model is \(0.5-100\) GHz, although the framework can possibly be extended to support higher frequencies as well.

The overall channel model design follows the approach of (Sandar et al., 2016), i.e., the end-to-end channel gain is computed by combining several loss and gain terms which account for both large- and small-scale propagation phenomena, and for the antenna and beamforming gains. In particular, let \(T\) be a device transmitting a signal \(x\) with power \(\mathrm{P}_{T}^{x}\), and \(R\) be another device in the simulation (which may or may not be the intended destination of \(x\)). The proposed model implements the PhasedArraySpectrumPropagationLossModel interface by estimating \(\mathrm{P}_{R}^{x}\), i.e., the power of \(x\) received at \(R\), as follows:

\[\mathrm{P}_{R}^{x}[dBm]=\mathrm{P}_{T}^{x}[dBm]-\mathrm{PL}_{T,R}[dB]+S_{T,R}[dB]+G_{T,R}[dB]+F_{T,R}[dB], \tag{1}\]

where the terms \(\mathrm{PL}_{T,R}\) and \(S_{T,R}\) represent the path loss and the shadowing, respectively, while \(G_{T,R}\) and \(F_{T,R}\) denote the antenna and beamforming gain and the small-scale fading, respectively. The remainder of this section describes in detail how each of these terms is computed.

### Path loss, Shadowing, and LoS Condition

The large-scale propagation phenomena are modeled according to the 3GPP TR 38.901 model (Bogor et al., 2017), since its ns-3 implementation of (Sandar et al., 2016) is not computationally demanding.
Nevertheless, the channel model can in principle be coupled with arbitrary classes extending the ChannelConditionModel interface. Specifically, we first determine the 3GPP scenario. Then, for each link we set the LoS condition in a stochastic manner, using the class extending ThreeGppChannelConditionModel which corresponds to the chosen scenario. Then, we compute the path loss using the 3GPP TR 38.901 formula

\[PL_{T,R}=A\log_{10}(d)+B+C\log_{10}(f_{C})\;[dB], \tag{2}\]

where \(d\) is the 3D distance between the transmitter and the receiver, \(f_{C}\) is the carrier frequency, and \(A\), \(B\) and \(C\) are model parameters which depend on the specific scenario and on the LoS condition. To account for the presence of blockages, an optional log-normal shadowing component \(S_{T,R}\) and an outdoor-to-indoor penetration loss term are added to \(PL_{T,R}\).

### Antenna and Beamforming Gain

The combined array and beamforming gain is computed using the approach of (Sandar et al., 2016). The proposed model supports the presence of multiple antenna elements at the transmitter and at the receiver, and arbitrary analog beamforming vectors and antenna radiation patterns. Therefore, ns-3 users can use this model in conjunction with any class that implements the AntennaModel interface. In this implementation, we focus on Uniform Planar Arrays (UPAs), although the methodology is general and can be applied to arbitrary antenna arrays.

Let \(\theta\) and \(\varphi\) be the relative zenith and azimuth angles between transmitter and receiver, respectively, and let \(\mathbf{w}\left(\theta_{0},\varphi_{0}\right)\) denote the beamforming vector pointing towards the steering direction \((\theta_{0},\varphi_{0})\). We denote with \(U=U_{h}U_{v}\), \(U_{h}\), and \(U_{v}\) the total, horizontal, and vertical number of antenna elements, respectively, and with \(d_{h},d_{v}\) their spacing in the horizontal and vertical dimensions of the array, respectively. Considering first isotropic antennas, the gain pattern of a UPA, in terms of received power relative to a single radiating element, can be expressed as (Bradner, 2003)

\[G_{T,R}^{iso}(\theta,\varphi)=\left|\mathbf{a}_{i}^{\mathsf{T}}(\theta,\varphi)\mathbf{w}\left(\theta_{0},\varphi_{0}\right)\right|^{2}, \tag{3}\]

where \(\mathbf{a}_{i}(\theta,\varphi)\) is the array response vector, whose generic entry \(m,n\) with \(m\in\{0,\dots,U_{v}-1\}\), \(n\in\{0,\dots,U_{h}-1\}\) reads

\[a_{i}(\theta,\varphi)_{m,n}=\exp\left(j\frac{2\pi}{\lambda}md_{v}\cos(\theta)\right)\times\exp\left(j\frac{2\pi}{\lambda}nd_{h}\sin(\theta)\sin(\varphi)\right). \tag{4}\]

In this work, which supports arbitrary antennas, each antenna element \((m,n)\) actually exhibits a generic radiation pattern \(g(\theta,\varphi)_{m,n}\) towards direction \((\theta,\varphi)\). In particular, we assume that \(g(\theta,\varphi)_{m,n}\) is constant for all the elements of the array, i.e., \(g(\theta,\varphi)_{m,n}\equiv g(\theta,\varphi)\). Accordingly, we compute \(G_{T,R}(\theta,\varphi)\) in the ComputeBeamformingGain function of the TwoRaySpectrumPropagationLossModel class as

\[G_{T,R}(\theta,\varphi)=G_{T,R}^{iso}(\theta,\varphi)\left|g(\theta,\varphi)\right|^{2}. \tag{5}\]

Figures 1a and 1b report \(G_{T,R}(\theta,\varphi)\) for the isotropic (IsotropicAntennaModel) and the 3GPP (ThreeGppAntennaModel) radiation patterns, respectively. A minimal sketch of this computation is given below.
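To make equations (3)-(5) concrete, the following is a minimal, self-contained sketch, not the actual ns-3 code: it builds the array response vector of (4) for a UPA, a conjugate-phase beamforming vector steered towards \((\theta_{0},\varphi_{0})\), and combines them as in (3) and (5). The function name ArrayResponse and all numerical values are illustrative assumptions.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <complex>
#include <iostream>

// Array response vector of a Uh x Uv UPA, as in eq. (4); the element
// spacings dH, dV are expressed in wavelengths (i.e., d / lambda).
Eigen::VectorXcd ArrayResponse(int uH, int uV, double dH, double dV,
                               double theta, double phi)
{
    const double pi = std::acos(-1.0);
    Eigen::VectorXcd a(uH * uV);
    for (int m = 0; m < uV; ++m)      // vertical index in eq. (4)
    {
        for (int n = 0; n < uH; ++n)  // horizontal index in eq. (4)
        {
            double psi = 2.0 * pi * (m * dV * std::cos(theta) +
                                     n * dH * std::sin(theta) * std::sin(phi));
            a(m * uH + n) = std::exp(std::complex<double>(0.0, psi));
        }
    }
    return a;
}

int main()
{
    const int uH = 4, uV = 4;         // 4x4 UPA
    const double dH = 0.5, dV = 0.5;  // lambda / 2 spacing
    const double pi = std::acos(-1.0);

    // Conjugate-phase beamforming vector steered towards (theta0, phi0),
    // normalized to unit norm so that the total power is preserved.
    const double theta0 = pi / 2.0, phi0 = 0.0;
    Eigen::VectorXcd w = ArrayResponse(uH, uV, dH, dV, theta0, phi0).conjugate()
                         / std::sqrt(static_cast<double>(uH * uV));

    // Gain of eq. (3) towards (theta, phi), scaled by the element pattern
    // |g|^2 as in eq. (5); here the elements are isotropic, so g = 1.
    const double theta = pi / 2.0, phi = 0.1;
    Eigen::VectorXcd a = ArrayResponse(uH, uV, dH, dV, theta, phi);
    const double gIso = std::norm(a.cwiseProduct(w).sum());  // |a^T w|^2
    const double gElem = 1.0;
    // The gain peaks at U = 16 when (theta, phi) = (theta0, phi0).
    std::cout << "G_TR = " << gIso * gElem << std::endl;
}
```

With this structure, swapping the isotropic pattern for the 3GPP one of [1, Section 7.3] only requires replacing the constant gElem with an evaluation of \(|g(\theta,\varphi)|^{2}\).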
It can be noted that our model abstracts the computation of the received signal power as a SISO keyhole channel (Bradner, 2003), which is then combined with the spatial antenna gain patterns at the transmitter/receiver to obtain the received power. This approximation is possibly imprecise when considering NLoS links, due to the lack of a dominant multipath component. To account for this limitation, we introduce a multiplicative correction factor \(\eta\) which scales the beamforming gain as \(G_{T,R}^{\prime}(\theta,\varphi)\equiv\eta G_{T,R}(\theta,\varphi)\). In line with (Kang and Wang, 2015), we set \(\eta=1/19\).

### Fast Fading

The widely used Rayleigh and Rician distributions fail, even in their generalized forms, to capture the intrinsic bimodality exhibited by mmWave scenarios (Bradner, 2003; Wang and Wang, 2015). Therefore, in our implementation we model fast fading using the more general FTR model of (Wang and Wang, 2015). This fading model assumes that the received signal comprises two dominant specular components and a mixture of scattered paths, thus modeling the amplitude of the received signal \(V_{r}\) as

\[V_{r}=\sqrt{\xi}\,V_{1}\exp(j\phi_{1})+\sqrt{\xi}\,V_{2}\exp(j\phi_{2})+X+jY, \tag{6}\]

where \(\phi_{1}\), \(\phi_{2}\) are statistically independent random phases, distributed as \(\phi_{i}\sim\mathcal{U}\left[0,2\pi\right]\). \(X\) and \(Y\) are independent Gaussian random variables, i.e., \(X,Y\sim\mathcal{N}(0,\sigma^{2})\), which represent the diffuse component of the received signal, assumed to be the superposition of multiple weak scattered waves with independent phases. Finally, \(\xi\) is a unit-mean Gamma-distributed random variable with rate \(m\) and Probability Density Function (PDF)

\[f_{\xi}(u)=\frac{m^{m}u^{m-1}}{\Gamma(m)}\exp(-mu). \tag{7}\]

In our implementation, \(F_{T,R}=|V_{r}|^{2}\) is sampled via the GetFtrFastFading function of the TwoRaySpectrumPropagationLossModel class. The FTR fading model is usually expressed as a function of the Gamma rate \(m\) and the auxiliary parameters

\[K\doteq\frac{V_{1}^{2}+V_{2}^{2}}{2\sigma^{2}}, \tag{8}\]

\[\Delta\doteq\frac{2V_{1}V_{2}}{V_{1}^{2}+V_{2}^{2}}\in\left[0,1\right], \tag{9}\]

where \(K\) represents the ratio of the power of the specular components with respect to the diffuse ones, while \(\Delta\) denotes how similar the received powers of the specular components are. By tuning these parameters, a high degree of flexibility can be achieved. Notably, the choice \(\Delta=0\) effectively yields a Rician-distributed signal amplitude (Wang and Wang, 2015). A minimal sketch of how a fading realization can be drawn from (6) is reported below.

#### 4.3.1. Calibration

In our work, we calibrated the \(V_{1}\), \(V_{2}\) and \(m\) parameters of the FTR fading model using the full 3GPP TR 38.901 channel model as a reference. In particular, we first obtained the statistics of the small-scale fading of the 3GPP model, using an ad hoc calibration script (three-gpp-two-ray-channel-calibration.cc). The script produces a collection of channel gain samples obtained by using the ThreeGppSpectrumPropagationLossModel and the ThreeGppChannelModel classes, and neglecting the beamforming gain, path loss, shadowing and blockages.

Figure 1. Overall Array and Beamforming Gain of a Uniform Planar Array, for Isotropic and 3GPP [1, Section 7.3] Radiating Elements and {1x1, 2x2, 4x4, 8x8} UPAs. The Steering Direction is Fixed to \((\theta_{0},\varphi_{0})=(0^{\circ},0^{\circ})\), and \(\theta\equiv 0^{\circ}\)
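As an illustration of (6)-(8), the following minimal sketch (not the actual GetFtrFastFading implementation) draws one fading realization \(F_{T,R}=|V_{r}|^{2}\), assuming the calibrated parameters \(V_{1}\), \(V_{2}\), \(m\) and \(\sigma\) are given; the function name and the parameter values are illustrative only.

```cpp
#include <cmath>
#include <complex>
#include <iostream>
#include <random>

// One realization of F = |V_r|^2 following eq. (6): xi ~ Gamma with shape m
// and scale 1/m (unit mean, matching the PDF of eq. (7)), independent phases
// uniform in [0, 2*pi), and a complex Gaussian diffuse component X + jY.
double SampleFtrFading(double v1, double v2, double m, double sigma,
                       std::mt19937& rng)
{
    const double pi = std::acos(-1.0);
    std::uniform_real_distribution<double> phase(0.0, 2.0 * pi);
    std::gamma_distribution<double> gamma(m, 1.0 / m);
    std::normal_distribution<double> diffuse(0.0, sigma);

    const double xi = gamma(rng);
    const std::complex<double> vr =
        std::sqrt(xi) * v1 * std::exp(std::complex<double>(0.0, phase(rng))) +
        std::sqrt(xi) * v2 * std::exp(std::complex<double>(0.0, phase(rng))) +
        std::complex<double>(diffuse(rng), diffuse(rng));
    return std::norm(vr);  // squared magnitude of V_r
}

int main()
{
    std::mt19937 rng(42);
    // Illustrative parameters only; note K = (v1^2 + v2^2) / (2 sigma^2),
    // eq. (8), and Delta = 2 v1 v2 / (v1^2 + v2^2), eq. (9).
    const double v1 = 1.0, v2 = 0.8, m = 5.0, sigma = 0.2;
    for (int i = 0; i < 3; ++i)
        std::cout << SampleFtrFading(v1, v2, m, sigma, rng) << "\n";
}
```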
From these samples, we isolate the variation around the mean received power caused by the small-scale fading only. A separate set of these samples has been retrieved for both LoS and NLoS channel conditions, for the different propagation scenarios of (Bogorst et al., 2016), and for a set of carrier frequencies ranging from 0.5 to 100 GHz. However, a preliminary evaluation of the obtained data showed a negligible dependence of the small-scale fading on the carrier frequency, as can be observed in Figure 2. Therefore, we calibrated the FTR parameters considering only the channel condition and the propagation scenario.

The small-scale fading samples have been used to estimate the \(\Delta\), \(K\) and \(m\) FTR parameters, and then to derive analytically the values of \(V_{1}\) and \(V_{2}\) yielding the fading realizations that are the closest (in a goodness-of-fit sense) to the TR 38.901 model. To this end, we defined a discrete grid of FTR parameters, spanning their whole domain, and considered the corresponding set of parameterized FTR distributions. To find the best matching one, we measured the distance between each of these distributions and the 3GPP reference curves by using the Anderson-Darling goodness-of-fit test (Anderson, 1977). This test is used to discern whether a sorted collection of \(n\) samples \(\{Y_{1},\ldots,Y_{n}\}\) originates from a specific distribution, by evaluating the test statistic (Anderson, 1977)

\[A^{2}=-n-S(\mathcal{F}), \tag{10}\]

where

\[S(\mathcal{F})=\sum_{i=1}^{n}\frac{2i-1}{n}\left[\ln\mathcal{F}(Y_{i})+\ln\left(1-\mathcal{F}(Y_{n+1-i})\right)\right], \tag{11}\]

and \(\mathcal{F}\) is the Cumulative Distribution Function (CDF) of the target distribution. In the standard Anderson-Darling test, \(A^{2}\) is then compared to a pre-defined critical value to validate the hypothesis. Instead, in our work we find the FTR distribution \(\mathcal{F}_{m,K,\Delta}\) which yields the lowest \(S\). Specifically, for each combination of propagation scenario, LoS condition and corresponding samples \(\{Y_{1},\ldots,Y_{n}\}\) we find

\[\mathcal{F}_{m^{\star},K^{\star},\Delta^{\star}}\doteq\operatorname*{argmin}_{m,K,\Delta}S(\mathcal{F}_{m,K,\Delta}). \tag{12}\]

Finally, we exported the calibrated FTR parameters into ns-3, by storing them in SIM_PARAMS_TO_FTR_PARAMS_TABLE, i.e., an std::map which associates the propagation scenario and condition to the corresponding best-fitting FTR parameters. We remark that this calibration process represents a pre-computation step which needs to be done only once. Indeed, when running a simulation with this channel model, the FTR parameters are simply retrieved from the pre-computed lookup table by the GetFtrParameters function. Nevertheless, for the sake of reproducibility and maintainability of the code, we provide this functionality in the Python script two-ray-to-three-gpp-ch-calibration.py.

## 5. Benchmarks, Examples and Use Cases

In this section, we provide an example of how to use the performance-oriented channel model presented above, in conjunction with the New Radio (NR) (Krishnan et al., 2017) module, to simulate 5G MIMO networks. Moreover, we present benchmarks which quantify the simulation time reduction achieved with this work, and we outline some possible use cases.

### Examples and Benchmarks

We demonstrate how to use the performance-oriented channel model in the cttc-nr-demo-two-ray script, i.e., a custom version of the cttc-nr-demo example which is included in the NR module.
The script deploys \(N_{gNB}\) 5G NR base stations, along with \(N_{UE}\) users in each cell. Each User Equipment (UE) uploads data using two Bandwidth Parts (BWPs) operating at 28 and 30 GHz, respectively. Both base stations and user terminals feature UPAs with multiple radiating elements. Most simulation parameters can be tuned by ns-3 users. Notably, the script provides the possibility to choose whether to use the 3GPP TR 38.901 channel model of (Krishnan et al., 2017) or the FTR-based channel model proposed in this work. In such regard, the use of the TwoRaySpectrumPropagationLossModel, instead of the TR 38.901 one, is achieved by:

1. Setting the TypeId of the SpectrumPropagationLossModel factory to TwoRaySpectrumPropagationLossModel;
2. Creating an instance of the TwoRaySpectrumPropagationLossModel class using the above factory, and setting the corresponding pointer as the SpectrumPropagationLossModel of both BWPs;
3. Setting the attribute Frequency of the TwoRaySpectrumPropagationLossModel instance to the BWP carrier frequency;
4. Specifying the 3GPP propagation scenario by setting the attribute Scenario; and
5. Creating and setting the ChannelConditionModel by using the ChannelConditionModel attribute of the TwoRaySpectrumPropagationLossModel class.

On the other hand, the Eigen optimizations simply require users to have the corresponding library installed in their system, and to enable Eigen when configuring ns-3, using the flag enable-eigen.

We validated our contributions by benchmarking the simulation times exhibited by the above simulation script, which depicts a typical MIMO 5G NR scenario. To such end, we varied the number of Next Generation Node Base (gNB) antennas and UEs deployed, and we timed 100 simulation runs for each parameter combination.

Figure 2. Small-scale Fading Gain Statistics for the UMi Propagation Scenario Versus the Carrier Frequency \(f_{C}\), for both LoS and NLoS Channel Conditions

Figure 3 reports the ratio of the median simulation time achieved when using the Eigen-based optimizations to the same metric obtained using the vanilla ns-3.37. It can be seen that the matrix multiplication routines offered by Eigen can significantly reduce simulation times. For instance, a 5-fold reduction of the simulation time is achieved when equipping gNBs with 256 radiating elements. Similarly, Figure 4 depicts the ratio of the median simulation time obtained by using the FTR-based channel model to that of the 3GPP TR 38.901 model with Eigen disabled. In this case the computational complexity improvement is even more dramatic, with simulations completing in as little as 6% of the time required by the 3GPP model implementation of (Wang et al., 2017). As a reference, the median simulation time obtained on an Intel i5-6700 processor system, before the merge of this work and for \(\{2,4,8\}\) users, is \(\{64.7,210.5,666.6\}\) (Beng et al., 2017), respectively.

Finally, we also computed (using the same simulation script, i.e., cttc-nr-demo-two-ray) the SINR statistics achieved by the proposed FTR-based model, and compared them to those obtained using the model of (Wang et al., 2017). As can be seen in Figure 5, the two models provide similar results. Indeed, a non-negligible difference can be found only in the case of the InH-OfficeMixed propagation scenario. We remark that all the results presented in this section can be reproduced by using the SEM (Wang et al., 2017) scripts which we provide².
Footnote 2: https://gitlab.com/pagmatt/ns-3-dev/-tree/gsoc-wns3

### Use Cases

The main goal of both the performance-oriented channel model and the optimizations to the 3GPP TR 38.901 model is to enable system-level simulations of large-scale MIMO scenarios, for which the implementation of (Wang et al., 2017) exhibits a prohibitive computational complexity. Specifically, our contributions allow ns-3 users to simulate wireless deployments where the devices feature antenna arrays with hundreds of radiating elements or more, and/or the number of communication endpoints is particularly high. For example, the modifications presented in this work can be used in the NR and mmwave (Wang et al., 2017) modules (which both already support the proposed channel models) to simulate massive MIMO 5G NR networks. Notably, a preliminary version of the Eigen port has been used in conjunction with the mmwave (Wang et al., 2017) module to simulate 5G networks aided by Intelligent Reflective Surfaces (IRSs), i.e., devices which feature up to \(100\times 100\) reflecting elements (Wang et al., 2017). Moreover, since the supported frequency range is \(0.5-100\) GHz, this encompasses not only terrestrial 5G and Long Term Evolution (LTE) deployments, but also most non-terrestrial networks and IEEE Radio Access Technologies (RATs). Finally, the proposed TwoRaySpectrumPropagationLossModel can be further extended to support frequencies above 100 GHz using reference fading and path loss statistics.

Figure 3. Ratio of the Median Simulation Times After the Merge of this Work with the Eigen Integration (\(T_{A}^{3GPP}\)) and as per ns-3.37 (\(T_{B}\)), when Using the 3GPP Channel Model of (Beng et al., 2017)

Figure 4. Ratio of the Median Simulation Times Using the Performance-Oriented Channel Model Presented in this Work (\(T_{A}^{TR}\)) and the 3GPP Channel Model of (Beng et al., 2017) After the Merge of this Work. In this Case, Eigen is Disabled

Figure 5. ECDF of the SINR Obtained Using the 3GPP Channel Model of (Beng et al., 2017), and the Performance-Oriented Channel Model Presented in this Work, for Different Propagation Scenarios

## 6. Conclusions and Future Work

In this paper, we presented a set of optimizations concerning the simulation of MIMO wireless channels in ns-3. First, we introduced the support for the linear algebra library Eigen in ns-3, and reduced the computational complexity of the channel matrix generation procedure by avoiding the unnecessary repetition of trigonometric evaluations. Then, we designed and implemented in ns-3 a performance-oriented statistical channel model based on the FTR fading model, which further reduces the simulation time of MIMO scenarios. Profiling results showed that, thanks to this work, the simulation of MIMO deployments in ns-3 using the 3GPP TR 38.901 channel model takes as little as 20% of the original time. Furthermore, whenever the complexity of the simulations represents a major bottleneck, ns-3 users are now given the possibility of using an additional, auxiliary channel model, which achieves a further reduction in simulation time at the cost of a negligible accuracy penalty with respect to the full 3GPP TR 38.901 model. As part of our future work, we plan to study more refined beamforming gain correction factors, using the 3GPP statistical channel model as a reference, and possibly making the estimation of this term scenario-dependent.
Moreover, we envision designing more efficient storage/access data structures and linear algebra operations for 3D matrices, by better leveraging Eigen in this context as well. Finally, we will consider using Single Instruction, Multiple Data (SIMD) operations for speeding up the evaluation of trigonometric functions, and caching the beamforming gain in the TwoRaySpectrumPropagationLossModel class to further reduce the simulation time of MIMO scenarios in ns-3.

###### Acknowledgements.

The work of Matteo Pagin was partially funded by the Google Summer of Code 2022 program. The work of Michele Polese is partially supported by the U.S. NSF Grant CNS 2225590. CTTC authors have received funding from Grant PID2021-126431OB-I00, funded by MCIN/AEI/10.13039/501100011033 and "ERDF A way of making Europe", and from the TSI-063000-2021-56/57 6G-BLUR project by the Spanish Government. This work was partially supported by the European Union under the Italian National Recovery and Resilience Plan (NRRP) of NextGenerationEU, partnership on "Telecommunications of the Future" (PE0000001 - program "RESTART"). Moreover, the authors would like to thank Tom Henderson, Eduardo Almeida, and Gabriel Ferreira for their useful suggestions and support during this work.
2310.18016
Synchronization and optimization of Large Eddy Simulation using an online Ensemble Kalman Filter
An online Data Assimilation strategy based on the Ensemble Kalman Filter (EnKF) is used to improve the predictive capabilities of Large Eddy Simulation (LES) for the analysis of the turbulent flow in a plane channel, $Re_\tau \approx 550$. The algorithm sequentially combines the LES prediction with high-fidelity, sparse instantaneous data obtained from a Direct Numerical Simulation (DNS). It is shown that the procedure provides an augmented state which exhibits higher accuracy than the LES model and it synchronizes with the time evolution of the high-fidelity DNS data if the hyperparameters governing the EnKF are properly chosen. In addition, the data-driven algorithm is able to improve the accuracy of the subgrid-scale model included in the LES, the Smagorinsky model, via the optimization of a free coefficient. However, while the online EnKF strategy is able to reduce the global error of the LES prediction, a discrepancy with the reference DNS data is still observed because of structural flaws of the subgrid-scale model used.
Lucas Villanueva, Karine Truffin, Marcello Meldi
2023-10-27T09:44:56Z
http://arxiv.org/abs/2310.18016v1
# Synchronization and optimization of Large Eddy Simulation using an online Ensemble Kalman Filter

###### Abstract

An online Data Assimilation strategy based on the Ensemble Kalman Filter (EnKF) is used to improve the predictive capabilities of Large Eddy Simulation (LES) for the analysis of the turbulent flow in a plane channel, \(Re_{\tau}\approx 550\). The algorithm sequentially combines the LES prediction with high-fidelity, sparse instantaneous data obtained from a Direct Numerical Simulation (DNS). It is shown that the procedure provides an augmented state which exhibits higher accuracy than the LES model and it synchronizes with the time evolution of the high-fidelity DNS data if the hyperparameters governing the EnKF are properly chosen. In addition, the data-driven algorithm is able to improve the accuracy of the subgrid-scale model included in the LES, the Smagorinsky model, via the optimization of a free coefficient. However, while the online EnKF strategy is able to reduce the global error of the LES prediction, a discrepancy with the reference DNS data is still observed because of structural flaws of the subgrid-scale model used.

DA, EnKF, CONES, Synchronization, SGS model optimization

## I Introduction

Among the state-of-the-art tools in Computational Fluid Dynamics (CFD) for the analysis of complex flow configurations, the Large Eddy Simulation (LES) [1; 2] is arguably the most investigated strategy of the last decades. LES relies on the application of statistical hypotheses related to turbulence theory to filter out the smallest physical scales of motion, so that the number of degrees of freedom to be simulated is drastically reduced when compared with Direct Numerical Simulation (DNS). The effects of such filtered eddies and of their interactions with the resolved flow are taken into account by a specific SubGrid-Scale (SGS) closure.

One of the most interesting features of LES is that it can naturally represent the unsteady, three-dimensional features of the flow. This key property, which most of the closures used to simulate turbulent flows do not provide, is essential, for example, for the prediction of extreme events. These rare occurrences must be fully taken into account in industrial applications, and they are observed in a large spectrum of configurations: internal flows, for the study of combustion cyclic variability [3; 4], non-cyclic phenomena [5; 6], or direct spray injection and aerodynamics in transient combustion engines [7]; and external flows, for wind / urban engineering [8; 9].

The representation of instantaneous features of the flow also exhibits a great potential for LES applications in the framework of Industry 4.0 [10; 11]. Within this digital revolution, envisioned applications predict and control real configurations, usually referred to as the physical twin, using a numerical counterpart, the digital twin [12; 13]. Most studies in the literature for fluid mechanics couple the physical system with reduced-order models or low-rank CFD [14; 15; 16; 17; 18; 19], and thus the communication and control are limited to statistical macro-features of the flow. Applications of LES in this context are potentially groundbreaking, because the real-time coupling of a real flow with LES is consistent in terms of physical representation. A successful implementation of a LES-based digital twin could potentially anticipate extreme events via numerical simulation and prevent catastrophic occurrences for the physical twin.
However, three barriers must be lifted to see the fruition of this futuristic application. First, the computational time required to perform LES is orders of magnitude larger than the physical time of applications of industrial interest. While this barrier seems unbeatable, new technologies such as quantum computing [20; 21] may provide the needed breakthrough in terms of computing power for extended digital twin applications. Second, low-rank CFD is affected by a bias associated with the turbulence / SGS closures, which often interact with the discretization error as well as with the explicit/implicit filtering for LES. These non-linear interactions between error sources may severely impact the accuracy of the results, as they are often very sensitive to the test case investigated. Therefore, general guidelines for applications are elusive. Third, CFD and in particular LES is extremely sensitive to perturbations and uncertainty in the initial and boundary conditions. Such perturbations, which also interact with the discretization error and the SGS modeling, may produce a significant instantaneous decorrelation of initially identical fields in very short times.

The second and third barriers listed, namely the accuracy of turbulence closures and the possibility for scale-resolved CFD to follow a physical flow with good correlation, have recently been investigated using data-driven methods. Uncertainty Quantification techniques have been extensively used to improve the predictive features of LES [22; 23; 24; 25] and, more recently, works in Data Assimilation [26; 27] optimized the behavior of SGS modeling in different numerical solvers [28; 29; 30]. In particular, Mons et al. [29] have performed an advanced optimization of the Smagorinsky model [31], one of the most used SGS closures in the literature, for the test case of the plane channel flow. In their work, the DA procedure relies on statistical features of the flow for optimization. While the results obtained significantly increase the global accuracy of the LES solver, this procedure is not fit for on-the-fly optimization in the framework of a digital twin.

A number of DA works have also targeted the numerical synchronization and reconstruction of instantaneous turbulent flows from limited data. In the DA formalism, this procedure can be referred to as _state augmentation_. Such studies have been relying on DNS [32] as well as on LES [28; 33; 34]. The main conclusion that can be drawn from these studies is that the efficiency of the synchronization of the flow depends on the number and positioning of the sensors, as well as on the DA technique used. Among the proposals in the literature, the Ensemble Kalman Filter [27; 35], which relies on an ensemble of numerical realizations to perform optimization and state reconstruction, appears to be a perfect candidate for this task. Thanks to its sequential features, which allow for an instantaneous, on-the-fly update of the physical field, this tool shows potential for future integration in digital twins.

The present work proposes an extensive analysis of the application of an EnKF-based tool to LES in terms of i) optimization of the SGS model and ii) state augmentation. The test case of investigation is the turbulent channel flow, which has already been analyzed using DA techniques [29; 32].
The novel point here is that both the optimization and the state augmentation are performed on-the-fly, progressively informing the LES ensemble members with time-resolved DNS data which are sampled at a limited number of sensors near the wall. The objective here is to assess the robustness of the procedure, both in terms of optimization and of flow reconstruction, when spatially and temporally sparse data are used. The on-the-fly coupling of the LES simulations with the DNS data is performed via CONES [36], a library developed by the team to perform online coupling between different solvers.

The article is structured as follows. In section II, the numerical tools used for the analysis are presented and discussed. These include the numerical LES solver, the EnKF methodology, and the platform CONES. In section III, the test case and the set-up of the DA runs are introduced. In section IV, the results of the optimization of the SGS model are discussed. In section V, the global impact of the DA methodology on the predicted instantaneous flow and the correlation with the available DNS data are investigated. Finally, in section VI, concluding remarks are drawn and future perspectives are outlined.

## II Numerical tools

All the numerical ingredients used to perform the present analysis are presented in this section. These tools include a description of the dynamic equations and the numerical solver used, details about the EnKF, and information about the platform CONES used to perform online DA.

### Dynamic equations and numerical solver

The Navier-Stokes equations for incompressible flows and a Newtonian fluid can be formulated as:

\[\frac{\partial u_{j}}{\partial x_{j}} = 0 \tag{1}\]

\[\frac{\partial\,u_{i}}{\partial t}+\frac{\partial\,u_{i}\,u_{j}}{\partial x_{j}} = -\frac{1}{\rho}\frac{\partial\,p}{\partial x_{i}}+\nu\frac{\partial^{2}\,u_{i}}{\partial x_{j}\partial x_{j}}+f_{i} \tag{2}\]

where \(\mathbf{u}=[u_{1},\,u_{2},\,u_{3}]=[u_{x},\,u_{y},\,u_{z}]\) is the velocity field, \(\rho\) is the density, \(p\) is the pressure, \(\nu\) is the kinematic viscosity and \(\mathbf{f}=[f_{1},\,f_{2},\,f_{3}]\) is a volume forcing. Einstein summation over the repeated index \(j\) is employed for the sake of conciseness. In the LES formalism, equations 1 and 2 are filtered to obtain a global reduction of the degrees of freedom of the physical system:

\[\frac{\partial\widetilde{u}_{j}}{\partial x_{j}} = 0 \tag{3}\]

\[\frac{\partial\,\widetilde{u}_{i}}{\partial t}+\frac{\partial\,\widetilde{u}_{i}\,\widetilde{u}_{j}}{\partial x_{j}} = -\frac{1}{\rho}\frac{\partial\,\widetilde{p}}{\partial x_{i}}+\nu\frac{\partial^{2}\,\widetilde{u}_{i}}{\partial x_{j}\partial x_{j}}-\frac{\partial\tau_{ij}}{\partial x_{j}}+\widetilde{f}_{i} \tag{4}\]

The tilde symbol stands for filtered variables and \(\tau_{ij}=\widetilde{u_{i}u_{j}}-\widetilde{u}_{i}\widetilde{u}_{j}\) is the subgrid-scale stress tensor. In the Smagorinsky model [31], the deviatoric part of \(\tau_{ij}\) is modelled as an eddy-viscosity effect:

\[\tau_{ij}-\frac{1}{3}\tau_{kk}\delta_{ij}=-2\nu_{sgs}\widetilde{S}_{ij}\,,\quad\nu_{sgs}=(C_{S}\Delta)^{2}\sqrt{2\widetilde{S}_{ij}\widetilde{S}_{ij}} \tag{5}\]

where \(\widetilde{S}_{ij}=\frac{1}{2}\left(\frac{\partial\widetilde{u}_{i}}{\partial x_{j}}+\frac{\partial\widetilde{u}_{j}}{\partial x_{i}}\right)\) is the rate-of-strain tensor of the resolved velocity field, \(\Delta\) is the filter width and \(C_{S}\) is a model coefficient that can be selected by the user (a minimal sketch of the evaluation of \(\nu_{sgs}\) is given below).
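As an illustration of equation 5, the following is a minimal sketch (not the OpenFOAM implementation) of the evaluation of the Smagorinsky eddy viscosity in a single cell, assuming the resolved velocity gradient tensor is available; the function name NuSgs and the numerical values are illustrative only.

```cpp
#include <array>
#include <cmath>
#include <iostream>

using Tensor = std::array<std::array<double, 3>, 3>;

// Smagorinsky eddy viscosity, eq. (5): nu_sgs = (Cs * Delta)^2 sqrt(2 S:S),
// with S_ij the resolved rate-of-strain tensor built from grad(u).
double NuSgs(const Tensor& gradU, double cs, double delta)
{
    double sContract = 0.0;  // S_ij S_ij
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
        {
            const double sij = 0.5 * (gradU[i][j] + gradU[j][i]);
            sContract += sij * sij;
        }
    return cs * cs * delta * delta * std::sqrt(2.0 * sContract);
}

int main()
{
    // Illustrative values: a simple shear du_x/dy = 100 1/s, Delta = 1 mm.
    Tensor gradU{{{0.0, 100.0, 0.0}, {0.0, 0.0, 0.0}, {0.0, 0.0, 0.0}}};
    std::cout << "nu_sgs = " << NuSgs(gradU, 0.17, 1e-3) << " m^2/s\n";
}
```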
Classical values found in the literature are \(C_{S}\in[0.1,\,0.2]\). This formulation, which is derived from Kolmogorov's asymptotic turbulence theory, fails to provide an accurate prediction of the interactions between the resolved and the filtered physical variables. The reason is that the SGS stress tensor in equation 5 is inherently dissipative and affects all the simulated scales of the flow [2]. Despite these negative features, the direct and simple implementation of such a model has made it a popular choice for most solvers.

The numerical simulation of equations 3 - 4 is performed using the open-source code _OpenFOAM_ [37]. This C++ library provides a finite volume [38] discretization of the dynamic equations, and modules for turbulence / SGS closures are already implemented. The equations are resolved using a PISO loop [38], which employs a Poisson equation to iteratively obtain a solenoidal condition for the velocity field, starting from the prediction obtained by the resolution of the momentum equation 4. Second-order centered schemes have been used for the discretization of the spatial derivatives. A second-order backward scheme has been used for the time advancement of the solution.

The LES equations are closed using the classical Smagorinsky model previously introduced. The implementation in OpenFOAM relies on two model constants, the parameter \(C_{k}\) and the normalized dissipation parameter \(C_{\varepsilon}\). The latter usually exhibits a high sensitivity to turbulence production effects and to the lack of homogeneity of the flow [39]. In the case of turbulent equilibrium, such as in Kolmogorov theory, \(C_{\varepsilon}=const\), and its value can be set by the user. OpenFOAM suggests a default value of \(C_{\varepsilon}=1.048\), which is in the range of experimental and numerical findings. Within this framework, the connection between \(C_{S}\) and \(C_{k}\) is:

\[C_{S}^{2}=C_{k}\sqrt{\frac{C_{k}}{C_{\varepsilon}}} \tag{6}\]

The LES filtering is performed implicitly using the grid resolution. The filter width \(\Delta\) is thus locally tied to the volume of each cell \(V_{c}\) (_cube-root volume_ filter option in OpenFOAM); more precisely, \(\Delta=\sqrt[3]{V_{c}}\).

### Data Assimilation

Data Assimilation [26; 27] encompasses a large spectrum of data-driven techniques whose main goal is to obtain an _augmented_ prediction of the random process investigated, combining different sources of information. The tools are usually grouped into two main categories. The _variational_ approaches perform the DA strategy via an optimization problem. The _sequential_ approaches usually rely on probabilistic methods based on Bayes' theorem. The present work relies on the Ensemble Kalman Filter [27; 35]. This tool, which has been extensively used in meteorological applications in the last decades, has seen numerous recent applications to problems in fluid mechanics [30; 34; 40; 41; 42; 43]. The most interesting feature for the present work is that the EnKF operates _sequentially_, i.e., it can combine streaming data obtained from different sources. This key feature will be exploited for the on-the-fly coupling of high-precision, localized DNS data with running LES calculations.

#### ii.2.1 Ensemble Kalman Filter (EnKF)

The Kalman Filter (KF) is a well-known DA tool first introduced in 1960 by R.E. Kalman [44] to estimate an augmented system state from sparse external data, or observations.
Both sources of information are affected by uncertainties, which are approximated as Gaussian random variables. The augmented state is obtained by combining a set of observations and a state vector obtained via a model. In the present work, the physical quantity updated is the velocity field \(\mathbf{u}\), which is obtained via LES (the model). The observation is sampled at specific locations from a high-resolution simulation (DNS) and is indicated as \(\alpha\). The corresponding sampled quantities at the same locations for the state vector are indicated as \(\mathbf{s}=\mathbf{Hu}\), where \(\mathbf{H}\) is a projection matrix that maps the values of the model state to the observation space.

Let us consider the time advancement of the model from the time step \(k\) to \(k+1\), in the case where observation is available at the latter time. The augmented state is obtained as:

\[\mathbf{u}_{k+1}^{a}=\mathbf{u}_{k+1}^{f}+\mathbf{K}_{k+1}(\alpha_{k+1}-\mathbf{s}_{k+1}) \tag{7}\]

The superscript \(f\) (_forecast_) represents the time advancement of the physical quantities by the model from time \(k\) to \(k+1\). The superscript \(a\) (_analysis_) represents the final augmented state of the algorithm. The Kalman gain \(\mathbf{K}\) is obtained from the manipulation of the error covariance matrix \(\mathbf{P}=\mathbb{E}((\mathbf{u}-\mathbb{E}(\mathbf{u}))(\mathbf{u}-\mathbb{E}(\mathbf{u}))^{T})\), which measures the correlations between the state vector and the observations. It takes into account the level of confidence in the model and in the observation, respectively, which is measured by the covariance of the uncertainties affecting the two sources of information. More precisely, the model and observation uncertainties can be described by unbiased Gaussian distributions with covariance matrices \(\mathbf{Q}_{k}\) and \(\mathbf{R}_{k}\), respectively.

The main drawback of the classical KF resides in the costly manipulations of the matrix \(\mathbf{P}\), as well as in the necessity to use linear models. The Ensemble Kalman Filter (EnKF) [35], which is an advanced DA tool based on the KF, is extensively used in the weather sciences [27]. It overcomes the aforementioned drawbacks by using the Monte Carlo method to estimate the error covariance matrix \(\mathbf{P}\) through the use of an ensemble of pseudo-random realizations. An ensemble of \(N_{e}\) physical states \(\mathbf{u}\), each of them described by \(N\) degrees of freedom, is advanced in time using a model \(\mathcal{M}\), which can in this case be non-linear. A state matrix \(\mathbf{U}\) of size \([N,\,N_{e}]\) is assembled at each analysis phase. Each column \(i=1,\cdots,N_{e}\) of the state matrix represents a physical state \(\mathbf{u}_{i}\) obtained by the \(i^{th}\) ensemble member. Considering the time advancement of the solution from the instant \(k\) to \(k+1\), such as in equation 7 for the KF, the EnKF provides an ensemble estimation of the error covariance matrix \(\mathbf{P}\) under the hypothesis of statistical independence of the members:

\[\mathbf{P}=\mathbf{\Gamma}(\mathbf{\Gamma})^{T} \tag{8}\]

where \(\mathbf{\Gamma}\) is the anomaly matrix, which is derived from the state matrix \(\mathbf{U}\) of the ensemble members.
It quantifies the deviation of the state vectors from their ensemble mean:

\[\mathbf{\Gamma}_{k+1}=\frac{\mathbf{u}_{i,k+1}^{f}-\langle\mathbf{u}\rangle_{k+1}^{f}}{\sqrt{N_{e}-1}}\,,\qquad\langle\mathbf{u}\rangle_{k+1}^{f}=\frac{1}{N_{e}}\sum_{i=1}^{N_{e}}\mathbf{u}_{i,k+1}^{f} \tag{9}\]

In order to obtain a well-posed mathematical problem, the array of \(N_{o}\) available observations is artificially perturbed to obtain \(N_{e}\) sets of values. To do so, a Gaussian noise based on the covariance matrix of the measurement error \(\mathbf{R}_{k+1}\) is added to the observation vector:

\[\alpha_{i,k+1}=\alpha_{k+1}+\mathbf{e}_{i,k+1},\text{ with }\mathbf{e}_{i,k+1}\sim\mathcal{N}(0,\mathbf{R}_{k+1}) \tag{10}\]

The model realizations and the observations are combined over the observation space using the projection matrix \(\mathbf{H}\):

\[\mathbf{s}_{i,k+1}=\mathbf{H}\mathbf{u}_{i,k+1}^{f} \tag{11}\]

These elements provide a closed form for the Kalman gain:

\[\mathbf{K}_{k+1}=\boldsymbol{\Gamma}_{k+1}(\mathbf{S}_{k+1})^{T}\left[\mathbf{S}_{k+1}(\mathbf{S}_{k+1})^{T}+\mathbf{R}_{k+1}\right]^{-1} \tag{12}\]

with

\[\mathbf{S}_{k+1}=\frac{\mathbf{s}_{i,k+1}-\langle\mathbf{s}\rangle_{k+1}}{\sqrt{N_{e}-1}}\;,\qquad\langle\mathbf{s}\rangle_{k+1}=\frac{1}{N_{e}}\sum_{i=1}^{N_{e}}\mathbf{s}_{i,k+1} \tag{13}\]

For a limited ensemble size, \(\mathbf{R}_{k+1}\) is preferred in equation 12 over the product of the error anomaly matrices \(\mathbf{E}_{k+1}(\mathbf{E}_{k+1})^{T}\), as it provides a simplified algorithm and a reduced computational cost [45; 46]. Finally, the physical state predicted by each ensemble member is updated using the Kalman gain:

\[\mathbf{u}_{i,k+1}^{a}=\mathbf{u}_{i,k+1}^{f}+\mathbf{K}_{k+1}(\alpha_{i,k+1}-\mathbf{s}_{i,k+1}) \tag{14}\]

The approaches based on the EnKF can also simultaneously optimize the free parameters of the model, so as to minimize the discrepancy between the model and the observation during the analysis phase. These parameters are usually assembled in an array referred to as \(\theta\). A straightforward strategy to perform such an optimization is the so-called _extended state_ [27]. Here, the EnKF problem is resolved for a state vector \(\mathbf{u_{ext}}\) defined as:

\[\mathbf{u_{ext}}=\begin{bmatrix}\mathbf{u}\\ \theta\end{bmatrix} \tag{15}\]

The size of the extended state is now equal to \(N_{ext}=N+N_{\theta}\), where \(N_{\theta}\) is the number of parameters to be optimized. This modification brings a negligible increase in computational costs if \(N_{\theta}\ll N\), and it simultaneously provides an updated state estimation and an optimized parametric description of the model at the end of the analysis phase.

#### ii.2.2 Inflation

One of the major drawbacks of the Ensemble Kalman Filter is the fast collapse of the variability of the state matrix. The consequence of this unwanted reduction of the variability is the convergence of the state matrix towards a localized optimum, which is strongly tied to the prior state provided. If the latter is not accurate, then the precision of the optimization via EnKF can be severely impacted. One can increase the global variability of the system and decrease the sampling errors by using a higher number of members in the ensemble, gaining accuracy in the prediction of the EnKF. However, this strategy is not conceivable for fluid dynamics applications, where computational costs preclude the usage of large ensembles.
In fact, the number of members generally used for three-dimensional runs is around \(N_{e}\in[40,100]\) [29; 30], which is far from classical Monte Carlo convergence. This problem is usually mitigated by inflating the variance of the ensemble _after_ the analysis phase. This can be easily obtained by increasing the discrepancy between each state vector \(\mathbf{u}_{i,k+1}^{a}\) and the ensemble mean \(\langle\mathbf{u}^{a}\rangle\) through algebraic operations driven by a coefficient \(\lambda\). This procedure is referred to as _multiplicative inflation_, and it can be performed in a _deterministic_ or in a _stochastic_ way:

\[deterministic\qquad\mathbf{u}_{i}^{a}\longrightarrow\langle\mathbf{u}^{a}\rangle+\lambda_{i}(\mathbf{u}_{i}^{a}-\langle\mathbf{u}^{a}\rangle)\qquad with\ \lambda_{i}>1 \tag{16}\]

\[stochastic\qquad\mathbf{u}_{i}^{a}\longrightarrow(1+\lambda_{i})\mathbf{u}_{i}^{a}\qquad with\ \lambda_{i}\sim\mathcal{N}(0,\sigma) \tag{17}\]

The deterministic implementation can be very efficient during the initial analysis phases of the calculation. Since it is applied to the discrepancy from the mean values of the ensemble, the process is quite stable, and higher values of \(\lambda\) can be used. Nonetheless, it is less efficient when the ensemble exhibits a strong collapse of the physical solution (\(\mathbf{u}_{i}^{a}-\langle\mathbf{u}^{a}\rangle\approx 0\)). On the other hand, stochastic inflation is very useful to mitigate a fast collapse of the state matrix, allowing it to target a global optimum solution. The Gaussian distribution used to determine \(\lambda_{i}\) is usually truncated to avoid the generation of outliers, which could lead to the divergence of the EnKF.

#### ii.2.3 Localization

The coefficients of the state matrix correspond to values of the flow variables (namely the velocity field) at specific points of the physical domain, usually the centers of the mesh elements. As discussed in Sec. II.2.1 and shown in eq. 12, the Kalman gain establishes a correlation between those values and the values of the state matrix projected onto the observation space, i.e., the sensors where high-fidelity data are available. Considering that the physical correlation naturally decays with distance in continuous systems, the approximations used to determine an ensemble Kalman gain can lead to spurious effects on the analyzed state matrix for large domains. These effects can be responsible for critical problems such as unphysical solutions, which can lead to the divergence of the calculations. Again, these problems can be reduced by increasing the number of ensemble members, which is not a cost-efficient solution for applications involving CFD. Therefore, different strategies need to be employed to mitigate the effects of spurious correlations.

The most used strategy to reduce them is to operate on the coefficients of the EnKF correlating variables which are calculated at points far from each other. In this case, one would expect the physical phenomena to be completely decorrelated. Two possible strategies may be adopted to obtain this result [27]. The _covariance localization_ directly operates on the coefficients of the error covariance matrix \(\mathbf{P}_{k+1}^{f}\), pre-multiplying them with a term that tends to zero as the physical distance between the observation sensors and the elements of the state increases. This process is mathematically performed using a coefficient-wise multiplication between the covariance matrix and a correction matrix referred to as \(\mathbf{L}\).
This correction can be applied directly in the algorithm without any structural modification: damping the error covariance coefficient-wise, \([{\bf P}_{k+1}^{f}]_{i,j}\longrightarrow[{\bf L}]_{i,j}\,[{\bf P}_{k+1}^{f}]_{i,j}\), is equivalent to damping the gain itself, so that the localized Kalman gain becomes: \[[{\bf K}_{k+1}^{loc}]_{i,j}=[{\bf L}]_{i,j}\,[{\bf K}_{k+1}]_{i,j} \tag{18}\] The structure of the matrix \({\bf L}\) must be set by the user. In fluid systems, and in particular for turbulence, the correlation decays rapidly in space. Therefore, a commonly used structure for the localization matrix is an exponential decay: \[{\bf L}(i,j)=e^{-\Delta_{i,j}^{2}/l} \tag{19}\] where \(\Delta_{i,j}\) is the distance between the given observation sensor and the point of evaluation of the model (the center of the mesh element in CFD), and \(l\) is a correlation length scale that can be tuned according to the local characteristics of the test case. Another way to localize the Kalman gain is to use _physical localization_. The principle is straightforward: instead of performing the EnKF on the entire physical domain, the calculation is carried out on a _clipped_ domain, which must contain the observation sensors. This strategy also has the advantage of reducing the number of degrees of freedom involved in the DA procedure, which can produce a significant gain in terms of the computational resources required. _Covariance localization_ is commonly used together with physical localization to avoid discontinuities of the updated physical state, in particular at the interface of the clipped domain; this prevents potential divergence of the model runs. This combination is very efficient in speeding up the calculation while simultaneously improving its stability and the accuracy of the prediction for the reduced ensemble sizes currently usable for CFD-based studies [36]. The DA procedure used in this study is qualitatively shown in Fig. 1 and a detailed algorithm of the EnKF (including state-of-the-art modifications) is provided in Alg. 1.
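A sketch of the two localization operations follows, assuming the sensor and cell-center coordinates are stored as (n, 3) arrays; the function names are ours.

```python
import numpy as np

def localization_matrix(x_state, x_obs, l):
    """Eq. (19): exponential-decay weights between the N state points
    (x_state, shape (N, 3)) and the N_o sensors (x_obs, shape (N_o, 3))."""
    d2 = ((x_state[:, None, :] - x_obs[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / l)

def localize_gain(K, L):
    """Eq. (18): Schur (element-wise) product of the (N, N_o) Kalman
    gain with the correlation-damping matrix L of the same shape."""
    return L * K
```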
Figure 1: Scheme representing the ensemble Kalman filter ``` Input:\(\mathcal{M}\), \(\mathcal{H}\), \(\mathbf{R}_{k+1}\), and some priors for the state system \(\mathbf{u}_{i,0}^{a}\), where usually \(\mathbf{u}_{i,0}^{a}\sim\mathcal{N}(\mu_{N},\sigma_{N}^{2})\) for\(k=0\) to \(K-1\)do for\(i=1\) to \(N_{e}\)do 1 Advancement in time of the state vectors: \(\mathbf{u}_{i,k+1}^{f}=\mathcal{M}\mathbf{u}_{i,k}^{a}\) 2 Creation of an observation matrix from the observation data by introducing errors: \(\alpha_{i,k+1}=\alpha_{k+1}+\mathbf{e}_{i,k+1}\), with \(\mathbf{e}_{i,k+1}\thicksim\mathcal{N}(0,\mathbf{R}_{k+1})\) 3 Calculation of the predicted observation: \(\mathbf{s}_{i,k+1}=\mathcal{H}\mathbf{u}_{i,k+1}^{f}\) 4 Calculation of the ensemble means: \(\langle\mathbf{u}\rangle_{k+1}^{f}=\frac{1}{N_{e}}\sum_{i=1}^{N_{e}}\mathbf{u }_{i,k+1}^{f},\ \langle\mathbf{s}\rangle_{k+1}=\frac{1}{N_{e}}\sum_{i=1}^{N_{e}} \mathbf{s}_{i,k+1}\), 5 Calculation of the anomaly matrices: \(\mathbf{\Gamma}_{k+1}=\frac{\mathbf{u}_{i,k+1}^{f}-\langle\mathbf{u}\rangle_{ k+1}^{f}}{\sqrt{N_{e}-1}},\ \mathbf{S}_{k+1}=\frac{\mathbf{s}_{i,k+1}-\langle\mathbf{s}\rangle_{k+1}}{ \sqrt{N_{e}-1}}\), 6 Calculation of the Kalman gain: \(\mathbf{K}_{k+1}=\mathbf{\Gamma}_{k+1}(\mathbf{S}_{k+1})^{T}\left[\mathbf{S} _{k+1}(\mathbf{S}_{k+1})^{T}+\mathbf{R}_{k+1}\right]^{-1}\) 7 Localization of the Kalman gain: \(\mathbf{K}_{k+1}^{loc}=[\mathbf{L}]_{i,j}[\mathbf{K}_{k+1}]_{i,j}\) 8 Update of the state matrix: \(\mathbf{u}_{i,k+1}^{a}=\mathbf{u}_{i,k+1}^{f}+\mathbf{K}_{k+1}^{loc}(\alpha_{ i,k+1}-\mathbf{s}_{i,k+1})\) 9 Inflation of the state matrix: \(\mathbf{u}_{i,k+1}^{a}=(1+\lambda_{i})\mathbf{u}_{i,k+1}^{a}\) ``` **Algorithm 1**Algorithm for the Ensemble Kalman Filter #### ii.2.4 Cones _Coupling OpenFOAM with Numerical EnvironmentS_ (CONES) is a C++ library add-on to the open-source CFD software OpenFOAM. CONES allows OpenFOAM to exchange field data through MPI communications [36]. The coupling of OpenFOAM with other numerical environments is operated by CWIPI (Coupling With Interpolation Parallel Interface) developed by CERFACS and ONERA [47]. CONES has been developed by the team in order to perform on-the-fly DA with OpenFOAM, which has been coupled with a tailored EnKF code for this purpose. The main advantages CONES provides to perform DA with OpenFOAM are: * Data Assimilation is performed _online_ without stopping the CFD runs, which represent the ensemble members. The computational resources required to restart the simulations after an analysis phase are large, usually more than the total computational cost for the DA run if several analysis steps have to be performed. * Communication of large physical fields (arrays of millions of elements such as the velocity field) is performed rapidly and efficiently. * Compilation of additional functions is performed via _wmake_ routine in the user-dedicated library of OpenFOAM. * Coupling between codes is performed preserving the original structure of the existing CFD solvers. Every CONES-related function is contained in a Pstream (Part of OpenFOAM) modified library, hence, data exchange is done at the end of the solver loop by calling specific functions, and the calculation loop remains unmodified. * Direct HPC communications are established between multiple processors, which handle partitions of the numerical simulations and the DA process. Data flow and exchanges between codes are summarized in Fig. 2. 
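Putting the pieces together, one analysis phase of Alg. 1 could read as in the compact sketch below. This is a schematic re-implementation, not the CONES code itself; the explicit matrix inverse is kept for readability, although a linear solve would be preferable in practice.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_analysis(U_f, H, alpha, R, L=None, lam_sigma=0.0):
    """One analysis phase of Alg. 1 for an (N, N_e) forecast ensemble
    U_f, an (N_o, N) projection H, an (N_o,) observation alpha and an
    (N_o, N_o) error covariance R. L is an optional (N, N_o)
    localization matrix and lam_sigma the stochastic-inflation level."""
    n, n_ens = U_f.shape
    # Steps 2-3: perturbed observations and predicted observations
    A = alpha[:, None] + rng.multivariate_normal(
        np.zeros(alpha.size), R, size=n_ens).T          # (N_o, N_e)
    S_full = H @ U_f                                    # (N_o, N_e)
    # Steps 4-5: ensemble means and anomaly matrices
    G = (U_f - U_f.mean(axis=1, keepdims=True)) / np.sqrt(n_ens - 1)
    S = (S_full - S_full.mean(axis=1, keepdims=True)) / np.sqrt(n_ens - 1)
    # Steps 6-7: (localized) Kalman gain, eq. (12)
    K = G @ S.T @ np.linalg.inv(S @ S.T + R)            # (N, N_o)
    if L is not None:
        K = L * K                                       # eq. (18)
    # Step 8: update of the state matrix
    U_a = U_f + K @ (A - S_full)
    # Step 9: stochastic inflation, eq. (17), truncated at 2 sigma
    if lam_sigma > 0.0:
        lam = np.clip(rng.normal(0.0, lam_sigma, n_ens),
                      -2 * lam_sigma, 2 * lam_sigma)
        U_a = U_a * (1.0 + lam)[None, :]
    return U_a
```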
As CWIPI is based on the MPI library, both the MPI and CWIPI environments have to be initialized when launching the calculation. Similarly, they have to be finalized at the end. Once the forecast step(s) of the EnKF algorithm are performed, the sampled data \(\mathbf{s}_{k+1}\) are interpolated for each member and transferred to the EnKF code for the analysis step. The entire velocity field and the studied parameters are also sent in order to perform the EnKF algorithm. In CONES, CWIPI exchanges data through coincident meshes; if the meshes are not coincident, the field data are interpolated automatically. This is an important feature for the potential use of _multigrid_-based DA algorithms in the future [30]. The observation is uploaded just before the analysis step. After the state vectors \(\mathbf{u}_{i,k+1}^{a}\) have been updated, the information is sent back to each member to resume the forecast steps with the updated physical states and/or values of the model constants. The state matrix contains the velocity fields of all the members and the constant \(C_{k}\) of the turbulence model optimized in this study. Details about the optimization of this parameter will be provided in Sec. III. The observation, containing velocities of the reference data for all available times, is stored in a single .txt file that is read at each analysis phase. The related computational cost is negligible compared to that of the Kalman gain calculation in the EnKF algorithm, as shown in appendix B.

Figure 2: Scheme of the library CONES

## III Test case and set-up of the DA analysis

### Turbulent plane channel flow, \(Re_{\tau}\approx 550\)

The test case chosen to perform the DA analysis is the turbulent plane channel flow for \(Re_{\tau}=u_{\tau}h/\nu=546\). Here \(u_{\tau}=\sqrt{\tau_{w}/\rho}\) is the friction velocity and \(\tau_{w}\) is the shear stress at the wall; \(h\) is the half-height of the channel and \(\nu\) is the kinematic viscosity. This academic test case, which is driven by shear mechanisms at the wall and naturally excludes complex aspects associated with favorable/adverse mean pressure gradients [1], is nonetheless problematic for LES [48]. Complex non-linear interactions occur between two main error sources, namely those associated with the numerical discretization and with the SGS closure. These mechanisms are responsible for a very high sensitivity to relatively small variations in the grid resolution and in the SGS closure selected. Therefore, this test case is an excellent candidate to study the objectives presented in the introduction. Results obtained from the large-eddy simulations performed in this work will be compared with DNS data on the same test case previously obtained by the research team [49]. The geometric features are shown in Fig. 3. The size of the domain investigated is \(3\pi h\times 2h\times\pi h\), where \(x\) is the streamwise direction, \(y\) the wall-normal direction and \(z\) the spanwise direction. The top and bottom boundaries are no-slip walls. A periodic boundary condition is applied on the four lateral sides. A source term, already integrated within the solver of _OpenFOAM_, is included in the dynamic equations to preserve the global mass flow rate in time. More precisely, the source term targets the conservation of the bulk streamwise velocity \(u_{b}=\int\!\!\int\!\!\int_{V_{D}}u_{x}\,dV^{\prime}/V_{D}\), where \(V_{D}\) is the volume of the physical domain investigated.
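As an illustration of the mass-flow-rate forcing just described, a schematic proportional correction is sketched below. This is our simplified reading of such a source term, not the actual OpenFOAM implementation; in particular, the choice of gain is ours.

```python
import numpy as np

def bulk_forcing(u_x, cell_vol, u_b_target, dt):
    """Schematic bulk-velocity controller: a uniform body force that
    drives the volume-averaged streamwise velocity back to its target
    within one time step (gain choice is ours, not OpenFOAM's)."""
    u_b = np.sum(u_x * cell_vol) / np.sum(cell_vol)
    return (u_b_target - u_b) / dt  # acceleration added to the x-momentum eq.
```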
The targeted criterion used for all simulations is \(u_{b}=0.899u_{c}\), where \(u_{c}\) is the mean streamwise velocity at the center of the channel obtained by the DNS. The kinematic viscosity \(\nu\) is the same for the DNS and LES calculations. The bulk Reynolds number obtained by the DNS is equal to \(Re=2hu_{b}/\nu=20124\). A baseline LES is performed using the well-known Smagorinsky subgrid-scale model [31] (see Sec. II.1). This simulation is run by the pimpleFoam solver of the OpenFOAM CFD library, a solver tailored for the simulation of incompressible turbulent flows using the PIMPLE algorithm. The grid is composed of \(350\,000\) cells, whose details are reported in Tab. 1 along with the reference DNS. The size of the grid elements is normalized with respect to the viscous wall unit \(\delta_{\nu}=\nu/u_{\tau}\). The superscript \(\star\) is used when normalizations are performed using the \(u_{\tau}\) calculated by the DNS; the superscript \(+\) is used when \(u_{\tau}\) is obtained by each LES simulation. \(\Delta x^{\star}\) and \(\Delta z^{\star}\) are obtained using a uniform distribution. A geometric expansion is used to control the size of the elements \(\Delta y\) in the wall-normal direction, in order to provide higher resolution at the wall. The size of the smallest element \(\Delta y_{1}^{\star}\) at the wall and of the largest element \(\Delta y_{c}^{\star}\) at the centerline are reported. The size of the mesh elements used for the baseline simulation is larger than typical values observed in LES for this case, which are \(\Delta x^{\star}\approx 50\), \(\Delta y_{1}^{\star}\approx 1\) and \(\Delta z^{\star}\approx 20\) [29].

Figure 3: Size of the physical domain investigated.

This choice was made in order to i) assess the capabilities of the DA method to provide an accurate state estimation and parametric inference even in under-resolved conditions and ii) obtain faster runs of the DA algorithm using a sufficiently large ensemble of simulations. The initial conditions for the baseline LES case were set using an interpolated field from a DNS solution. The simulation was carried out for a duration of 50 advective times, calculated as \(t_{A}=h/u_{c}\), in order to wash out the initial field. Then, average quantities have been calculated over a time window of \(900t_{A}\). The time step for the advancement of the solution is constant and equal to \(\Delta t=0.02t_{A}\). Results from the baseline LES are now compared with the DNS and with additional reference DNS results freely available online for a very similar \(Re_{\tau}\) [50]. Fig. 4 shows the normalized mean streamwise velocity profile \(u^{+}=\overline{u_{x}}/u_{\tau}\). Averages (indicated with the overline) are performed in time as well as in the streamwise and spanwise directions, in order to obtain improved statistical convergence. One can see that the discrepancy between the LES prediction and the DNS results is significant. One of the key elements affecting this lack of accuracy is the erroneous prediction of the shear stress at the wall \(\tau_{w}\) and thus of the friction velocity \(u_{\tau}\). For this parameter, a discrepancy of 28% with the DNS results is observed. The main source of error for the prediction of this quantity is related to the SGS closure used, for which \(\nu_{sgs}\) does not correctly scale to zero approaching the wall, as shown in Fig. 5.
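The wall-unit grid spacings of Tab. 1 can be checked directly from the quoted domain size and friction Reynolds number, as in the short script below.

```python
import numpy as np

# Grid spacing in viscous units for the LES mesh of Tab. 1
# (superscript *: normalization with the DNS friction velocity).
re_tau = 546.0                 # Re_tau = u_tau * h / nu
h = 1.0
delta_nu = h / re_tau          # viscous length scale nu / u_tau
dx = 3 * np.pi * h / 70        # uniform spacing L_x / N_x
dz = np.pi * h / 50            # uniform spacing L_z / N_z
print(f"dx* = {dx / delta_nu:.1f}, dz* = {dz / delta_nu:.1f}")
# -> dx* = 73.5, dz* = 34.3, matching the 73 and 34 of Tab. 1 up to rounding
```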
A large discrepancy is also observed in the accuracy of the prediction of the components of the resolved Reynolds stress tensor \(\overline{u^{\prime}_{i}u^{\prime}_{j}}\). The quantities \(u^{\prime}_{x}u^{\prime}_{x}{}^{+}=\overline{u^{\prime}_{x}u^{\prime}_{x}}/u ^{2}_{\tau}\), \(u^{\prime}_{y}u^{\prime}_{y}{}^{+}=\overline{u^{\prime}_{y}u^{\prime}_{y}}/u ^{2}_{\tau}\), \(u^{\prime}_{z}u^{\prime}_{z}{}^{+}=\overline{u^{\prime}_{z}u^{\prime}_{z}}/u ^{2}_{\tau}\) and \(u^{\prime}_{x}u^{\prime}_{y}{}^{+}=\overline{u^{\prime}_{x}u^{\prime}_{y}}/u ^{2}_{\tau}\) are shown in Fig. 6. One can see that neither the magnitude nor the position of the peak is accurately predicted. The results obtained via the baseline LES indicate that the numerical set-up described, combined with the Smagorinsky SGS closure, does not provide an accurate prediction of the statistical moments of the flow field. In subsection III.2, the DA procedure used to improve the flow prediction with this LES setup is detailed.

### Data Assimilation strategy

The DA simulations performed in this work aim to provide instantaneous augmented states of the test case investigated. This objective is achieved by coupling on-the-fly the numerical prediction of the LES solver with localized information sampled from the DNS reference. This strategy relies on three main ingredients:

* _The model_, which provides a quasi-continuous description of the physical phenomenon investigated. In this analysis, the _model_ is the LES setup presented in section III.1.
* _The observation_. Time-resolved samples of the instantaneous velocity field from the reference DNS are used for this purpose. The samples are collected over 10800 sensors in the physical domain for \(0.48\leq y^{+}\leq 56.4\), i.e. in the viscous sublayer, in the buffer region, and in the inertial range. Sampling in time is performed at a constant rate of \(\Delta t_{DA}=0.04t_{A}\).
* _The coupler._ CONES is used to couple the incompressible OpenFOAM solver pimpleFoam with an EnKF algorithm as presented in Sec. II.1.

\begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline Type & \(L_{x}\) & \(L_{y}\) & \(L_{z}\) & \(N_{x}\) & \(N_{y}\) & \(N_{z}\) & \(\Delta x^{\star}\) & \(\Delta y_{1}^{\star}\) & \(\Delta y_{c}^{\star}\) & \(\Delta z^{\star}\) & Cells \\ \hline LES & \(3\pi\) & \(2h\) & \(\pi\) & 70 & 100 & 50 & 73 & 2.6 & 27.8 & 34 & \(3.5\times 10^{5}\) \\ DNS & \(6\pi\) & \(2h\) & \(2\pi\) & 1024 & 256 & 512 & 9.9 & 0.96 & 11.2 & 6.6 & \(13.4\times 10^{7}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Details of the grids used for the LES calculations as well as for the DNS

The setup of the EnKF procedure is now detailed. The size \([N_{ext}\,,\,N_{e}]\) of the state matrix \(\mathbf{U}\) is given by \(N_{e}=40\) (number of ensemble members) and \(N_{ext}=N+N_{\theta}\). Here \(N_{\theta}\) is the number of parameters optimized by the EnKF, and it is different for the two DA runs presented in the following. \(N=3\,n_{cells}\) is equal to three times the number of grid elements used in the DA procedure, since the degrees of freedom considered are the three components of the velocity field for each of the \(n_{cells}\) mesh elements. The value of \(n_{cells}\) is strictly connected with the physical localization performed by clipping the numerical domain analyzed. This procedure, which is illustrated in Fig. 7, consists of excluding from the DA calculation the grid elements for \(0.18<y/h<1.82\).
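A sketch of the physical clipping just described, assuming the cell-center wall-normal coordinates are available as a flat array; the function name is ours.

```python
import numpy as np

def clip_state(cell_y, fields, h=1.0):
    """Physical localization: keep only the cells outside the band
    0.18 < y/h < 1.82, i.e. the near-wall regions retained for the
    EnKF update (cf. Fig. 7); 'fields' has shape (3, n_cells)."""
    keep = (cell_y / h <= 0.18) | (cell_y / h >= 1.82)
    return fields[:, keep], keep
```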
These elements are relatively far from the sensors, and therefore the risk of spurious correlations affecting the stability of the DA algorithm is high. In addition, the excluded domain represents around \(56\%\) of the total number of cells used by the LES model. The computational gain for the calculation of the Kalman gain is also approximately \(56\%\), as shown in Tab. 3. Covariance localization is applied to the calculation as well, so that discontinuities associated with the physical clipping are smoothed out.

Figure 4: Averaged streamwise velocity profiles \(u^{+}\) for the baseline Smagorinsky LES (\(\star\)) and the DNS (\(-\)) compared to reference data (\(--\))

Figure 5: Distribution in the wall-normal direction of \(\overline{\nu}_{sgs}/\nu\) for the baseline LES

The structure of the matrix \(L\) used for covariance localization is the one presented in equation 19, with the parameter \(l=0.175\) in the streamwise and spanwise directions and \(l=0.000985\) in the wall-normal direction. The observation is obtained from 408 sensors selected among the 10800 available. The constraint \(x\in[0.6\pi,\,2.4\pi],\ z\in[0.25\pi,\,0.75\pi]\) has been applied in the selection, to take into account the different domain sizes of the LES and the DNS and to exclude potential problems emerging with the periodic boundary conditions. The location of the probes, indicated as red dots, is shown in Fig. 7. As previously discussed, the three components of the instantaneous velocity field are sampled. However, in the following configurations, the observation array is composed of 408 samples of the streamwise velocity only. The confidence in the DNS data is driven by the matrix \(R\) presented in Sec. II.2.1. The matrix is diagonal and expressed as \(R=\sigma_{m}^{2}I\), where \(\sigma_{m}\) quantifies the uncertainty of the measurements. A 20% relative uncertainty is applied to the value of each observation. This implies that the variance of the observed velocity ranges between 0.003 and 0.188, depending on the distance of the sensor from the wall. It also implies that the relative weight given to each observation is the same. A specific DA run (DA-LESA), presented in appendix A, has been performed taking as observation the three components of the velocity field for each sensor. The general algorithm for the DA run is now presented. The ensemble members are initialized with a _prior_ state in terms of the initial physical field and of the parametric description of the SGS model. The former is the field of the converged Smagorinsky simulation presented in Sec. III.1 and is the same for every member of the ensemble. The initial conditions for the parametric description of the SGS model are different for the two DA simulations performed, and they will be described in sections III.2.1 and III.2.2.

Figure 6: Components of the Reynolds stress tensor for the baseline Smagorinsky LES (\(\star\)) and the DNS (\(-\)) compared to reference data (\(--\))

Once the initial state is provided, the DA procedure advances the LES ensemble members in time for a total of \(300t_{A}\), performing an analysis phase every \(0.12t_{A}\). This choice, which implies that only one out of every three observation samples is integrated within the DA scheme, results in a total of 2500 analysis phases. Since the time step of the LES simulations is \(\Delta t=0.02t_{A}\), one analysis is performed every six forecast steps. No _state_ inflation is used in the DA runs.
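The sensor selection and the observation covariance described above can be sketched as follows; the 20% relative-uncertainty interpretation (a \(\sigma_{m}\) proportional to the locally observed velocity) is our reading of the quoted variance range.

```python
import numpy as np

def select_sensors(coords):
    """Keep the sensors satisfying the spatial constraint used here:
    x in [0.6*pi, 2.4*pi], z in [0.25*pi, 0.75*pi]; coords is an
    (n_sensors, 3) array of (x, y, z) positions."""
    x, z = coords[:, 0], coords[:, 2]
    mask = (x >= 0.6 * np.pi) & (x <= 2.4 * np.pi) \
         & (z >= 0.25 * np.pi) & (z <= 0.75 * np.pi)
    return np.where(mask)[0]

def observation_covariance(u_obs, rel_err=0.20):
    """Diagonal R with a 20% relative uncertainty on each observed
    velocity: sigma_m = rel_err * |u_obs|, so the variance grows with
    the distance from the wall (assumed reading of the text)."""
    return np.diag((rel_err * np.abs(u_obs)) ** 2)
```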
However, a time-varying _parametric_ stochastic inflation is included to improve the efficiency of the DA optimization. No inflation was used from \(t_{A}=0\) to \(t_{A}=12\). Then, a relatively strong inflation with \(\lambda=10\%\) was applied for \(t_{A}\in[12,24]\), followed by \(\lambda=5\%\) for \(t_{A}\in[24,36]\). Finally, \(\lambda=1\%\) was used while carrying out the averaging for the calculation of the statistical moments. Statistical averages are calculated in the range \(t_{A}\in[50,\,300]\), in order to safely dissipate the high levels of variance previously used for the convergence of the parametric description. The two main DA simulations are now presented in detail, highlighting the differences between the procedures.

#### iii.2.1 DA run 1 (DA-LES1): optimization of the coefficient \(C_{k}\)

In this first DA run (referred to as DA-LES1) the vector of the parameters to be optimized consists of one element, the model constant \(C_{k}\) of Smagorinsky's SGS closure. This is equivalent to optimizing the well-known coefficient \(C_{S}\), which has been studied in the literature in particular in the framework of UQ analyses [22; 23]. As previously stated, the value of this global constant is updated at each analysis phase. Initial values of the \(N_{e}=40\) ensemble simulations are determined using a bounded Gaussian distribution \(\mathcal{N}(\mu_{u},\sigma_{u}^{2})\). Considering data in the literature [22], \(\mu_{u}=0.094\) and \(\sigma_{u}=0.03\) were chosen to investigate a suitably large parametric space. The Gaussian distribution is constrained to values in the range \(\mu_{u}\pm 2\sigma_{u}\), in order to avoid initial nonphysical parametrizations which could lead to the divergence of the algorithm.

Figure 7: Location of the sensors used to obtain observation for data assimilation. The region in red corresponds to the physical clipping for the EnKF, i.e. the region in space where the state augmentation is performed.

#### iii.2.2 DA run 2 (DA-LES2): model spatial expansion for \(C_{k}\)

Following the results of DA-LES1, a more complex optimization is targeted to improve the predictive capabilities of the LES solver. Exploiting the homogeneity of the test case in the streamwise direction \(x\) and the spanwise direction \(z\), the optimization of this second run (referred to as DA-LES2) targets the behavior of a functional expression for \(C_{k}(y)\). More precisely, the free coefficients in a Gaussian expansion of \(C_{k}\) are considered as variables to be optimized: \[C_{k}=\sum_{i=1}^{n}\exp\left(a_{i}-\frac{(y-y_{i})^{2}}{\sigma_{i}^{2}}\right) \tag{20}\] For each of the \(n\) Gaussian functions used in the decomposition, the free parameters to be determined are \(a_{i}\) (intensity of the peak), \(\sigma_{i}\) (width of the function), and \(y_{i}\) (position of the peak). The functions are considered to be symmetric with respect to the half channel height, owing to the statistical symmetry in the wall-normal direction \(y\). The decomposition is performed using \(n=5\) Gaussian functions. This adds up to 15 parameters in the control vector \(\theta\) to be optimized via the EnKF. The average prior distribution for these functions is shown in Fig. 8a. This initial distribution is chosen so that the peaks of three functions are closer to the wall, in order to provide a suitable representation of \(\nu_{sgs}\) in this region.
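A possible implementation of the expansion of equation (20) is given below, with the wall-normal symmetry imposed by mirroring each mode about the half channel height; this mirroring is one way, among others, to realize the symmetry stated above.

```python
import numpy as np

def c_k_profile(y, a, y0, sig, h=1.0):
    """Eq. (20) with each of the n Gaussian modes mirrored about the
    channel mid-plane (y = h for a channel occupying 0 <= y <= 2h).
    a, y0, sig are length-n arrays of coefficients."""
    y = np.asarray(y)[:, None]                       # (n_y, 1)
    modes = np.exp(a - (y - y0) ** 2 / sig ** 2)     # lower-wall peaks
    mirror = np.exp(a - (y - (2 * h - y0)) ** 2 / sig ** 2)
    return (modes + mirror).sum(axis=1)              # (n_y,)
```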
For each ensemble member, the values of the 15 free coefficients are determined using a truncated Gaussian (\(\pm 2\sigma\)) perturbation, so that \(a_{i}=\mathcal{N}(-4.5,0.3^{2})\), \(\sigma_{i}=\mathcal{N}(0.18,0.04^{2})\) and \(y_{i}=\mathcal{N}(0.15,0.05^{2})\).

## IV Prediction of the statistical features using on-the-fly DA

The previous discussion stressed how the DA tools provide an update of the physical state as well as an optimization of the model. In this section, particular attention is focused on the latter aspect. Results from DA-LES1 and DA-LES2 are investigated to observe how the DA procedure dynamically affects the value of the parameter \(C_{k}\), as well as to assess the effects of the parametric optimization on the statistical behavior of the flow. One important point that must be stressed is that such statistical moments are not directly observed by the DA algorithms. In fact, unlike recent analyses in the literature [29], the DA procedure relies on instantaneous flow fields obtained from the model and sampled as observation. First, the optimized behavior of the parameter \(C_{k}\) is investigated. For classical simulations using the values prescribed by the numerical code, one has \(C_{k}=0.094\), \(C_{\varepsilon}=1.048\), which corresponds to \(C_{S}\approx 0.17\). Once the convergence of the parametric description is obtained, the coefficients exhibit a very weak time evolution. Results obtained from the run DA-LES1, which targets a global \(C_{k}\) optimization, show that the time-averaged optimized value is \(C_{k}\approx 0.014\). This result, which corresponds to \(C_{S}\approx 0.04\), is 7 times smaller than the default \(C_{k}\) value provided by the code. The uncertainty associated with the limited number of ensemble members has been assessed by repeating the initial DA phases using different random distributions for \(C_{k}\). These results indicated that the optimized values fall in the range \(C_{k}\in[0.012,\,0.017]\). Within this range, variations in the predicted physical quantities are very small and fall within the confidence threshold (i.e. the values of the matrix \(R\)) provided for this study. The results for the run DA-LES2 are shown in Fig. 8b, where the profile of \(C_{k}\) and each function of the Gaussian spatial distribution are shown. The final \(C_{k}\) profile, in red, is again significantly lower than the distribution used as _prior_. The values range from \(0.008\) in the core flow region to a maximum of around \(0.012\) reached close to the wall, around \(y^{+}=60\). In addition, the contributions of the modes of the Gaussian expansion to the augmented \(C_{k}\) profile appear to be very different. Two main modes govern the shape of \(C_{k}\). The first mode exhibits a slow, quasi-linear decrease moving from the centerline towards the wall, while the second one exhibits a maximum in the near-wall region (\(y^{\star}\approx 30\)). The magnitude of the other three modes is significantly lower, and they mainly smooth out the profile of \(C_{k}\). Despite the higher complexity of this strategy, one can see that the distribution of \(C_{k}\) in the \(y\) direction is quasi-constant with \(C_{k}\approx 0.01\), i.e. very similar to the global value obtained in the DA-LES1 run. The enhancement of the predictive capabilities via DA optimization is now assessed by comparing the statistical moments of the velocity field with the available DNS results as well as with the baseline LES.
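The truncated-Gaussian priors used for the coefficients of DA-LES2 can be generated, for instance, by simple rejection sampling:

```python
import numpy as np

rng = np.random.default_rng(3)

def truncated_normal(mu, sigma, size, n_sigma=2.0):
    """Draw from N(mu, sigma^2), rejecting samples outside
    mu +/- n_sigma*sigma, as done for the prior coefficients."""
    out = rng.normal(mu, sigma, size)
    bad = np.abs(out - mu) > n_sigma * sigma
    while bad.any():
        out[bad] = rng.normal(mu, sigma, bad.sum())
        bad = np.abs(out - mu) > n_sigma * sigma
    return out

# Priors for the 40 members of DA-LES2 (values quoted in the text)
a    = truncated_normal(-4.5, 0.3,  (40, 5))
sig  = truncated_normal(0.18, 0.04, (40, 5))
y_pk = truncated_normal(0.15, 0.05, (40, 5))
```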
First of all, the prediction of the friction velocity \(u_{\tau}\), which is one of the key features of this test case, is significantly improved for all DA runs. In fact, the targeted DNS friction velocity is \(u_{\tau}=0.048\), while the baseline LES simulation predicts \(u_{\tau}=0.0614\), an over-prediction of \(28\%\). DA-LES1 and DA-LES2 predict almost the same friction velocity, with \(u_{\tau}^{DL1}=0.04617\) and \(u_{\tau}^{DL2}=0.04619\). In this case, the friction velocity is under-predicted when compared with the DNS, but the discrepancy is only \(4\%\). This increase in accuracy comes with a significant reduction of the subgrid-scale viscosity in the near-wall region, which does not scale correctly for the Smagorinsky LES. Considering that the values obtained for \(C_{k}\) in the near-wall region with the two DA procedures are almost identical, it is not surprising to observe minimal variations in the prediction of \(u_{\tau}\). Similar conclusions can be drawn from the analysis of Fig. 9, where the normalized mean velocity profile \(u^{\star}\) against \(y^{\star}\) is shown. Averages for the DA procedures are performed so that \(u^{\star}=\langle\overline{u_{x}}\rangle/u_{\tau}\), where \(\langle\cdot\rangle\) is the ensemble-average operator and \(\overline{\cdot}\) is the time-average operator. The results obtained via the two DA procedures show a global improvement in the prediction of the velocity field, reducing on average the discrepancy with the DNS data. This observation is a direct consequence of the improved prediction of \(u_{\tau}\). The apparently more accurate behavior of the baseline LES close to the center of the channel is actually a compensation of errors between the local numerical error and the erroneous prediction of \(u_{\tau}\), which can be observed in Fig. 9a. In fact, with a more accurate prediction of \(u_{\tau}\), the baseline LES would almost exactly collapse on the results obtained by DA. Minor discrepancies can be observed between the runs DA-LES1 and DA-LES2, which are arguably associated with the rate of convergence of the EnKF using 40 ensemble members. The normalized components of the resolved Reynolds stress tensor, \(u^{\prime}_{i}{u^{\prime}_{j}}^{\star}=\langle\overline{u^{\prime}_{i}u^{\prime}_{j}}\rangle/u_{\tau}^{2}\) for the DA runs, are shown in Fig. 10.

Figure 8: Profiles of the five Gaussian functions used to represent \(C_{k}\) in blue. Distribution of \(C_{k}\) in red. Left: prior distribution. Right: optimized distribution

Figure 9: Half channel velocity profiles for DA-LES1 (\(\blacktriangleright\)), DA-LES2 (\(+\)), baseline LES (\(\star\)) and the DNS (\(-\)). Normalization is performed using the friction velocity \(u_{\tau}\) obtained in the DNS run.

Figure 10: Components of the Reynolds stress tensor for DA-LES1 (\(\blacktriangleright\)), DA-LES2 (\(+\)), baseline LES (\(\star\)) and the DNS (\(-\)). Normalization is performed using the friction velocity \(u_{\tau}\) obtained in the DNS run.

A global improvement in the accuracy of the prediction of such quantities is observed. For all the components, the location of the peak is accurately predicted. The magnitude of the components also exhibits a general improvement, which is however dependent on the component considered. In fact, while a very good agreement with DNS data is observed for \(u^{\prime}_{z}{u^{\prime}_{z}}^{\star}\), a slight decrease in accuracy is instead obtained for \(u^{\prime}_{y}{u^{\prime}_{y}}^{\star}\).
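The normalized resolved stresses of Fig. 10 follow from time and homogeneous-direction averaging of the snapshots; a sketch is given below, with the array layout (n_t, 3, n_x, n_y, n_z) being our assumption (an additional ensemble average can be applied on top for the DA runs).

```python
import numpy as np

def resolved_stress(u_snapshots, u_tau):
    """Resolved Reynolds stress <u'_x u'_y>(y) / u_tau^2 from a series
    of snapshots shaped (n_t, 3, n_x, n_y, n_z); averages are taken
    over time and the homogeneous directions x and z, as in Fig. 6."""
    mean = u_snapshots.mean(axis=(0, 2, 4), keepdims=True)  # (1,3,1,n_y,1)
    fluct = u_snapshots - mean
    uv = (fluct[:, 0] * fluct[:, 1]).mean(axis=(0, 1, 3))   # (n_y,)
    return uv / u_tau**2
```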
The almost identical results obtained with the two runs DA-LES1 and DA-LES2 suggest that the variations of \(C_{k}\) in the \(y\) direction for the latter do not affect the flow prediction. One could argue that the present optimization reached the best performance obtainable with Smagorinsky LES, whose subgrid-scale representation is affected by strong, intrinsic limitations [29]. Another possibility is that the combination of prior state and inflation employed in the present analysis for the model coefficients was not sufficient to perform a complete exploration of the parametric space, and the final solution for DA-LES2 was drawn towards the same locally optimized state obtained for DA-LES1. The analysis of the physical quantities normalized by the \(u_{\tau}\) calculated by each simulation (suffix \(+\)) leads to similar conclusions. The mean streamwise velocity profiles, which are shown in Fig. 11, confirm the global lack of accuracy of the baseline simulation, which is now even more magnified by the significant error in the prediction of \(u_{\tau}\). The components of the resolved Reynolds stress tensor, which are reported in Fig. 12, also provide very similar indications. Finally, a spectral analysis of the velocity field \(\mathbf{u}\) is performed in Fig. 13. This flow variable has been sampled in time at four probes located at \(y^{\star}\in[1.45,46.74]\). Power spectra have been obtained using a Morlet transform [51] for the baseline LES, the run DA-LES2, and the DNS. The spectra are plotted over the dimensionless wave number \(\kappa^{+}=\kappa\nu/u_{\tau}\), with \(\kappa=2\pi f/u_{c}\), where \(f\) is the set of frequencies used for the Morlet transform. On the first line, data for the streamwise component \(u_{x}\) are shown at locations where observation is available and the data from that sensor is used in the DA analysis phase (indicated as U-DA in the legend). Comparing the baseline simulation and the DA run, one can see that the accuracy of the spectra has been improved for every \(y^{\star}\) investigated. The best result is observed in the proximity of the wall, as shown in Fig. 13a. For this location, the amplitude of the energy spectrum is improved by approximately one order of magnitude. The comparison of the spectra from the DNS and the run DA-LES2 also indicates an offset of the wavenumber at which the spectral density starts to decrease rapidly. This offset, which is around one octave, is very close to the ratio of the mesh resolutions in the streamwise direction (see Tab. 1). For the baseline LES, this drop in energy begins at lower wavenumbers. This observation can be justified by the discrepancy in the prediction of \(u_{\tau}\) (which is used to obtain \(\kappa^{+}\)) as well as by the Smagorinsky closure, which provides an unwanted dissipative effect at the large scales. Results on the second line, in Figs. 13d and 13e, are obtained at a location where a DNS sensor is available and used for DA analysis, but the information assimilated (streamwise velocity) is not the quantity investigated. More precisely, the power spectra for the spanwise and vertical components are shown. One can see that, similarly to what was observed for the spectra of the streamwise velocity, a global improvement is obtained for the DA-LES2 run. This result confirms the globally beneficial effect of DA over the complete flow field, and not just for the variables for which observation is available. The analysis is completed by the results in Fig. 13f, where the spectrum of the streamwise velocity sampled at a location not used in the DA analysis (N-DA) is shown.
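A self-contained sketch of a time-averaged Morlet power estimate, in the spirit of the analysis of Fig. 13; the wavelet parameter \(w\) and the normalization are our choices and are not necessarily those of [51].

```python
import numpy as np

def morlet_power(u, dt, freqs, w=6.0):
    """Time-averaged Morlet wavelet power of a velocity signal u
    sampled with time step dt, evaluated at the frequencies 'freqs'.
    Convert to kappa+ via kappa = 2*pi*f/u_c, kappa+ = kappa*nu/u_tau."""
    t = (np.arange(u.size) - u.size / 2) * dt
    power = np.empty(freqs.size)
    for k, f in enumerate(freqs):
        s = w / (2 * np.pi * f)                      # scale for frequency f
        psi = np.exp(1j * w * t / s) * np.exp(-(t / s) ** 2 / 2)
        psi /= np.sqrt(s)                            # simple scale normalization
        # correlation of the fluctuation with the conjugate wavelet
        coef = np.convolve(u - u.mean(), np.conj(psi[::-1]), mode="same") * dt
        power[k] = np.mean(np.abs(coef) ** 2)
    return power
```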
Again, one can see that the spectrum shows an improvement similar to what was observed at sensors actively used in the DA procedure, indicating that the optimization of the SGS closure is globally beneficial, in particular to reduce the dissipation of the resolved energy at large scales. In summary, on-the-fly DA using instantaneous measurements is able to improve the accuracy of LES via calibration of the SGS closure. An interesting point is that the present results are similar to the findings of Mons et al. [29], which were however obtained via observation of the physical quantities used to evaluate the performance of the LES simulations. In this case, the optimization process is more complex, because of the instantaneous nature of the observation as well as its sparsity in space and time. Thus, the present findings open perspectives for the real-time optimization of scale-resolving CFD using tools based on the EnKF, once the computational architectures are strong enough to do so.

Figure 12: Components of the Reynolds stress tensor for DA-LES1 (\(\blacktriangleright\)), DA-LES2 (\(+\)), baseline LES (\(\star\)) and the DNS (\(-\)).

However, similarly to what was observed by Mons et al. [29], the parametric optimization can mitigate but not eliminate the discrepancy between Smagorinsky LES and DNS, due to the intrinsic limitations of the structural form of the SGS model. While this problem is difficult to overcome, one can arguably consider that on-the-fly DA has a higher potential than offline EnKF approaches to determine, in real time, SGS model structural forms and corrections for a specific case. Lastly, both strategies used in this analysis indicate that the best accuracy is obtained for very low values of the model constant \(C_{k}\). Although the run DA-LES2 provides a more sophisticated spatial distribution of this parameter, the values are low enough to consider that the dynamic effect of the SGS closure becomes globally and locally minor, as shown by the profiles obtained for DA-LES1. These results are consistent with recent works presenting extensive comparisons between explicit and implicit SGS closures [52].

Figure 13: Power spectra of the velocity field obtained with a Morlet wavelet transform. Results are shown for DA-LES2 (\(-\)), baseline LES (\(-\)) and the DNS (\(-\)). For the latter, additional results obtained using a fast Fourier transform are shown in grey.

## V Synchronization of the flow field

The synchronization capabilities of the DA algorithm are now investigated. By synchronization, we mean the capability of the DA algorithm to progressively reduce the discrepancy between the instantaneous model solution and the observation, both in proximity to and far from the sensors. If successful, the only state corrections applied by the analysis phase are those compensating the error accumulated during the forecast step(s) because of the limited accuracy of the model. Even though synchronization is not necessary for the analysis of statistical moments, such as the ones investigated in Sec. IV, it has crucial importance for the analysis of the instantaneous features of unsteady flows. In fact, in a digital twin system, efficient synchronization enables the model to identify extreme events and thus prevent critical occurrences for the physical counterpart. Tools based on the EnKF can naturally act on the synchronization of the instantaneous flow.
Thanks to the flexibility in the choice of the observed quantity and to the local correlations captured between the physical variables, their efficiency in this task is supposedly higher than that of classical nudging. However, during the DA calculations, the variability of the ensemble tends to diminish relatively fast, potentially precluding an efficient synchronization. To avoid this issue, the hyperparameter known as inflation must be properly optimized. In this section, a number of DA runs are performed to study the effects of inflation on the rate of synchronization. In this case, the attention is focused on the very first analysis phases, and results are investigated over two advective times \(t_{A}\). The DA analyses are now performed every two time steps, i.e. every \(0.04t_{A}\), which corresponds to a total of 50 DA state updates over the time window of investigation. Such a high update frequency has been imposed to ensure that errors due to the sparsity in time of the data are negligible [53]. The state estimation is obtained via the flow prediction of 40 members, which are initialized using different velocity fields but share the same value of \(C_{k}\) for the SGS model obtained in the DA-LES2 procedure. The 40 velocity fields used as prior states have been generated by running a single simulation with the same optimized SGS model obtained in Sec. IV and sampling complete flow fields every \(10t_{A}\). The inflation is here applied only to the _state estimation_, via the stochastic approach described in Sec. II.2.2. The inflation applied to the parametric SGS description is here set to zero, in order to exclude effects due to different behaviors of the LES closure. In addition, the covariance matrix \(R\) is also the same for each DA run and it is set to \(R=\sigma_{m}^{2}I\) with \(\sigma_{m}=5\%\). More details about these two last hyperparameters are provided in appendix C, where parametric inflation is shown to have a negligible effect for the purpose of this analysis. The effectiveness of the synchronization is evaluated using the following information: * The velocity field obtained by the ensemble members is sampled in correspondence of three sensors, selected among the 10800 sensors previously used in the reference DNS. Details about the sensors are given in Tab. 2. Two of the probes are used in the DA algorithm, while the last one is not directly used; still, the data obtained for the latter can be used for comparison. * A global estimation of the normalized root mean square deviation (indicated as \(\Phi\)) of the velocity field is performed considering data from the 408 sensors used within the DA algorithm and from 408 sensors that were not used in the EnKF. The definition of \(\Phi\) at an instant \(k\) is: \[\Phi_{k}=\sqrt{\frac{\sum_{j=1}^{N_{o}}\left(\langle s_{j,k}\rangle-\alpha_{j, k}\right)^{2}}{N_{o}}}\,\Big/\,\alpha_{k}^{mean}\] (21) with \(\langle s_{k}\rangle=\frac{1}{N_{e}}\sum_{i=1}^{N_{e}}Hu_{i,k}\), \(\alpha_{k}^{mean}=\sum_{j=1}^{N_{o}}\alpha_{j,k}/N_{o}\) and \(N_{o}\) the number of observations. The evolution of the instantaneous streamwise velocity \(u_{x}\), normalized by the centerline average velocity \(u_{c}\), is shown in Fig. 14 for the three probes. The velocity sampled from the DNS, which is used as observation, is shown in blue. Data sampled at the same locations from the ensemble members of the DA procedure is shown in black; the black line corresponds to an ensemble average.
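Equation (21) translates directly into a few lines of NumPy:

```python
import numpy as np

def phi(S_ens, alpha):
    """Eq. (21): normalized RMS deviation at one instant between the
    ensemble mean of the predicted observations S_ens (N_o, N_e) and
    the reference samples alpha (N_o,)."""
    s_mean = S_ens.mean(axis=1)                   # ensemble average <s_k>
    rms = np.sqrt(np.mean((s_mean - alpha) ** 2))
    return rms / alpha.mean()                     # divide by alpha_k^mean
```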
Shaded areas visually represent the confidence level/variability of the data. More precisely, the blue area is connected with the values included in the covariance matrix \(R\), showing a band of thickness \(2\sigma_{m}\). On the other hand, the grey area represents the 95% confidence interval of the model representation. This quantity is driven by the distribution of the prior states at \(t_{A}=0\), and it is progressively affected by the inflation applied to the physical state as more analysis phases are performed. The three probes have been selected to highlight different features of the flow field. Probes 1 and 2 correspond to sensors used in the DA procedures, but they are located at different distances from the wall (\(y^{\star}=1.45\) and \(y^{\star}=19.84\), respectively). On the other hand, probe 3 is located at \(y^{\star}=26.85\) and the corresponding sensor is not used in the DA analyses. One can see in the first line of Fig. 14 that, if no _state_ inflation is used, the initial model variability due to the prior states collapses very rapidly, with a drastic shrinking of the grey area. The grey and blue areas exhibit a very limited superposition, which prevents the model realizations from synchronizing with the observation. In the second, third, and fourth lines of Fig. 14, progressively more state inflation is used during the analysis phases. One can distinctly see an increase in the grey area associated with the model variability, which no longer decreases at larger simulation times. The analysis of the results for probes 1 and 2 clearly indicates that a threshold level of 15% Gaussian state inflation appears to be enough to obtain a convincing synchronization of the velocity field in correspondence with the sensors. This threshold could potentially be even lower if more sophisticated algorithms for state inflation were used. Significant improvements with increasing inflation are observed as well for probe 3, even if the synchronization is not complete there. Therefore, these results confirm that the effect of the EnKF is not just local: thanks to the scale interactions captured by the underlying LES model, a global improvement in the instantaneous flow prediction is obtained. One conclusion that can be drawn is that, once a significantly large superposition of the confidence areas is maintained for a sufficiently long time, a good synchronization is obtained. Similar behavior was also observed by Tandeo et al. [54], but for a one-dimensional model. The normalized root mean square deviation defined in equation 21 is now used to provide a global assessment of the capability of the DA algorithm to synchronize the LES model with the available DNS data. Results are shown in Fig. 15 for the four cases previously analyzed, i.e. 0%, 5%, 15%, and 25% state inflation. The red line corresponds to a limit \(\Phi_{lim}\) calculated by comparing the values of the non-inflated simulations against 1000 observation times. Therefore, for an infinite number of observations and locations, \(\Phi_{lim}\) for Fig. 15 (a) and (b) should be the same. It is here different due to the limited number of probes and the heterogeneous set of probe coordinates. Results in Fig. 15 (a) correspond to the average discrepancy observed over the 408 locations where sensors are used for DA. The DA runs perform significantly better in the first stages, thanks to the variability initially provided by the choice of the prior states.
However, results tend to degrade rather rapidly for the DA run without state inflation, which can be expected to show errors very similar to \(\Phi_{lim}\) after a sufficiently long time. On the other hand, the three DA experiments with non-zero state inflation behave very similarly. Their magnitude is significantly smaller than \(\Phi_{lim}\) and does not appear to deteriorate in the time window analyzed. Fig. 15 (b) shows the results for the normalized root mean square deviation in correspondence of sensors where DNS data is available but not used in the DA procedure.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Probe ID** & \(\mathbf{x/h}\) & \(\mathbf{y/h}\) & \(\mathbf{z/h}\) & \(\mathbf{y^{\star}}\) & **Used in DA analysis** \\ \hline 1 & 7.372 & 0.0027 & 2.349 & 1.45 & yes \\ \hline 2 & 3.967 & 1.9631 & 1.233 & 19.84 & yes \\ \hline 3 & 7.464 & 1.9501 & 1.748 & 26.85 & no \\ \hline \end{tabular} \end{table} Table 2: Details about the sensors used to study synchronization.

Results are qualitatively similar to what was previously discussed for Fig. 15 (a), even if, in this case, the results for the DA runs are closer to \(\Phi_{lim}\). This observation is due to the lack of a correct representation of the correlation between variables, which is a consequence of the limited number of ensemble members (sampling error). In this case, results seem to be more sensitive to the value of the state inflation, as very strong inflation appears to perform worse than moderate state inflation. One could expect in this case that the perturbations might be strong enough to introduce an unwanted noise effect on the flow prediction, degrading the global accuracy.

Figure 14: Synchronization of the velocity field. Data in blue represents the observation, while black lines indicate the model prediction. Results are shown for (left column) Probe 1, (center column) Probe 2, and (right column) Probe 3. The magnitude of the state inflation is set to (first row) 0%, (second row) 5%, (third row) 15% and (fourth row) 25%.

Another potential issue with hyperparameters, which is not studied in the present work, is associated with the characteristic length used for covariance localization. If the selected length is large, spurious correlations may appear because of sampling errors. On the other hand, a short length may preclude an accurate representation of the correlation between the variables, acting as a filter over the multi-scale non-local interactions observed in turbulent flows. In summary, the analysis of the global quantity \(\Phi\) stresses how important DA can be in providing a successful instantaneous state estimation, which could be even more valuable than an accurate parametric optimization for the prediction of rare extreme events and for the optimization of unsteady flows whose features exhibit a strong time evolution.

## VI Conclusion

An online DA strategy based on state-of-the-art techniques for the Ensemble Kalman Filter has been used to improve the predictive capabilities of Large-Eddy Simulation. The attention of the work is mostly devoted to the correct representation of instantaneous features, which can be essential to predict and anticipate extreme events affecting industrial applications. The analysis relies on an on-the-fly coupling, performed via the platform CONES, between LES runs using OpenFOAM and localized instantaneous high-fidelity information obtained from a DNS.
First, the DA runs used instantaneous values of the velocity field to optimize the parametric behavior of the Smagorinsky model used for subgrid closure. Two strategies have been proposed to obtain an optimized value of the model constant \(C_{k}\). Despite the difference in complexity, both strategies provide a similar result, namely a significant reduction of the intensity of \(C_{k}\) and hence of the SGS model. These conclusions support recent discussions in the LES community about the usage of explicit and implicit SGS modeling [52]. This optimization reduces the discrepancy of the statistical moments of the flow field with DNS data, but it does not eliminate it, as observed by Mons et al. [29]. The reason behind this observation is associated with the structural limitations of the Smagorinsky model, whose intrinsically dissipative nature is not able to fully take into account the effects of the filtered scales and their interactions with the resolved flow field.

Figure 15: Plots of \(\Phi\): from green to violet - 0%, 5%, 15% and 25% inflation. The red curve shows the average limit obtained by comparing the values of the DA-LES simulations against 1000 observation times.

The DA model has then been used to analyze the efficiency in flow reconstruction and synchronization with the sparse high-fidelity data available. It was shown that DA is able to significantly improve the correlation between model results and observation, but the efficiency of such synchronization is governed by the state inflation applied. This hyperparameter is an essential feature of the DA algorithm which deserves more specific studies in the future. Similarly, the effects of physical and covariance localization, whose sensitivity was not investigated in the present analysis, will be extensively studied in future research on online DA strategies.

###### Acknowledgements.

Our research activities are supported by funding from the French Agence Nationale de la Recherche (ANR) through the project PRC 2020 ALEKCIA.

## Appendix A Use of multiple physical quantities for each sensor in the DA procedure

In order to test the sensitivity of the DA algorithm to the use of multiple physical quantities at each sensor, an additional DA run has been performed. This test, referred to as DA-LESA, is almost identical to DA-LES1. The only difference is that, for each sensor, the three components of the velocity field are here provided. We recall that for the runs DA-LES1 and DA-LES2 only the streamwise component of the velocity field was used as observation in the analysis phase. Therefore, for DA-LESA, the observation matrix is composed of \(408\times 3=1224\) values at each analysis phase. The covariance matrix of the measurement error is expressed as \(R=\sigma_{m}^{2}I\), where \(\sigma_{m}\) quantifies the uncertainty of the measurements. In this case, \(\sigma_{m}\) is the same for every sensor and it is calculated accounting for a 5% uncertainty over the maximum velocity observed in the DNS, to mimic the accuracy of experimental measurements. Therefore, \(\sigma_{m}\approx 0.045\) in this case. This choice implies that the confidence in the DNS results is lower approaching the wall. This decision is beneficial for a robust behavior of the EnKF, because large discrepancies between DNS and LES can be observed very close to the wall. The results of the optimization are similar to those of the other DA runs, indicating that the EnKF procedure is robust.
The optimized value of the model constant is \(C_{k}\approx 0.025\), which is 3.7 times smaller than the baseline LES value and corresponds to \(C_{S}\approx 0.06\). During the DA run, the values exhibit oscillations in the range \(C_{k}\in[0.020,\,0.030]\). DA-LESA also shows a clear improvement in the prediction of the friction velocity, with \(u_{\tau}=0.052\), i.e. an over-prediction of 8.3%, compared with the 28% of the baseline LES. The normalized mean velocity \(u^{+}\) over \(y^{+}\) and the normalized resolved shear stress of DA-LES1 and DA-LESA are shown in Fig. 16. Other statistical moments of the velocity field are not shown here for the sake of conciseness, as they provide similar information. Differences between the DA runs in the prediction of the statistical moments are noticeable and mainly associated with the different prediction of the friction velocity, which is less accurate for DA-LESA. One possible reason is associated with the level of confidence in the observation, which was set at the same level for the three components of the velocity field. In the near-wall region, the streamwise component is around one order of magnitude larger than the other two components, and the uncertainties propagated in the observation vector act as a random noise on \(u_{y}\) and \(u_{z}\). The problem of determining an optimized hyperparametric description of the confidence level of the observation, which degraded the global accuracy of the DA run in this case, deserves future investigation when such a quantity is not directly quantifiable.

## Appendix B Computational resources required to perform the DA run

The computational resources required to perform the DA runs are now discussed. Tab. 3 shows information about preliminary tests performed by varying a number of key parameters, such as the number of mesh elements used for the LES model \(N\), the number of sensors/observations \(N_{o}\), and the size of the ensemble \(N_{e}\). In particular, the values investigated for \(N\) (\(350\,000\) and \(154\,000\)) correspond to the numbers of mesh elements of the complete and clipped physical domains used in the present work. Comparing the completion time between rows 1 and 2 of Tab. 3, one can see that the reduction of the degrees of freedom of the model is beneficial in terms of computational cost, dividing the completion time by \(2.25\). However, the most important parameter is the number of observations. The comparison of rows 2, 4, and 5 shows a dramatic reduction of the computational resources required with fewer sensors. This point stresses the importance of the quality of the observations used in the DA rather than their quantity, as previously shown in [36]. At last, one can see from the comparison of rows 2 and 3, where different numbers of ensemble members \(N_{e}\) are used, that this parameter has a lower impact on the computational cost than the previous parameters of investigation.
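A rough, implementation-agnostic operation count for one analysis phase can be used to compare orders of magnitude with Tab. 3; the absolute constants depend on the implementation, so only trends should be read into the output.

```python
def enkf_flops(n, n_o, n_e):
    """Rough floating-point operation count for one analysis phase
    (matrix products of eq. (12) plus the member update); the O(N_o^3)
    solve and any localization overhead are implementation-dependent."""
    gain   = 2 * n * n_e * n_o       # Gamma @ S^T
    inner  = 2 * n_o**2 * n_e        # S @ S^T
    solve  = 2 * n_o**3 / 3          # factorization of the N_o x N_o system
    update = 2 * n * n_o * n_e       # K @ (A - S) for all members
    return gain + inner + solve + update

# Clipped domain, 408 sensors, 40 members (row 4 of Tab. 3):
print(f"{enkf_flops(154_000 * 3, 408, 40):.1e}")
# -> ~3e+10, the same order of magnitude as the O(8.4e10) of Tab. 3
```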
Figure 16: Half channel velocity profiles for the DA runs DA-LES1 (\(\blacktriangleright\)), DA-LESA (\(+\)), the baseline Smagorinsky LES (\(\ast\)) and the DNS (\(\times\))

\begin{table} \begin{tabular}{c c c c c c} \hline \hline \(N\) & \(N_{o}\) & \(N_{e}\) & \begin{tabular}{c} Operations \\ Complexity \\ \end{tabular} & \begin{tabular}{c} Completion \\ time (s) \\ \end{tabular} & Operations/s \\ \hline \(350,000\times 3\) & \(1224\) & \(40\) & \(\mathcal{O}(1.6\times 10^{12})\) & \(161\) & \(10^{10}\) \\ \(154,000\times 3\) & \(1224\) & \(40\) & \(\mathcal{O}(7.2\times 10^{11})\) & \(71.5\) & \(10^{10}\) \\ \(154,000\times 3\) & \(1224\) & \(10\) & \(\mathcal{O}(7\times 10^{11})\) & \(54\) & \(1.3\times 10^{10}\) \\ \(154,000\times 3\) & \(408\) & \(40\) & \(\mathcal{O}(8.4\times 10^{10})\) & \(8.9\) & \(9.5\times 10^{09}\) \\ \(154,000\times 3\) & \(228\) & \(40\) & \(\mathcal{O}(2.8\times 10^{10})\) & \(3.4\) & \(8.4\times 10^{09}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Summary of the tests performed to evaluate the computational costs required by the EnKF

## Appendix C Supplementary details about synchronization

The sensitivity of the synchronization to the inflation of the parametric description of the model and to the variance of the observation is discussed here. Fig. 17a shows the normalized root mean square deviation for different combinations of parameter inflation and state inflation. Three levels of parameter inflation are used, from light to dark color: 0%, 2%, and 5%. State inflation is set to 0% in blue, 5% in green, and 15% in orange. Inflation of the model parameters appears to have a negligible effect on the synchronization obtained via DA when compared with state inflation. Fig. 17b shows four levels of the prescribed variance of the observation, for 5% inflation of the state. Again, the synchronization does not seem to be affected by the levels of confidence in the observations tested here, which are in the range of recommendations for a robust application of the EnKF. Very low or very high confidence in the observation can lead to poor synchronization as well as to inaccurate parametric optimization, as shown by Tandeo et al. [54].

Figure 17: \(\Phi\) of the 408 probes used in the DA analysis, using different levels of parameter inflation (left), from light to dark color: 0%, 2%, and 5% inflation, and observation confidence (right) - from green to violet: 0.5%, 1%, 5%, and 10% confidence
2305.13493
Cooperative Channel Capacity Learning
In this paper, the problem of determining the capacity of a communication channel is formulated as a cooperative game, between a generator and a discriminator, that is solved via deep learning techniques. The task of the generator is to produce channel input samples for which the discriminator ideally distinguishes conditional from unconditional channel output samples. The learning approach, referred to as cooperative channel capacity learning (CORTICAL), provides both the optimal input signal distribution and the channel capacity estimate. Numerical results demonstrate that the proposed framework learns the capacity-achieving input distribution under challenging non-Shannon settings.
Nunzio A. Letizia, Andrea M. Tonello, H. Vincent Poor
2023-05-22T21:22:22Z
http://arxiv.org/abs/2305.13493v1
# Cooperative Channel Capacity Learning ###### Abstract In this paper, the problem of determining the capacity of a communication channel is formulated as a cooperative game, between a generator and a discriminator, that is solved via deep learning techniques. The task of the generator is to produce channel input samples for which the discriminator ideally distinguishes conditional from unconditional channel output samples. The learning approach, referred to as cooperative channel capacity learning (CORTICAL), provides both the optimal input signal distribution and the channel capacity estimate. Numerical results demonstrate that the proposed framework learns the capacity-achieving input distribution under challenging non-Shannon settings. channel capacity, capacity-achieving distribution, deep learning, capacity learning. ## I Introduction Data-driven models for physical layer communications [1] have recently seen a surge of interest. While the benefit of a deep learning (DL) approach is evident for tasks like signal detection [2], decoding [3] and channel estimation [4, 5], the fundamental problem of estimating the capacity of a channel remains elusive. Despite recent attempts via neural mutual information estimation [6, 7, 8], it is not clear yet whether DL can provide novel insights. For a discrete-time continuous memoryless vector channel, the capacity is defined as \[C=\max_{p_{X}(\mathbf{x})}I(X;Y), \tag{1}\] where \(p_{X}(\mathbf{x})\) is the input signal probability density function (PDF), \(X\) and \(Y\) are the channel input and output random vectors, respectively, and \(I(X;Y)\) is the mutual information (MI) between \(X\) and \(Y\). The channel capacity problem involves both determining the capacity-achieving distribution and evaluating the maximum achievable rate. Only a few special cases, e.g., additive noise channels with specific noise distributions under input power constraints, have been solved so far. When the channel is not an additive noise channel, analytical approaches become mostly intractable, leading to numerical solutions, relaxations [9], capacity lower and upper bounds [10], and considerations on the support of the capacity-achieving distribution [11, 12]. It is known that the capacity of a discrete memoryless channel can be computed using the Blahut-Arimoto (BA) algorithm [13], whereas a particle-based BA method was proposed in [14] to tackle the continuous case, although it fails to scale to high-dimensional vector channels. In the following section, we show that a data-driven approach can be pursued to obtain the channel capacity. In particular, we propose to learn the capacity and the capacity-achieving distribution of any discrete-time continuous memoryless vector channel via a cooperative framework referred to as CORTICAL. The framework is inspired by generative adversarial networks (GANs) [15] but it can be interpreted as a dual version using an appropriately defined value function. In fact, CORTICAL comprises two blocks cooperating with each other: a generator that learns to sample from the capacity-achieving distribution, and a discriminator that learns to differentiate paired channel input-output samples from unpaired ones, i.e., it distinguishes the joint PDF \(p_{XY}(\mathbf{x},\mathbf{y})\) from the product of marginals \(p_{X}(\mathbf{x})p_{Y}(\mathbf{y})\), so as to estimate the MI. The paper is organized as follows: Sec. II describes the main idea and contribution. The parametric implementation and training algorithm are discussed in Sec. III. Sec.
IV presents numerical results for different types of channels. Sec. V concludes the paper. ## II Cooperative Principle for Capacity Learning To understand the working principle of CORTICAL and why it can be interpreted as a cooperative approach, it is useful to briefly review how GANs operate. In the GAN framework, the adversarial training procedure for the generator \(G\) consists of maximizing the probability of the discriminator \(D\) making a mistake. If \(\mathbf{x}\sim p_{\text{data}}(\mathbf{x})\) are the data samples and \(\mathbf{z}\sim p_{Z}(\mathbf{z})\) are the latent samples, the Nash equilibrium is reached when the minimization over \(G\) and the maximization over \(D\) of the value function \[\mathcal{V}(G,D)=\mathbb{E}_{\mathbf{x}\sim p_{\text{data}}(\mathbf{x})}\bigg{[}\log\bigl{(}D(\mathbf{x})\bigr{)}\bigg{]}+\mathbb{E}_{\mathbf{z}\sim p_{Z}(\mathbf{z})}\bigg{[}\log\bigl{(}1-D\bigl{(}G(\mathbf{z})\bigr{)}\bigr{)}\bigg{]}, \tag{2}\] is attained so that \(G(\mathbf{z})\sim p_{\text{data}}(\mathbf{x})\). Concisely, the minimization over \(G\) forces the generator to implicitly learn the distribution that minimizes the given statistical distance. In contrast to GANs, the channel capacity estimation problem requires the generator to learn the distribution maximizing the mutual information measure. Therefore, the generator and discriminator need to play a cooperative max-max game with the objective for \(G\) to produce channel input samples for which \(D\) exhibits the best performance in distinguishing (in the Kullback-Leibler sense) paired and unpaired channel input-output samples. Thus, the discriminator of CORTICAL is fed with both the generator and channel's output samples, \(\mathbf{x}\) and \(\mathbf{y}\), respectively. Fig. 1 illustrates the proposed cooperative framework, which learns both the capacity-achieving distribution and the channel capacity through the value function used for the cooperative training, as discussed next. **Theorem 1**.: _Let \(X\sim p_{X}(\mathbf{x})\) and \(Y|X\sim p_{Y|X}(\mathbf{y}|\mathbf{x})\) be the vector channel input and conditional output, respectively. Let \(Y=H(X)\) with \(H(\cdot)\) being the stochastic channel model and let \(\pi(\cdot)\) be a permutation function 1 over the realizations of \(Y\) such that \(p_{\pi(Y)}(\pi(\mathbf{y})|\mathbf{x})=p_{Y}(\mathbf{y})\). In addition, let \(G\) and \(D\) be two functions in the non-parametric limit such that \(X=G(Z)\) with \(Z\sim p_{Z}(\mathbf{z})\) being a latent random vector. If \(\mathcal{J}_{\alpha}(G,D)\), \(\alpha>0\), is the value function defined as_ Footnote 1: The permutation takes place over the temporal realizations of the vector \(Y\) so that \(\pi(Y)\) and \(X\) can be considered statistically independent vectors. \[\mathcal{J}_{\alpha}(G,D)=\alpha\cdot\mathbb{E}_{\mathbf{z}\sim p_{Z}(\mathbf{z})}\biggl{[}\log\biggl{(}D\biggl{(}G(\mathbf{z}),H(G(\mathbf{z}))\biggr{)}\biggr{)}\biggr{]}-\mathbb{E}_{\mathbf{z}\sim p_{Z}(\mathbf{z})}\biggl{[}D\biggl{(}G(\mathbf{z}),\pi(H(G(\mathbf{z})))\biggr{)}\biggr{]}, \tag{3}\] _then the channel capacity \(C\) is the solution of_ \[C=\max_{G}\max_{D}\frac{\mathcal{J}_{\alpha}(G,D)}{\alpha}+1-\log(\alpha), \tag{4}\] _and \(\mathbf{x}=G^{*}(\mathbf{z})\) are samples from the capacity-achieving distribution, where_ \[G^{*}=\arg\max_{G}\max_{D}\mathcal{J}_{\alpha}(G,D).
\tag{5}\] Proof.: The value function in (3) can be written as \[\mathcal{J}_{\alpha}(G,D)=\alpha\cdot\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim p_{XY}(\mathbf{x},\mathbf{y})}\biggl{[}\log\biggl{(}D(\mathbf{x},\mathbf{y})\biggr{)}\biggr{]}-\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim p_{X}(\mathbf{x})p_{Y}(\mathbf{y})}\biggl{[}D(\mathbf{x},\mathbf{y})\biggr{]}. \tag{6}\] Given a generator \(G\) that maps the latent (noise) vector \(Z\) into \(X\), we first need to prove that \(\mathcal{J}_{\alpha}(G,D)\) is maximized for \[D(\mathbf{x},\mathbf{y})=D^{*}(\mathbf{x},\mathbf{y})=\alpha\frac{p_{XY}(\mathbf{x},\mathbf{y})}{p_{X}(\mathbf{x})p_{Y}(\mathbf{y})}. \tag{7}\] Using the Lebesgue integral to compute the expectation, \[\mathcal{J}_{\alpha}(G,D)=\int_{\mathbf{y}}\int_{\mathbf{x}}\biggl{[}\alpha\,p_{XY}(\mathbf{x},\mathbf{y})\log\biggl{(}D(\mathbf{x},\mathbf{y})\biggr{)}-p_{X}(\mathbf{x})p_{Y}(\mathbf{y})D(\mathbf{x},\mathbf{y})\biggr{]}\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{y}, \tag{8}\] taking the derivative of the integrand with respect to \(D\) and setting it to \(0\) yields the following equation in \(D\): \[\alpha\frac{p_{XY}(\mathbf{x},\mathbf{y})}{D(\mathbf{x},\mathbf{y})}-p_{X}(\mathbf{x})p_{Y}(\mathbf{y})=0, \tag{9}\] whose solution is the optimum discriminator \[D^{*}(\mathbf{x},\mathbf{y})=\alpha\frac{p_{XY}(\mathbf{x},\mathbf{y})}{p_{X}(\mathbf{x})p_{Y}(\mathbf{y})}. \tag{10}\] In particular, \(\mathcal{J}_{\alpha}(D^{*})\) is a maximum since the second derivative of the integrand, \(-\alpha\frac{p_{XY}(\mathbf{x},\mathbf{y})}{D^{2}(\mathbf{x},\mathbf{y})}\), is a non-positive function. Therefore, substituting \(D^{*}(\mathbf{x},\mathbf{y})\) in (8) yields \[\mathcal{J}_{\alpha}(G,D^{*})=\int_{\mathbf{y}}\int_{\mathbf{x}}\bigg{[}\alpha\,p_{XY}(\mathbf{x},\mathbf{y})\log\biggl{(}\alpha\frac{p_{XY}(\mathbf{x},\mathbf{y})}{p_{X}(\mathbf{x})p_{Y}(\mathbf{y})}\biggr{)}-p_{X}(\mathbf{x})p_{Y}(\mathbf{y})\biggl{(}\alpha\frac{p_{XY}(\mathbf{x},\mathbf{y})}{p_{X}(\mathbf{x})p_{Y}(\mathbf{y})}\biggr{)}\bigg{]}\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{y}. \tag{11}\] Now, it is simple to recognize that the second term on the right-hand side of the above equation is equal to \(-\alpha\). Thus, \[\mathcal{J}_{\alpha}(G,D^{*})=\alpha\cdot\mathbb{E}_{(\mathbf{x},\mathbf{y})\sim p_{XY}(\mathbf{x},\mathbf{y})}\biggl{[}\log\biggl{(}\frac{p_{XY}(\mathbf{x},\mathbf{y})}{p_{X}(\mathbf{x})p_{Y}(\mathbf{y})}\biggr{)}\biggr{]}+\alpha\log(\alpha)-\alpha=\alpha\bigl{(}I(X;Y)+\log(\alpha)-1\bigr{)}. \tag{12}\] Finally, the maximization over the generator \(G\) results in the mutual information maximization since \(\alpha\) is a positive constant. We therefore obtain \[\max_{G}\mathcal{J}_{\alpha}(G,D^{*})+\alpha-\alpha\log(\alpha)=\alpha\max_{p_{X}(\mathbf{x})}I(X;Y). \tag{13}\] Theorem 1 states that at the equilibrium, the generator of CORTICAL has implicitly learned the capacity-achieving distribution with the mapping \(\mathbf{x}=G^{*}(\mathbf{z})\). In contrast to the BA algorithm, here the generator samples directly from the optimal input distribution \(p_{X}(\mathbf{x})\) rather than explicitly modelling it. No assumptions on the input distribution's nature are made. Moreover, we have access to the channel capacity directly from the value function used for training as follows: \[C=\frac{\mathcal{J}_{\alpha}(G^{*},D^{*})}{\alpha}+1-\log(\alpha).
\tag{14}\] In the following, we propose to parametrize \(G\) and \(D\) with neural networks and we explain how to train CORTICAL. ## III Parametric Implementation It can be shown (see Sec. 4.2 in [15]) that the alternating training strategy described in Alg. 1 converges to the optimal \(G\) and \(D\) under the assumption of having enough capacity and training time. Practically, instead of optimizing over the space of functions \(G\) and \(D\), it is reasonable to model both the generator and the discriminator with neural networks \((G,D)=(G_{\theta_{G}},D_{\theta_{D}})\) and optimize over their parameters \(\theta_{G}\) and \(\theta_{D}\) (see Alg. 1). We consider the distribution of the source \(p_{Z}(\mathbf{z})\) to be a multivariate normal distribution with independent components. The function \(\pi(\cdot)\) implements a random derangement of the batch \(\mathbf{y}\) and it is used to obtain unpaired samples. Fig. 1: CORTICAL, Cooperative framework for capacity learning: a generator produces input samples with distribution \(p_{X}(\mathbf{x})\) and a discriminator attempts to distinguish between paired and unpaired channel input-output samples. We use simple neural network architectures and we execute \(K=10\) discriminator training steps every generator training step. Details of the architecture and implementation are reported in GitHub [16]. It should be noted that in Th. 1, \(p_{X}(\mathbf{x})\) can be subject to certain constraints, e.g., peak and/or average power. Such constraints are met by the design of \(G\), for instance using a batch normalization layer. Alternatively, we can impose peak and/or average power constraints in the estimation of capacity by adding regularization terms (hinge loss) in the generator part of the value function (3), as in constrained optimization problems, e.g., Sec. III of [17]. Specifically, the value function becomes \[\mathcal{J}_{\alpha}(G,D)=\alpha\cdot\mathbb{E}_{\mathbf{z}\sim p_{Z}(\mathbf{z})}\bigg{[}\log\bigg{(}D\bigg{(}G(\mathbf{z}),H(G(\mathbf{z}))\bigg{)}\bigg{)}\bigg{]}-\mathbb{E}_{\mathbf{z}\sim p_{Z}(\mathbf{z})}\bigg{[}D\bigg{(}G(\mathbf{z}),\pi(H(G(\mathbf{z})))\bigg{)}\bigg{]}-\lambda_{A}\max(||G(\mathbf{z})||_{2}^{2}-A^{2},0)-\lambda_{P}\max(\mathbb{E}[||G(\mathbf{z})||_{2}^{2}]-P,0), \tag{15}\] with \(\lambda_{A}\) and \(\lambda_{P}\) equal to \(0\) or \(1\). ## IV Numerical Results To demonstrate the ability of CORTICAL to learn the optimal channel input distribution, we evaluate its performance in three non-standard scenarios: 1) the additive white Gaussian noise (AWGN) channel subject to a peak-power constrained input; 2) an additive non-Gaussian noise channel subject to two different input power constraints; and 3) the Rayleigh fading channel known at both the transmitter and the receiver subject to an average power constraint. These are among the few scenarios for which analysis has been carried out in the literature and thus offer a baseline to benchmark CORTICAL. For both the first and third scenarios, it is known that the capacity-achieving distribution is discrete with a finite set of mass points [11, 12, 18]. For the second scenario, the nature of the input distribution depends on the type of input power constraint [19]. Additional results including the study of the classical AWGN channel with average power constraint are reported in GitHub [16].
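To make the alternating optimization of Alg. 1 concrete, the following is a minimal PyTorch rendition of the training loop; it relies on several assumptions of ours: a scalar AWGN channel stands in for \(H(\cdot)\), toy network sizes are used, and a random batch permutation approximates the derangement \(\pi(\cdot)\). The actual architectures and hyper-parameters are those reported in GitHub [16].

```python
import math
import torch
import torch.nn as nn

def channel(x):                      # stand-in H(.): scalar AWGN channel
    return x + torch.randn_like(x)

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Softplus())  # D > 0
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
alpha, K, B = 1.0, 10, 512           # value-function constant, D steps per G step, batch

def J_alpha(x):                      # empirical estimate of the value function (3)
    y = channel(x)
    y_pi = y[torch.randperm(y.shape[0])]   # random permutation approximating pi(.)
    paired = D(torch.cat([x, y], dim=1))
    unpaired = D(torch.cat([x, y_pi], dim=1))
    return alpha * torch.log(paired + 1e-8).mean() - unpaired.mean()

for step in range(5000):
    for _ in range(K):               # K discriminator updates per generator update
        x = G(torch.randn(B, 2)).detach()
        opt_D.zero_grad(); (-J_alpha(x)).backward(); opt_D.step()
    x = G(torch.randn(B, 2))
    opt_G.zero_grad(); (-J_alpha(x)).backward(); opt_G.step()

with torch.no_grad():                # capacity estimate in nats, via Eq. (14)
    C = J_alpha(G(torch.randn(10_000, 2))).item() / alpha + 1 - math.log(alpha)
```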
### _Peak Power-Limited Gaussian Channels_ The capacity of the discrete-time memoryless vector Gaussian noise channel with unit noise variance under peak-power constraints on the input is defined as [20] \[C(A)=\sup_{p_{X}(\mathbf{x}):||X||_{2}\leq A}I(X;Y), \tag{16}\] where \(p_{X}(\mathbf{x})\) is the channel input PDF and \(A^{2}\) is the upper bound for the input signal peak power. The AWGN Shannon capacity constitutes a trivial upper bound for \(C(A)\), \[C(A)\leq\frac{d}{2}\log_{2}\bigg{(}1+\frac{A^{2}}{d}\bigg{)}, \tag{17}\] while a tighter upper bound for the scalar channel is provided in [10] \[C(A)\leq\min\bigg{\{}\log_{2}\bigg{(}1+\frac{2A}{\sqrt{2\pi e}}\bigg{)},\frac{1}{2}\log_{2}\big{(}1+A^{2}\big{)}\bigg{\}}. \tag{18}\] For convenience of comparison and as a proof of concept, we focus on the scalar \(d=1\) and \(d=2\) channels. The first findings on the capacity-achieving discrete distributions were reported in [11]. For a scalar channel, it was shown in [21] that the input has alphabet \(\{-A,A\}\) with equiprobable values if \(0<A\lessapprox 1.6\), while it has ternary alphabet \(\{-A,0,A\}\) if \(1.6\lessapprox A\lessapprox 2.8\). Those results are confirmed by CORTICAL, which is capable of both understanding the discrete nature of the input and learning the support and probability mass function (PMF) for any value of \(A\). Notice that no hypothesis on the input distribution is provided during training. Fig. 2a reports the capacity-achieving input distribution as a function of the peak-power constraint \(A^{2}\). The figure illustrates the typical bifurcation structure of the distribution. Fig. 2: AWGN scalar peak-power constrained channel: a) capacity-achieving distribution learned by CORTICAL, the marker’s radius is proportional to the PMF; b) capacity estimate and comparisons. Fig. 2b shows the channel capacity estimated by CORTICAL with (14) and compares it with the upper bounds known in the literature. Similar considerations can be made for the \(d=2\) channel under peak-power constraints [20], and in general for the vector Gaussian channel of dimension \(d\) [22]. The case \(d=2\) is considered in Fig. 3a (\(A=\sqrt{10}\)) and Fig. 3b (\(A=5\)), where CORTICAL learns optimal bi-dimensional distributions, matching the results in [20]. In this case the amplitude is discrete. We now consider the MIMO channel, where the analytical characterization of the capacity-achieving distribution under a peak-power constraint remains mostly an open problem [23]. We analyze the particular case of \(d=2\): \[C(\mathbf{H},r)=\sup_{p_{X}(\mathbf{x}):||\mathbf{H}X||_{2}\leq r}I(X;\mathbf{H}X+N), \tag{19}\] where \(N\sim\mathcal{N}(0,\mathbf{I}_{2})\) and \(\mathbf{H}\in\mathbb{R}^{2\times 2}\) is a given MIMO channel matrix known to both transmitter and receiver. We impose \(r=1\), and we also impose a diagonal structure on \(\mathbf{H}=\text{diag}(1,r_{2})\) without loss of generality since diagonalization of the system can be performed. We study two cases: \(r_{2}=1\), which is equivalent to a \(d=2\) Gaussian channel with unitary peak-power constraint; and \(r_{2}=3\), which forces an elliptical peak-power constraint. The former set-up produces a discrete input distribution in the magnitude and a continuous uniform phase, as shown in Fig. 4a. The latter case produces a binary distribution, as shown in Fig. 4b.
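As a quick numerical companion to the scalar bounds plotted in Fig. 2b, the sketch below evaluates the upper bounds (17) and (18); it is a direct transcription of the two formulas and involves nothing model-specific.

```python
import numpy as np

def shannon_bound(A, d=1):           # Eq. (17), in bits
    return d / 2 * np.log2(1 + A**2 / d)

def scalar_upper_bound(A):           # Eq. (18), tighter bound for d = 1
    return min(np.log2(1 + 2 * A / np.sqrt(2 * np.pi * np.e)),
               0.5 * np.log2(1 + A**2))

for A in (1.0, 1.6, 2.8, 5.0):
    print(f"A={A}: Shannon {shannon_bound(A):.3f} bits,"
          f" tighter {scalar_upper_bound(A):.3f} bits")
```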
To the best of our knowledge, no analytical results are available for \(1<r_{2}<2\), and thus CORTICAL offers a guiding tool for the identification of capacity-achieving distributions. ### _Additive Non-Gaussian Channels_ The nature of the capacity-achieving input distribution remains an open challenge for channels affected by additive non-Gaussian noise. In fact, it is known that under an average power constraint, the capacity-achieving input distribution is discrete, and the AWGN channel is the only exception [24]. Results concerning the number of mass points of the discrete optimal input have been obtained in [25, 18]. If the transmitter is subject to an average power constraint, the support is bounded or unbounded depending on the decay rate (slower or faster, respectively) of the noise PDF tail. In this section, we consider a scalar additive independent Cauchy noise (AICN) channel with scale factor \(\gamma\), under a specific type of logarithmic power constraint [19]. In particular, given the Cauchy noise PDF \(\mathcal{C}(0,\gamma)\) \[p_{N}(n)=\frac{1}{\pi\gamma}\frac{1}{1+\left(\frac{n}{\gamma}\right)^{2}}, \tag{20}\] we are interested in showing that CORTICAL learns the capacity-achieving distribution that solves \[C(A,\gamma)=\sup_{p_{X}(\mathbf{x}):\mathbb{E}\left[\log\left(\left(\frac{A+\gamma}{\lambda}\right)^{2}+\left(\frac{x}{\lambda}\right)^{2}\right)\right]\leq\log(4)}I(X;Y), \tag{21}\] for a given \(A\geq\gamma\). From [19], it is known that under such a power constraint the channel capacity is \(C(A,\gamma)=\log(A/\gamma)\) and the optimal input distribution is continuous with Cauchy PDF \(\mathcal{C}(0,A-\gamma)\). For illustration purposes, we study the case of \(\gamma=1\) and \(A=2\) and report in Fig. 5a the input distribution obtained by CORTICAL after \(100\) and \(10000\) training steps. For the same channel, we also investigate the capacity-achieving distribution under a unitary peak-power constraint. Fig. 5b shows that the capacity-achieving distribution is binary. Fig. 4: MIMO channel input distributions learned by CORTICAL for different channels \(\mathbf{H}\): a) \(r_{2}=1\); b) \(r_{2}=3\). Locus of points satisfying \(||\mathbf{H}X||_{2}=1\) is also shown. Fig. 5: AICN channel input distributions learned by CORTICAL at different training steps under: a) logarithmic power constraint; b) peak-power constraint. Fig. 3: AWGN \(d=2\) channel input distributions learned by CORTICAL under a peak-power constraint: a) \(A=\sqrt{10}\); b) \(A=5\). ### _Fading Channels_ As the last representative scenario, we study the Rayleigh fading channel subject to an average power constraint \(P\) with input-output relation given by \[Y=\alpha X+N, \tag{22}\] where \(\alpha\) and \(N\) are independent circular complex Gaussian random variables, so that the amplitude of \(\alpha\) is Rayleigh-distributed. It is easy to prove that an equivalent model for deriving the amplitude distribution of the capacity-achieving signal is obtained by defining \(U=|X|\sigma_{\alpha}/\sigma_{N}\) and \(V=|Y|^{2}/\sigma_{N}^{2}\), which are non-negative random variables such that \(\mathbb{E}[U^{2}]\leq P\frac{\sigma_{\alpha}^{2}}{\sigma_{N}^{2}}\triangleq a\) and whose conditional distribution is \[p_{V|U}(v|u)=\frac{1}{1+u^{2}}\exp\biggl{(}-\frac{v}{1+u^{2}}\biggr{)}. \tag{23}\] A simplified expression for the conditional PDF is \[p_{V|S}(v|s)=s\cdot\exp(-sv) \tag{24}\] where \(S=1/(1+U^{2})\in(0,1]\) and \(\mathbb{E}[1/S-1]\leq a\).
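The change of variables behind (23)-(24) is easy to check numerically. The sketch below, with unit \(\sigma_{\alpha}\) and \(\sigma_{N}\) chosen by us purely for illustration, verifies that conditionally on \(U=u\) the variable \(V\) is exponential with mean \(1+u^{2}\).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_a = sigma_n = 1.0              # illustrative values
x = 1.7                              # fixed input amplitude, so u = |x| sigma_a / sigma_n
u = abs(x) * sigma_a / sigma_n
n = 200_000

# Circular complex Gaussians with E|alpha|^2 = sigma_a^2 and E|N|^2 = sigma_n^2
fade = (rng.normal(size=n) + 1j * rng.normal(size=n)) * sigma_a / np.sqrt(2)
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) * sigma_n / np.sqrt(2)

v = np.abs(fade * x + noise) ** 2 / sigma_n**2
print(v.mean(), 1 + u**2)            # per Eq. (23), both should be close to 1 + u^2
```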
From [17], it is known that the optimal input signal amplitude \(S\) (and thus \(U\)) is discrete and possesses an accumulation point at \(S=1\) (\(U=0\)). We use CORTICAL to learn the capacity-achieving distribution and verify that it matches the one reported in [17] for the case \(a=1\). Fig. 6 illustrates the evolution of the input distribution for different CORTICAL training steps. Also in this scenario, the generator learns the discrete nature of the input distribution and the values of the PMF. ## V Conclusions In this paper, we have presented CORTICAL, a novel deep learning-based framework that learns the capacity-achieving distribution for any discrete-time continuous memoryless channel. The proposed approach consists of a cooperative max-max game between two neural networks. The algorithm has been validated in non-Shannon channel settings. The proposed approach offers a novel tool for studying the channel capacity in non-trivial channels and paves the way for the holistic and data-driven analysis/design of communication systems. ## VI Acknowledgement The authors would like to thank Nir Weinberger and Alex Dytso for the valuable discussions about capacity-achieving distributions provided while Nunzio A. Letizia was visiting Princeton University.
2310.18770
Leveraging Multimodal Features and Item-level User Feedback for Bundle Construction
Automatic bundle construction is a crucial prerequisite step in various bundle-aware online services. Previous approaches are mostly designed to model the bundling strategy of existing bundles. However, it is hard to acquire large-scale well-curated bundle datasets, especially for those platforms that have not offered bundle services before. Even for platforms with mature bundle services, there are still many items that are included in few or even zero bundles, which give rise to sparsity and cold-start challenges in the bundle construction models. To tackle these issues, we aim to leverage multimodal features, item-level user feedback signals, and the bundle composition information to achieve a comprehensive formulation of bundle construction. Nevertheless, such a formulation poses two new technical challenges: 1) how to learn effective representations by optimally unifying multiple features, and 2) how to address the modality missing, noise, and sparsity problems induced by the incomplete query bundles. In this work, to address these technical challenges, we propose a Contrastive Learning-enhanced Hierarchical Encoder method (CLHE). Specifically, we use self-attention modules to combine the multimodal and multi-item features, and then leverage both item- and bundle-level contrastive learning to enhance the representation learning, thus countering the modality missing, noise, and sparsity problems. Extensive experiments on four datasets in two application domains demonstrate that our method outperforms a list of SOTA methods. The code and dataset are available at https://github.com/Xiaohao-Liu/CLHE.
Yunshan Ma, Xiaohao Liu, Yinwei Wei, Zhulin Tao, Xiang Wang, Tat-Seng Chua
2023-10-28T17:54:26Z
http://arxiv.org/abs/2310.18770v1
# Leveraging Multimodal Features and Item-level User Feedback for Bundle Construction ###### Abstract. Automatic bundle construction is a crucial prerequisite step in various bundle-aware online services. Previous approaches are mostly designed to model the bundling strategy of existing bundles. However, it is hard to acquire large-scale well-curated bundle datasets, especially for those platforms that have not offered bundle services before. Even for platforms with mature bundle services, there are still many items that are included in few or even zero bundles, which give rise to sparsity and cold-start challenges in the bundle construction models. To tackle these issues, we aim to leverage multimodal features, item-level user feedback signals, and the bundle composition information to achieve a comprehensive formulation of bundle construction. Nevertheless, such a formulation poses two new technical challenges: 1) how to learn effective representations by optimally unifying multiple features, and 2) how to address the modality missing, noise, and sparsity problems induced by the incomplete query bundles. In this work, to address these technical challenges, we propose a Contrastive Learning-enhanced Hierarchical Encoder method (CLHE). Specifically, we use self-attention modules to combine the multimodal and multi-item features, and then leverage both item- and bundle-level contrastive learning to enhance the representation learning, thus countering the modality missing, noise, and sparsity problems. Extensive experiments on four datasets in two application domains demonstrate that our method outperforms a list of SOTA methods. The code and dataset are available at [https://github.com/Xiaohao-Liu/CLHE](https://github.com/Xiaohao-Liu/CLHE). Bundle Construction, Multimodal Modeling, Contrastive Learning
## 1. Introduction Automatic bundle construction is a crucial prerequisite step in various bundle-aware online services, yet acquiring large-scale, well-curated bundle data is hard. First, platforms that have not offered bundle services before do not have sufficient bundle data for training. Second, even for platforms with mature bundle services, the situation is far from ideal due to the various cold-start problems.
On the one hand, there are quite a number of items that are only involved in a few bundles; consequently, it is challenging to obtain informative representations for these sparse items to construct new bundles. Worse still, there are many new items, which have not been part of any previous bundles but are continuously pushed online, and how to swiftly bundle these cold-start items with existing _warm_ items is crucial for platforms to promote new products and keep sustained growth. To address these challenges, instead of seeking a silver bullet, we focus on practical solutions that make full use of a large amount of easy-to-access resources: multimodal features and item-level user feedback. The motivation behind this solution is that these data are well aligned with diverse bundling strategies. First, multimodal features, such as text, image, and audio, contain rich semantic information that is helpful to find either similar or compatible items and form bundles, as shown in Figure 1. More importantly, most items, even those sparse and newly introduced ones, usually have one or multiple such features. A plethora of previous efforts, such as personalized recommendation (Zhou et al., 2017), have demonstrated the efficacy of multimodal features in handling sparse and cold-start items. Second, item-level user feedback provides precious crowd-sourced knowledge that is crucial to bundle construction. Intuitively, the items that users frequently co-interact with are strong candidates for bundling. More importantly, a large amount of such user feedback signals are available even to platforms that do not offer bundle services. Compared with previous works (Zhou et al., 2017), we pioneer the integration of multimodal features and item-level user feedback for bundle construction. Given the outlined motivations, we aim to leverage both multimodal features and item-level user feedback, along with the existing bundles, to develop a comprehensive model for bundle construction. However, it is non-trivial to design a model that captures all three types of information and achieves optimal bundle construction performance. First, how to learn effective representations in each modality and capture the cooperative association among the three modalities is a key challenge. Second, some items might not be comprehensively associated with user feedback or affiliated with bundles; thus, the so-called modality-missing issue may degrade the modeling capability. What is more, during the inference stage of bundle construction, we usually need to provide several seed items as a partial bundle to initiate the construction process. However, the incompleteness of the partial bundle imposes noise and sparsity challenges on bundle representation learning, which will impede the bundle construction performance. In this work, to address the aforementioned challenges, we propose a **C**ontrastive **L**earning-enhanced **H**ierarchical **E**ncoder (CLHE) for bundle construction. In order to obtain the representations of items, we make use of the recently proposed large-scale multimodal foundation models (_i.e.,_ BLIP (Zhou et al., 2017) and CLAP (Zhou et al., 2017)) to extract the multimodal features of items. Concurrently, we pre-train a collaborative filtering (CF)-based model (_i.e._, LightGCN (Liu et al., 2018)) to obtain the items' representations that preserve the user feedback information.
Then, we employ a hierarchical encoder to learn the bundle representation, where the self-attention mechanism is devised to effectively fuse multimodal information and multi-item representations. To tackle the modality missing problem and the sparsity/noise issues induced by the incomplete partial bundle, we employ two levels of contrastive learning (Zhou et al., 2017; Wang et al., 2018), _i.e.,_ item-level and bundle-level, to fully take advantage of the self-supervision signals. We conduct experiments on four datasets from two domains, and the results demonstrate that our method outperforms multiple leading methods. Various ablation and model studies further justify the effectiveness of key modules and demonstrate multiple crucial properties of our proposed model. We summarize the key contributions of this work as follows: * We introduce a pioneering approach to bundle construction by holistically combining multimodal features, item-level user feedback, and existing bundles. This integration addresses prevailing challenges such as data insufficiency and the cold-start problem. * We highlight multiple technical challenges of this new formulation and propose a novel method, CLHE, to tackle them. * Our method outperforms various leading methods on four datasets from two application domains with different settings, and further diverse studies demonstrate various merits of our method. ## 2. Methodology We first formally define the problem of bundle construction by considering all three types of data. Then we describe the details of our proposed method CLHE (as shown in Figure 2). ### Problem Formulation Given a set of items \(\mathcal{I}=\{i_{1},i_{2},\cdots,i_{N}\}\), each item has a textual input \(t_{i}\), which can be its title, description, or metadata, and a media input \(m_{i}\), which can be an image, audio, or video of the item. In addition, for the items that have been online for a while, we have collected some item-level user feedback data, which is denoted as a user-item interaction matrix \(\mathbf{X}_{M\times N}=\{x_{ui}|u\in\mathcal{U},i\in\mathcal{I}\}\), where \(\mathcal{U}=\{u_{1},u_{2},\cdots,u_{M}\}\) is the user set. We define a bundle as a set of items, denoted as \(b=\{i_{1},i_{2},\cdots,i_{n}\}\), where \(n=|b|\) is the size of the bundle. Given a partial bundle \(b_{s}\subset b\) (_i.e._, a set of seed items), where \(|b_{s}|<|b|\), the bundle construction model aims to predict the missing items \(i\in b\setminus b_{s}\). We have a set of known bundles for training, denoted as \(\mathcal{B}=\{b_{1},b_{2},\cdots,b_{O}\}\), and a set of unseen bundles for testing, denoted as \(\tilde{\mathcal{B}}=\{b_{O+1},b_{O+2},\cdots,b_{O+\tilde{O}}\}\), where \(O\) is the number of training bundles and \(\tilde{O}\) is the number of testing bundles. We would like to train a model based on the training set \(\mathcal{B}\) such that, for an unseen bundle \(b\in\tilde{\mathcal{B}}\), when given a few seed items \(b_{s}\), _aka._ the partial bundle, the model can predict the missing items \(b\setminus b_{s}\) and thus construct the entire bundle. ### Hierarchical Encoder We utilize a hierarchical encoder for multimodal bundle representation. Initially, we extract multimodal features using multimodal foundation models, while concurrently pre-training a CF-based model to capture item-level user feedback. Subsequently, a self-attention encoder is introduced to integrate these multimodal features, resulting in a fused item representation.
Another self-attention encoder then aggregates these representations, producing a comprehensive bundle representation. #### 2.2.1. Item Representation Learning We first detail the feature extraction process and then present the self-attention encoder. **Multimodal Feature Extraction**. We use large-scale multimodal foundation models to extract the textual and media features of items. Compared with previous uni-modal feature extractors, such as those in Computer Vision (CV) [22, 36], Natural Language Processing (NLP) [15, 34], or audio [8, 26], multimodal foundation models are more powerful in capturing the multimodal semantics of the input data and have been demonstrated to be effective in transferring or generalizing to various downstream tasks. Concretely, for image data, we use the BLIP [27] model to extract both textual and visual features. For audio data, we use the CLAP [50] model to extract textual and audio features. After the feature extraction, we obtain the textual feature \(\mathbf{t}_{i}\in\mathbb{R}^{768}\) and media feature \(\mathbf{m}_{i}\in\mathbb{R}^{768}\). Given their shared representation space, we perform a simple average pooling over them, resulting in the content feature of the item, denoted as \(\mathbf{c}_{i}=\text{average}(\mathbf{t}_{i},\mathbf{m}_{i})\). **Item-level User Feedback Feature Extraction**. We employ the well-performing CF-based model, _i.e._, LightGCN [23], to obtain item representations from user feedback. Specifically, we devise a bipartite graph based on the user-item interaction matrix, then train a LightGCN1 model over the bipartite graph, denoted as: Footnote 1: Other CF-based models can also be used. \[\begin{cases}\mathbf{p}_{u}^{(k)}=\sum_{i\in\mathcal{N}_{u}}\frac{1}{\sqrt{|\mathcal{N}_{u}|}\sqrt{|\mathcal{N}_{i}|}}\mathbf{p}_{i}^{(k-1)},\\ \mathbf{p}_{i}^{(k)}=\sum_{u\in\mathcal{N}_{i}}\frac{1}{\sqrt{|\mathcal{N}_{i}|}\sqrt{|\mathcal{N}_{u}|}}\mathbf{p}_{u}^{(k-1)},\end{cases} \tag{1}\] where \(\mathbf{p}_{u}^{(k)},\mathbf{p}_{i}^{(k)}\in\mathbb{R}^{d}\) are embeddings for user \(u\) and item \(i\) at the \(k\)-th layer, and \(d\) is the dimensionality of the hidden representation; \(\mathcal{N}_{u}\) and \(\mathcal{N}_{i}\) are the neighbors of the user \(u\) and item \(i\) in the user-item interaction graph. We only make use of the item representation \(\mathbf{p}_{i}\), which captures the item-level user feedback information. It is obtained by aggregating the item representations over \(K\) layers' propagation, denoted as: \[\mathbf{p}_{i}=\frac{1}{K}\sum_{k=0}^{K}\mathbf{p}_{i}^{(k)}. \tag{2}\] **ID Embedding Initialization**. We also initialize an id embedding \(\mathbf{v}_{i}\in\mathbb{R}^{d}\) for each item to capture its bundle-item affiliation patterns. Please note that for those items (both during training and testing) that do not have user feedback features, we copy the content feature into the user feedback feature slot. Analogously, for cold-start items that do not have an id embedding, we copy the corresponding content feature to take the slot. **Modality Fusion via Self-attention**.
Given the three types of features, _i.e._, \(\mathbf{c}_{i}\), \(\mathbf{p}_{i}\), and \(\mathbf{v}_{i}\), we first apply a feature transformation layer to project the multimodal and user-feedback features into the same latent space as the id embeddings, then we concatenate all three features into a feature matrix \(\mathbf{F}_{i}\in\mathbb{R}^{3\times d}\), denoted as: \[\mathbf{F}_{i}=\text{concat}(\mathbf{c}_{i}\mathbf{W}_{c},\mathbf{p}_{i}\mathbf{W}_{p},\mathbf{v}_{i}), \tag{3}\] where \(\mathbf{W}_{c}\in\mathbb{R}^{768\times d}\) and \(\mathbf{W}_{p}\in\mathbb{R}^{768\times d}\) are the transformation matrices for the multimodal and user-feedback features, respectively; \(\text{concat}(\cdot)\) is the concatenation function. Then, we devise a self-attention layer to model the correlations of the multiple features, denoted as: \[\begin{cases}\mathbf{A}_{i}^{(l)}=\frac{1}{\sqrt{d}}\hat{\mathbf{F}}_{i}^{(l-1)}\mathbf{W}_{I}^{K}(\hat{\mathbf{F}}_{i}^{(l-1)}\mathbf{W}_{I}^{Q})^{\top},\\ \hat{\mathbf{F}}_{i}^{(l)}=\text{softmax}(\mathbf{A}_{i}^{(l)})\hat{\mathbf{F}}_{i}^{(l-1)},\end{cases} \tag{4}\] where \(\mathbf{W}_{I}^{K}\in\mathbb{R}^{d\times d}\) and \(\mathbf{W}_{I}^{Q}\in\mathbb{R}^{d\times d}\) are the trainable parameters of this item-level encoder, projecting the input feature embeddings into the key and query spaces; \(\hat{\mathbf{F}}_{i}^{(l)}\in\mathbb{R}^{3\times d}\) is the hidden feature representation in the intermediate layer \(l\), and \(\hat{\mathbf{F}}_{i}^{(0)}=\mathbf{F}_{i}\); \(\text{softmax}(\cdot)\) is the softmax function and \(\hat{\mathbf{F}}_{i}^{(L)}\) denotes the features' representations after \(L\) layers of self-attention. We then average the multiple features to obtain the item representation \(\mathbf{f}_{i}\in\mathbb{R}^{d}\) after multimodal fusion, formally defined as: \[\mathbf{f}_{i}=\text{average}\big{(}\hat{\mathbf{F}}_{i}^{(L)}\big{)}. \tag{5}\] Figure 2. The overall framework of our proposed method CLHE, which consists of two main components: hierarchical encoder (_aka._ multimodal feature extraction, item and bundle representation learning) and contrastive learning. #### 2.2.2. Bundle Representation Learning After obtaining the item representations, we build a second self-attention module to learn the representation of the given partial bundle. For a certain partial bundle \(b_{s}\), its representation \(\mathbf{e}_{b_{s}}\)2 is learned by: Footnote 2: For simplicity, we omit the subscript \(s\) in \(b_{s}\) and just use \(b\) and \(\mathbf{e}_{b}\) to represent the partial bundle and its representation if there is no ambiguity. \[\begin{cases}\mathbf{A}_{b}^{(z)}=\frac{1}{\sqrt{d}}\tilde{\mathbf{E}}_{b}^{(z-1)}\mathbf{W}_{B}^{K}\big{(}\tilde{\mathbf{E}}_{b}^{(z-1)}\mathbf{W}_{B}^{Q}\big{)}^{\intercal},\\ \tilde{\mathbf{E}}_{b}^{(z)}=\text{softmax}\big{(}\mathbf{A}_{b}^{(z)}\big{)}\tilde{\mathbf{E}}_{b}^{(z-1)},\end{cases} \tag{6}\] where \(\mathbf{W}_{B}^{K}\in\mathbb{R}^{d\times d}\) and \(\mathbf{W}_{B}^{Q}\in\mathbb{R}^{d\times d}\) are the trainable parameters at the bundle level, projecting the input item embeddings into the key and query spaces; \(\tilde{\mathbf{E}}_{b}^{(z)}\in\mathbb{R}^{|b|\times d}\) is the hidden representation in the intermediate layer \(z\), and \(\tilde{\mathbf{E}}_{b}^{(0)}=\text{concat}\big{(}\{\mathbf{f}_{i}\}_{i\in b}\big{)}\); \(\tilde{\mathbf{E}}_{b}^{(Z)}\) denotes the items' representations after \(Z\) layers of self-attention.
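A minimal PyTorch sketch of the two self-attention stacks of Eqs. (4) and (6) is given below; the dimensions and toy inputs are assumptions of ours. Note that this simplified attention has no value projection and shares its parameters across layers, and that each stack is followed by the averaging of Eq. (5) (and, just below, Eq. (7)).

```python
import torch

def sa_stack(F, W_K, W_Q, n_layers):
    """Simplified self-attention of Eqs. (4)/(6): no value projection,
    parameters shared across layers; F has shape (n_tokens, d)."""
    d = F.shape[-1]
    for _ in range(n_layers):
        A = (F @ W_K) @ (F @ W_Q).T / d**0.5
        F = torch.softmax(A, dim=-1) @ F
    return F

d = 64
W_IK, W_IQ = torch.randn(d, d), torch.randn(d, d)  # item-level W_I^K, W_I^Q
W_BK, W_BQ = torch.randn(d, d), torch.randn(d, d)  # bundle-level W_B^K, W_B^Q

# Item level: rows are the three features [c_i W_c, p_i W_p, v_i] of Eq. (3)
F_i = torch.randn(3, d)
f_i = sa_stack(F_i, W_IK, W_IQ, n_layers=2).mean(dim=0)       # Eq. (5)

# Bundle level: stack the fused item representations of a partial bundle
E_b = torch.stack([f_i, torch.randn(d), torch.randn(d)])      # |b| = 3 items
e_b = sa_stack(E_b, W_BK, W_BQ, n_layers=2).mean(dim=0)       # Eq. (7)
```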
We then average the items' representations to obtain the bundle representation \(\mathbf{e}_{b}\), formally defined as: \[\mathbf{e}_{b}=\text{average}\big{(}\tilde{\mathbf{E}}_{b}^{(Z)}\big{)}. \tag{7}\] ### Contrastive Learning Even though the hierarchical encoder can well capture the correlations among multiple features and multiple items, it still suffers from noise, sparsity, or even cold-start problems at both the item and bundle levels. Specifically, at the item level, items that have less user feedback or are involved in fewer bundles during training are prone to deteriorated representations, which is the so-called sparsity issue. Even worse, some cold-start items may have never interacted with any users or been included in any bundles before; therefore, the cold-start problem will severely deteriorate the representation quality. Second, at the bundle level, the partial bundle's representation is susceptible to noise and sparsity issues. Instead of a complete bundle that is sufficient to depict all the functionalities or properties of the bundle, the given partial bundle only encompasses some of the items. Consequently, the bundle representation may be biased due to the arbitrary seed items. To tackle these problems, we aim to harness contrastive learning over both the item and bundle levels to mine the self-supervision signals. Recently, contrastive learning has achieved great success in various tasks, including CV (Krizhevsky et al., 2014), NLP (Krizhevsky et al., 2014), and recommender systems (Zhu et al., 2017). The main idea is to first corrupt the original data to generate augmented views of the same data point, and then leverage an InfoNCE loss to pull close the representations across multiple augmented views of the same data point, while pushing away the representations of different data points. Therefore, the representations become more robust to noise and sparsity. #### 2.3.1. Item-level Contrastive Learning For each item \(i\), we take its representation \(\mathbf{f}_{i}\) obtained in Equation 5. We leverage various data augmentations to generate the augmented view \(\mathbf{f}_{i}^{\prime}\). The item-level data augmentation methods we used include: 1) No Augmentation (NA) (Zhu et al., 2017): just use the original representation as the augmented feature without any augmentation; 2) Feature Noise (FN) (Zhu et al., 2018): add a small-scaled random noise vector to the item's features; 3) Feature Dropout (FD) (Zhu et al., 2017): randomly drop some values of the feature vectors; and 4) Modality Dropout (MD): drop the whole feature of a randomly selected modality on a randomly selected item. Then, we use the InfoNCE loss (Zhu et al., 2017) to generate the item-level contrastive loss, denoted as: \[\mathcal{L}_{I}^{C}=\frac{1}{|\mathcal{I}|}\sum_{i\in\mathcal{I}}-\text{log}\frac{\exp(\cos(\mathbf{f}_{i},\mathbf{f}_{i}^{\prime})/\tau)}{\sum_{j\in\mathcal{I}}\exp(\cos(\mathbf{f}_{i},\mathbf{f}_{j}^{\prime})/\tau)}, \tag{8}\] where \(\cos(\cdot)\) is the cosine similarity, and \(\tau\) is the temperature. #### 2.3.2. Bundle-level Contrastive Learning For each bundle \(b\) and its original representation \(\mathbf{e}_{b}\), we also implement various data augmentations to generate an augmented view \(\mathbf{e}_{b}^{\prime}\).
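Before detailing the bundle-level augmentations below, here is a minimal sketch of the InfoNCE objective of Eq. (8); the bundle-level loss of Eq. (9), given right after, has the identical form. The batch size, temperature, and the in-batch-negatives simplification are assumptions of ours.

```python
import torch
import torch.nn.functional as F

def info_nce(reps, reps_aug, tau=0.2):
    """InfoNCE as in Eqs. (8)/(9): rows of `reps` and `reps_aug` are two views
    of the same items (or bundles); negatives are taken in-batch."""
    z = F.normalize(reps, dim=-1)          # so that dot products are cosines
    z_aug = F.normalize(reps_aug, dim=-1)
    logits = z @ z_aug.T / tau             # cosine similarities over temperature
    labels = torch.arange(z.shape[0])      # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Illustrative usage with random item representations and a noisy augmented view,
# mimicking the Feature Noise (FN) augmentation
f = torch.randn(256, 64)
f_aug = f + 0.1 * torch.randn_like(f)
loss_item = info_nce(f, f_aug)
```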
The data augmentation methods we leveraged include: 1) Item Dropout (ID): randomly drop some items in the bundle; and 2) Item Replacement (IR): randomly select some items in the bundle and replace them with other items that have not appeared in the bundle. Accordingly, the bundle-level contrastive loss is defined as: \[\mathcal{L}_{B}^{C}=\frac{1}{|\mathcal{B}|}\sum_{b\in\mathcal{B}}-\text{log}\frac{\exp(\cos(\mathbf{e}_{b},\mathbf{e}_{b}^{\prime})/\tau)}{\sum_{v\in\mathcal{B}}\exp(\cos(\mathbf{e}_{b},\mathbf{e}_{v}^{\prime})/\tau)}. \tag{9}\] ### Prediction and Optimization After obtaining the partial bundle representation \(\mathbf{e}_{b_{s}}\) and the item representations \(\mathbf{f}_{i}\), we leverage the inner-product function to compute the score \(\hat{y}_{b_{s},i}\) that indicates the likelihood of item \(i\) being included in bundle \(b\) to make it complete, defined as: \[\hat{y}_{b_{s},i}=\mathbf{e}_{b_{s}}\mathbf{f}_{i}^{\intercal}. \tag{10}\] To optimize our model, we follow the previous approaches (Zhu et al., 2017; Zhu et al., 2017) and leverage the negative log-likelihood loss; therefore, the loss for bundle \(b\) is denoted as: \[\mathcal{L}_{b}=\frac{1}{|\mathcal{I}|}\sum_{i\in\mathcal{I}}-y_{b_{s},i}\text{log}\,\pi(\hat{y}_{b_{s},i}), \tag{11}\] where \(\pi(\cdot)\) is the softmax function, which produces the probabilities over the entire item set. In combination with the contrastive losses and regularization, we have the final loss, denoted as: \[\mathcal{L}=\frac{1}{|\mathcal{B}|}\sum_{b\in\mathcal{B}}\mathcal{L}_{b}+\alpha_{1}\mathcal{L}_{I}^{C}+\alpha_{2}\mathcal{L}_{B}^{C}+\beta\|\mathbf{\Theta}\|_{2}^{2}, \tag{12}\] where \(\alpha_{1},\alpha_{2}\) and \(\beta\) are hyper-parameters to balance the different loss terms, \(\|\mathbf{\Theta}\|_{2}^{2}\) is the L2 regularization term, and \(\mathbf{\Theta}\) denotes all the trainable parameters in our model. ## 3. Experiments We evaluate our proposed method on two application domains of product bundling, _i.e.,_ fashion outfits and music playlists. We are particularly interested in answering the following research questions: * **RQ1:** Does the proposed CLHE method beat the leading methods? * **RQ2:** Are the key modules, _i.e.,_ the hierarchical transformer and contrastive learning, effective? * **RQ3:** How does our method work in countering the problems of cold-start items, modality missing, and the noise and sparsity of the partial bundle? How do the detailed configurations affect its performance, and what is the computational complexity? ### Experimental Settings There are various application scenarios that are suitable for product bundling, such as e-commerce, travel packages, meals, _etc._, each of which has one or multiple public datasets. However, only datasets that include all of the multimodal item features, user feedback data, and bundle data can be used to evaluate our method. Therefore, we choose two representative domains, _i.e._, fashion outfit and music playlist. We use the POG dataset (Grover et al., 2017) for fashion outfits. For the music playlists, we use the Spotify (Grover et al., 2017) dataset for the bundle-item affiliations, and we acquire the user feedback data from the Last.fm dataset (Beng et al., 2017). Since the average bundle size is quite small in POG (which makes sense for fashion outfits), we re-sample a second version, POG_dense, which has denser user feedback connections for each item.
In contrast, the average bundle size in the Spotify dataset is large, so we sample a sparser version, Spotify_sparse, which has a smaller average bundle size. Note that we keep the integrity of all the bundles in all the versions, meaning we do not corrupt any bundles during the sampling. For each dataset, we randomly split all the bundles into training/validation/testing sets with a ratio of 8:1:1. The statistics of the datasets are shown in Table 1. We use the popular ranking protocols of Recall@K and NDCG@K as the evaluation metrics, where K=20. #### 3.1.1. Compared Methods Due to the new formulation of our work, there are no previous works that have exactly the same setting as ours. Therefore, we pick several leading methods and adapt them to our setting. For a fair comparison, all the baseline methods use the same three types of extracted features as our method. In addition, they all use the same negative log-likelihood loss function. * **MultiDAE**(Wang et al., 2017) is an auto-encoder model which uses an average pooling to aggregate the items' representations to get the bundle representation. * **MultiVAE**(Wang et al., 2017) is a variational auto-encoder model which employs variational inference on top of the MultiDAE method. * **Bi-LSTM**(Wang et al., 2017) treats each bundle as a sequence and uses a bi-directional LSTM to learn the bundle representation. * **Hypergraph**(Wang et al., 2017) formulates each bundle as a hypergraph and devises a GCN model to learn the bundle representation. * **Transformer**(Wang et al., 2017; Wang et al., 2017) tailors a transformer to capture the item interactions and generate the bundle representation. * **TransformerCL** is the version where we add the bundle-level contrastive loss to the above Transformer model. #### 3.1.2. Hyper-parameter Settings The embedding and hidden representation size is \(64\), and we use Xavier (Xavier et al., 2017) initialization, a batch size of \(2048\), and the Adam optimizer (Kingma and Ba, 2014). We find the optimal hyper-parameter setting via grid search, where the learning rate is searched in the range of \(\{10^{-2},2\times 10^{-2},10^{-3},2\times 10^{-3},10^{-4},2\times 10^{-4}\}\) and \(\beta\) is tuned in the range of \(\{10^{-3},10^{-4},10^{-5},10^{-6}\}\). In most cases, the optimal value of the learning rate is \(10^{-3}\) and that of \(\beta\) is \(10^{-5}\). For contrastive learning, we search \(\alpha_{1},\alpha_{2}\) and \(\tau\) in the ranges of \(\{0.1,0.2,0.5,1,2\}\) and \(\{0.1,0.2,0.5,1,2,5\}\), respectively. Besides, in the augmentation step we randomly drop out features and modalities with a ratio in the range of \(\{0,0.1,0.2,0.5\}\) and add noise with a weight in the range of \(\{0.01,0.02,0.05,0.1\}\). We search the number of propagation layers \(K,L,Z\) from \(\{1,2,3\}\). For the baselines, we follow the designs in their articles to achieve the best performance, and we keep the shared settings identical to ensure a fair comparison. ### Overall Performance Comparison (RQ1) Table 2 shows the overall performance comparison between our model CLHE and the baseline methods. We have the following observations.
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline Dataset & \#U & \#I & \#B & \#B-I & \#U-I & \#Avg.I/B & \#Avg.B/I & \#Avg.I/U & \#Avg.U/I \\ \hline POG & 17,449 & 48,676 & 20,000 & 72,224 & 237,519 & 3.61 & 1.48 & 13.61 & 4.88 \\ POG\_dense & 2,311,431 & 31,217 & 29,686 & 105,775 & 6,345,137 & 3.56 & 3.39 & 2.75 & 203.26 \\ Spotify & 118,994 & 254,155 & 20,000 & 1,268,716 & 36,244,806 & 63.44 & 4.99 & 304.59 & 142.61 \\ Spotify\_sparse & 118,899 & 213,325 & 12,486 & 549,900 & 32,890,315 & 44.04 & 2.58 & 276.62 & 154.18 \\ \hline \hline \end{tabular} \end{table} Table 1. The statistics of the four datasets on two different domains. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{2}{c}{POG} & \multicolumn{2}{c}{POG\_dense} & \multicolumn{2}{c}{Spotify} & \multicolumn{2}{c}{Spotify\_sparse} \\ \cline{2-9} & Rec@20 & NDCG@20 & Rec@20 & NDCG@20 & Rec@20 & NDCG@20 & Rec@20 & NDCG@20 \\ \hline **MultiDAE** & 0.0119 & 0.0063 & 0.3213 & 0.2179 & 0.0578 & 0.0944 & 0.0506 & 0.0574 \\ **MultiVAE** & 0.0196 & 0.0104 & 0.3221 & 0.2086 & 0.0400 & 0.0656 & 0.0325 & 0.0365 \\ **Bi-LSTM** & 0.0170 & 0.0097 & 0.2932 & 0.1745 & 0.0833 & 0.1486 & 0.0645 & 0.0822 \\ **Hypergraph** & 0.0207 & 0.0111 & 0.3063 & 0.2256 & 0.0572 & 0.0941 & 0.0529 & 0.0590 \\ **Transformer** & 0.0215 & 0.0114 & 0.3525 & 0.2527 & 0.0875 & 0.1460 & 0.0768 & 0.0902 \\ **TransformerCL** & 0.0202 & 0.0134 & 0.3170 & 0.2374 & 0.1014 & 0.1696 & 0.0874 & 0.1062 \\ \hline **CLHE (ours)** & **0.0284** & **0.0193** & **0.3811** & **0.2773** & **0.1081** & **0.1806** & **0.0980** & **0.1212** \\ **\%Improv.** & 32.45 & 44.03 & 8.13 & 9.71 & 6.61 & 6.49 & 12.12 & 14.15 \\ \hline \hline \end{tabular} \end{table} Table 2. The overall performance of our CLHE and baselines. “\(\%\)Improv.” denotes the relative improvement over the strongest baseline. The best baselines are underlined. First, our method beats all the baselines on all the datasets, demonstrating the competitive performance of our model. Second, among the baselines, Transformer and TransformerCL achieve the best performance, showing that the self-attention mechanism and contrastive learning can well preserve the correlations among items within the bundle, thus yielding good bundle representations. Third, comparing the results between the different versions of the datasets, we find that: 1) the performance on POG_dense is much higher than that on POG due to the denser user-item interactions, demonstrating that user feedback information is quite helpful to the performance; 2) the performance on Spotify_sparse is relatively lower than that on Spotify due to the sparser bundle-item affiliation data, justifying our hypothesis that a large-scale and high-quality bundle dataset is vital to bundle construction. Finally, we make an interesting observation that the performance improvements on the four datasets are negatively correlated with #Avg.B/I, as shown in Table 1; in other words, in scenarios where items are included in fewer bundles (_i.e.,_ the dataset includes more sparse items), our method performs even better. This phenomenon further justifies the advantage of our method in countering the issue of sparse items. ### Ablation Study of Key Modules (RQ2) To further evaluate the effectiveness of the key modules of our model, we conduct a series of ablation studies and the results are shown in Table 3. First and foremost, we aim to justify the effectiveness of the user feedback features.
Thereby, we remove the user feedback features from our model (_i.e.,_ remove \(\mathbf{p}_{i}\) from \(\mathbf{f}_{i}\)) and build an ablated version of the model, _i.e.,_ _w/o user feedback_. According to the results in Table 3, after removing the user feedback features, the performance drops clearly, verifying that the user feedback features are significant for bundle construction. \begin{table} \begin{tabular}{l l c c c c} \hline \hline \multicolumn{2}{l}{Settings} & POG & POG\_dense & Spotify & Spotify\_sparse \\ \hline \multicolumn{2}{l}{CLHE} & 0.0193 & 0.2773 & 0.1806 & 0.1212 \\ \hline \multicolumn{2}{l}{w/o user feedback} & 0.0168 & 0.2733 & 0.1695 & 0.1174 \\ \hline \multirow{3}{*}{SelfAtt.} & w/o item & 0.0168 & 0.2551 & 0.1334 & 0.0822 \\ & w/o bundle & 0.0127 & 0.2141 & 0.1785 & 0.1210 \\ & w/o both & 0.0034 & 0.20647 & 0.0418 & 0.0334 \\ \hline \multirow{3}{*}{CL} & w/o item & 0.0171 & 0.2742 & 0.1735 & 0.1203 \\ & w/o bundle & 0.0176 & 0.2598 & 0.1740 & 0.1123 \\ & w/o both & 0.0178 & 0.2662 & 0.1730 & 0.1084 \\ \hline \hline \end{tabular} \end{table} Table 3. Ablation study of the hierarchical encoder and contrastive learning (the performance is NDCG@20). Second, we would like to evaluate whether each component of the hierarchical encoder is useful. We progressively remove the two self-attention modules from our model and replace them with a vanilla average pooling, thus yielding three ablated models, _i.e.,_ _w/o item_, _w/o bundle_, and _w/o both_. The results in Table 3 show that the removal of either self-attention module causes a performance drop. These results further verify the efficacy of our self-attention-based hierarchical encoder framework. Third, to justify the contribution of contrastive learning, we progressively remove the two levels of contrastive loss, thus generating three ablations, _i.e.,_ _w/o item_, _w/o bundle_, and _w/o both_. Table 3 depicts the results, which demonstrate that both contrastive losses are helpful, especially on the sparser versions of the datasets. ### Model Study (RQ3) To explicate more details and various properties of our method, we further conduct a series of model studies. #### 3.4.1. Cold-start Items One of the main challenges for bundle construction is cold-start items that have never been included in previous bundles. It is difficult to directly evaluate the methods solely based on cold-start items since there are few testing bundles where both the input and output partial bundles purely consist of cold-start items. Nevertheless, we come up with an alternative way to indirectly test how these methods perform against cold-start items. Specifically, we remove all the cold-start items and just keep the warm items in the testing set, _i.e.,_ the warm setting. We test our method and all the baseline models on this warm setting. \begin{table} \begin{tabular}{l c c c c} \hline \hline Models & POG & POG\_dense & Spotify & Spotify\_sparse \\ \hline **MultiDAE** & 0.0338 & 0.2725 & 0.1527 & 0.0839 \\ **MultiVAE** & 0.0424 & 0.2871 & 0.0900 & 0.0384 \\ **Bi-LSTM** & 0.0145 & 0.1997 & 0.0946 & 0.0281 \\ **Hypergraph** & 0.0393 & 0.2860 & 0.0822 & 0.0401 \\ **Transformer** & 0.0520 & 0.2969 & 0.1837 & 0.1199 \\ **TransformerCL** & 0.0280 & 0.2747 & 0.1766 & 0.1152 \\ \hline **CLHE (ours)** & **0.0554** & **0.3218** & **0.1846** & **0.1245** \\ **\%Improv.** & 6.53 & 8.38 & 0.48 & 3.81 \\ \hline \hline \end{tabular} \end{table} Table 4. The overall performance (NDCG@20) of our CLHE and baselines on the warm setting. “%Improv.” denotes the relative improvement over the strongest baseline. The results shown in Table 4 illustrate that: 1) the performance of all the models on the warm setting is much better than that in the warm-cold hybrid setting (the normal setting as shown in Table 2), exhibiting that the existence of cold-start items significantly deteriorates the performance; and 2) the performance gap between CLHE and the strongest baseline in the hybrid setting is obviously much larger than that on the warm setting, implying our method's strength in dealing with cold-start items. #### 3.4.2. Sparsity and Noise in Bundle Another merit of our approach is that the contrastive learning is able to counter the sparsity and noise issues when the input partial bundle is incomplete. To elicit this property, we corrupt the testing dataset to make the input partial bundle sparse and noisy. Specifically, we randomly remove a certain portion of items from the input partial bundle to make it sparser. To make the partial bundle more noisy, we randomly sample some items from the whole item set and add them to the bundle. Then we test our model and the ablated model without both levels of contrastive loss, and the performance curves are shown in Figure 3, where the x-axis is the ratio of the bundle size after corruption compared with the original bundle, and ratio=1 corresponds to the original clean data. Figure 3. Performance analysis with varying rates of sparsity and noise in the partial bundle. From this figure, we can draw the conclusions that: 1) as the sparsity and noise degree increases, the performance of both our method and the ablated model drops; 2) our method still outperforms the ablated model even under quite significant sparsity or noise rates, such as removing 50% of the seed items or adding 50% more noisy items; and 3) the contrastive loss in our model is able to combat the sparse and noisy bundle issue to some extent. #### 3.4.3. Data Augmentations Data augmentation is the crux of contrastive learning. We search over multiple different data augmentation strategies for both item- and bundle-level contrastive learning, in order to find the best-performing setting. In Table 5, we present the performance of CLHE under various data augmentations at both the item and bundle level. Overall, the choice of data augmentation method may affect the performance, and proper selection is important for good results. #### 3.4.4. Computational Complexity Self-attention computes attention over every pair of instances in a set, _i.e._, the features of an item or the items of a bundle, thus it usually suffers from high computational complexity. We record the time used for every training epoch and the time used from the beginning of training till convergence, and the records of our method and two baselines, _i.e._, MultiDAE and Transformer, are illustrated in Figure 4. The bar chart reveals that, on the one hand, our method is computationally heavy since it takes the longest time for each training epoch; on the other hand, our method takes the least training time to reach convergence on three datasets. In conclusion, our method is effective and efficient during training, while the inherent complexity induced by hierarchical self-attention may make inference slower. We argue that various self-attention acceleration approaches could be considered in practice, which is beyond the scope of this work. #### 3.4.5. Case Study We would like to further illustrate some cases to portray how the hierarchical encoder learns the associations among multimodal features and among the multiple items' representations.
Specifically, for both the item- and bundle-level self-attention modules, we take the last layer's output representation as each feature's (item's) representation, and calculate the cosine similarity score with the whole item's (bundle's) representation. We cherry-pick some example items and bundles, as shown in Figure 5. The results of the feature-item similarity exhibit that the three types of features can play distinctive roles in different items, showing the importance of all three types of features. For the item-bundle similarity results, we find that items do not contribute equally to their affiliated bundles; thus it is crucial to model the bundle composition patterns. Here we just intuitively illustrate some hints about bundle representation learning; more sophisticated analyses, such as of feature-pair or item-pair co-effects, are left for future work. #### 3.4.6. Hyper-parameter Analysis We also present the model's performance change _w.r.t._ the key hyper-parameters, _i.e._, the temperature \(\tau\) in the contrastive loss and the weights \(\alpha_{1},\alpha_{2}\) of the two contrastive losses. The curves in Figure 6 reveal that the model is sensitive to these hyper-parameters, and proper tuning is required to achieve optimal performance. Figure 6. Hyper-parameter analysis. Figure 4. Computational complexity analysis. Figure 5. Illustration of similarity a) between each feature and the whole item representation; and b) between each item and the whole bundle representation. The size of the circles is positively correlated with its corresponding cosine similarity. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & Setting & POG & POG\_dense & Spotify & Spotify\_sparse \\ \hline \multirow{4}{*}{Item} & NA & 0.0148 & 0.2428 & 0.1712 & 0.1202 \\ & FN & 0.0193 & 0.2753 & 0.1735 & 0.1172 \\ & FD & 0.0178 & 0.2763 & 0.1718 & 0.1200 \\ & MD & 0.0162 & 0.2773 & 0.1806 & 0.1212 \\ \hline \multirow{2}{*}{Bundle} & ID & 0.0184 & 0.2773 & 0.1791 & 0.1212 \\ & IR & 0.0193 & 0.2750 & 0.1806 & 0.1185 \\ \hline \hline \end{tabular} \end{table} Table 5. Study of data augmentations (NDCG@20). ## 4. Related Work We review the literature about bundles, including: 1) bundle recommendation and construction, and 2) bundle representation learning. ### Bundle Recommendation and Construction Product bundling is a mature marketing strategy that has been applied in various application scenarios, including fashion outfit [(11; 29)], e-commerce [(42)], online music playlist [(4; 7)], online games [(14)], travel package [(32)], meal [(28)], _etc._ Personalized bundle recommendation [(5; 9; 37)] is the pioneering line of work that first focused on bundle-oriented problems in the data science community. Soon after that, researchers realized that just picking from predefined bundles cannot satisfy people's diverse and personalized needs. Thereby, the task of personalized bundle generation [(1; 6; 13; 17; 24; 44)] was naturally proposed, where the model aims to automatically generate a bundle from a large set of items catering to a given user. It has to simultaneously deal with both users' personalization and item-item compatibility patterns, where the user-item interactions are specifically utilized for personalization modeling. In this paper, we only focus on bundle construction, which is committed to generating more bundles to enrich the bundle catalog of the platform. In addition, most of the bundle-oriented research in the general domain still falls into the id-based paradigm, and very few domains, such as the fashion domain, have explored multimodality.
We extend multimodal learning to one more domain, music playlists. Moreover, we also leverage user feedback for multimodal bundle construction. ### Bundle Representation Learning Bundle representation learning is the crux of all the bundle-oriented problems. Initial studies [(39)] treat a bundle as a special type of item and just use the bundle id to represent it. Naturally and reasonably, researchers then came to consider the encapsulated items within a bundle to generate a more detailed representation. The simplest method is performing average pooling over the included items [(51)]. Later on, sequential models, such as Bi-LSTM [(21)], were utilized to capture the relations between two consecutive items. However, the items within a bundle are essentially unordered, and sequential models cannot capture all the pair-wise correlations well. To address the limitation, attention models [(9; 24; 33)], Transformer [(3; 30; 35; 40; 43; 46)] and graph neural networks (GNNs) [(5; 54; 16; 55; 38)] are leveraged to model not only every pair of items within a bundle, but also the higher-order relations by stacking multiple layers. Even though much effort has been devoted to item correlation learning to achieve good bundle representations, multimodal information has been less explored. Multimodal information, such as textual, visual, or knowledge graph information of items, has been demonstrated to be effective in general recommendation [(45; 47; 48)]. In the fashion domain, visual and textual features have been extensively investigated for pairwise mix-and-match [(20; 52)] or outfit compatibility modeling [(12; 41)]. However, these works have not been extended to other domains, such as music playlists, where the audio modality has been rarely studied in the bundle recommendation or construction problem. More importantly, we argue that the user-item interaction information, which is widely utilized in the personalized recommendation problem, can serve as an additional modality in bundle construction. Sun _et al._[(42)] leverage a pre-trained CF model to obtain item representations to enhance the bundle completion task, while they have not fully justified the rationale and motivation. To the best of our knowledge, none of the previous works put together all of the user-item interaction, bundle-item affiliation, and item content information for bundle construction. ## 5. Conclusion and Future Work In this work, we systematically study the problem of bundle construction and define a more comprehensive formulation by considering all three types of data, _i.e._, multimodal features, item-level user feedback data, and existing bundles. Based on this formulation, we highlight two challenges: 1) how to learn expressive bundle representations given multiple features; and 2) how to counter the modality missing, noise, and sparsity problems. To tackle these challenges, we propose a novel method of **C**ontrastive **L**earning-enhanced **H**ierarchical **E**ncoder (CLHE) for bundle construction. Our method beats a list of leading methods on four datasets of two application domains. Extensive ablation and model studies justify the effectiveness of the key modules. Despite the great performance achieved by this work, there is still a large space to be explored for bundle construction. First, the current evaluation setting is somewhat rigid and inflexible; it would be interesting to extend it to more flexible settings that align with real applications.
For example, given an arbitrary number of seed items, the model would be asked to construct the bundle. Second, some of the feature extractors are pre-trained and fixed, _i.e._, the multimodal feature extraction and user-item interaction models. Is it possible to optimize these feature extractors in an end-to-end fashion so that the extracted features are better aligned with the bundle construction task? Finally, this work only targets unpersonalized bundle construction. It is an interesting and natural direction to push forward this work to personalized bundle construction. ## Acknowledgement This research is supported by NExT Research Center, the National Natural Science Foundation of China (9227010114), and the University Synergy Innovation Program of Anhui Province (GXXT-2022-040).
2310.07650
Variational quantum eigensolver for closed-shell molecules with non-bosonic corrections
The realization of quantum advantage with noisy-intermediate-scale quantum (NISQ) machines has become one of the major challenges in computational sciences. Maintaining coherence of a physical system with more than ten qubits is a critical challenge that motivates research on compact system representations to reduce algorithm complexity. Toward this end, quantum simulations based on the variational quantum eigensolver (VQE) is considered to be one of the most promising algorithms for quantum chemistry in the NISQ era. We investigate reduced mapping of one spatial orbital to a single qubit to analyze the ground state energy in a way that the Pauli operators of qubits are mapped to the creation/annihilation of singlet pairs of electrons. To include the effect of non-bosonic (or non-paired) excitations, we introduce a simple correction scheme in the electron correlation model approximated by the geometrical mean of the bosonic (or paired) terms. Employing it in a VQE algorithm, we assess ground state energies of H2O, N2, and Li2O in good agreements with full configuration interaction (FCI) models respectively, using only 6, 8, and 12 qubits with quantum gate depths proportional to the squares of the qubit counts. With the adopted seniority-zero approximation that uses only one half of the qubit counts of a conventional VQE algorithm, we find our non-bosonic correction method reaches reliable quantum chemistry simulations at least for the tested systems.
Kyungmin Kim, Sumin Lim, Kyujin Shin, Gwonhak Lee, Yousung Jung, Woomin Kyoung, June-Koo Kevin Rhee, Young Min Rhee
2023-10-11T16:47:45Z
http://arxiv.org/abs/2310.07650v2
# Variational quantum eigensolver for closed-shell molecules with non-bosonic corrections ###### Abstract The realization of quantum advantage with noisy-intermediate-scale quantum (NISQ) machines has become one of the major challenges in computational sciences. Maintaining coherence of a physical system with more than ten qubits is a critical challenge that motivates research on compact system representations to reduce algorithm complexity. Toward this end, quantum simulations based on the variational quantum eigensolver (VQE) is considered to be one of the most promising algorithms for quantum chemistry in the NISQ era. We investigate reduced mapping of one spatial orbital to a single qubit to analyze the ground state energy in a way that the Pauli operators of qubits are mapped to the creation/annihilation of singlet pairs of electrons. To include the effect of non-bosonic (or non-paired) excitations, we introduce a simple correction scheme in the electron correlation model approximated by the geometrical mean of the bosonic (or paired) terms. Employing it in a VQE algorithm, we assess ground state energies of H\({}_{2}\)O, N\({}_{2}\), and Li\({}_{2}\)O in good agreements with full configuration interaction (FCI) models respectively, using only 6, 8, and 12 qubits with quantum gate depths proportional to the squares of the qubit counts. With the adopted seniority-zero approximation that uses only one half of the qubit counts of a conventional VQE algorithm, we find our non-bosonic correction method reaches reliable quantum chemistry simulations at least for the tested systems. ## 1 Introduction The concept of quantum simulation using a quantum computer was first proposed by Feynman [1], from an insight that a coupled quantum state has the ability to efficiently and accurately simulate another quantum mechanical system. More than a decade later, the conjectured efficiency was confirmed by Lloyd [2]. In the early developments, the phase estimation algorithm (PEA) proposed by Kitaev was adopted crucially [3], with experimental verification in a small qubit system such as a nuclear magnetic resonance (NMR) device [4]. Among the many different possibilities of using quantum algorithms in solving real-world problems, efficiently solving electronic structure problems as suggested by Aspuru-Guzik et al. [5] has drawn much attention. While quantum computers in the future are expected to outperform classic computers for some specific problems, in the viewpoint of scalability we currently only have few tens of noisy qubits as Preskill pointed out [6]. The present devices are conventionally characterized as noisy-intermediate-scale quantum (NISQ) devices. Thus, demonstrating quantum superiority within this limitation is one of the major challenges. With this situation, the variational quantum eigensolver (VQE) [7] has become likely the primary algorithm for performing quantum chemistry simulations. In the realm of the electronic structure theory, small molecules approximately having 10 electrons are simulated by quantum hardware [8, 9, 10], while systems that require \(20-30\) qubits have been calculated on models [11, 12]. Development of VQE algorithms to perform more efficient quantum simulations using NISQ hardware is being reported continuously [13, 14]. 
In electronic structure problems, obtaining the exact solution of the time-independent Schrodinger equation, i.e., the full configuration interaction (FCI) energy, has a complexity close to \(O(N!)\), or exponential for practical purposes, with the basis set size \(N\). An FCI calculation considers all possible electron configurations within the available orbital space beyond the Hartree-Fock (HF) determinant. A VQE approach initiates the post-HF calculation by mapping the \(N\) selected spin-orbitals to qubits and generates an ansatz that can be prepared by a system of unitary coupled cluster (UCC) gates or some other heuristic gates. This is followed by measuring the energy expectation value for the corresponding Pauli operators in the computational basis. VQE obtains an approximate value of the FCI energy with a polynomial complexity with respect to \(N\), which is attained by adjusting the parameters of the ansatz using a classical optimizer. Practically, however, as the numbers of qubits and gates of a noisy quantum computer increase, the fidelity starts to drop rapidly and the VQE algorithm does not achieve the desired accuracy. Therefore, in the NISQ era, constructing an efficient VQE algorithm that reduces the numbers of qubits and gates is one of the most important goals. In conventional VQE algorithms, a spin-orbital is encoded by a single qubit via the Jordan-Wigner [15] or Bravyi-Kitaev [16] transformation. For closed-shell molecules, however, a seniority-zero approximation is routinely applied to reduce the total number of qubits required [17], which truncates the un-paired excitations in the VQE ansatz. An example of this approximation is doubly occupied configuration interaction (DOCI), where one includes all determinants with only doubly occupied orbitals. Because such pair-correlated methods can capture a significant portion of static correlations, previous studies [18, 19] focused on improving the missing dynamic correlations. From the viewpoint of a NISQ device, the pair-correlated approximation is promising since the number of qubits required to implement the ansatz can be reduced by a factor of two, because one qubit encodes a spatial orbital [17], not a spin-orbital. Indeed, some of the authors have recently demonstrated this advantage with trapped-ion quantum hardware using the orbital-optimized pair-correlated unitary pair coupled cluster doubles ansatz (oo-upCCD) [20]. Thus, designing a VQE algorithm that can recover the missing correlation energy of the pair-approximation will be important for the utility of a NISQ quantum computer. Here, we propose a simple correction scheme for orbital-optimized pair-correlated VQE simulations. Specifically, we first construct an ansatz by using exchange gates between the qubits corresponding to occupied and virtual spatial orbitals. The essence of this construct is the same as in the earlier studies listed above. The VQE optimization within the ansatz and the subsequent measurements provide the information needed for further performing orbital optimizations [18, 20], which we then carry out with a classical algorithm. For handling electron correlations involving singly occupied orbitals besides the double excitations, we propose a simple non-bosonic correction based on terms designed with the geometric means of the related bosonic excitation terms. The correction can be performed without using any quantum resource. We test the performance of our scheme by considering a series of molecular systems.
Indeed, reasonable agreements with the FCI results are attained for all the tested systems, and the non-bosonic corrections in many cases are shown to be crucial in achieving the agreements. In fact, the non-bosonic correction scheme that we are proposing is computationally almost free, yet it greatly improves the practical accuracy of the paired-electron approximation. ## 2 Methods In our construction, two electrons in one molecular orbital (MO) correspond to one qubit. Accordingly, when \(2n\) electrons are contained in \(N\) MOs, the qubit ansatz is prepared as \[\left|\Psi_{0}\right\rangle=\left|11...1100...00\right\rangle \tag{1}\] where the first \(n\) qubits are in the 1-state while the remaining \(N-n\) qubits are in the 0-state. Then, exchange gates [21] are applied between all \(O\) occupied orbitals (\(i,j,\dots\)) and \(V\) virtual orbitals (\(a,b,\dots\)) as shown in Fig. 1. The exchange gate between the qubits \(i\) and \(a\) actually consists of three gates, also as shown in the same figure: an \(i\)-to-\(a\) CNOT, an \(a\)-to-\(i\) controlled \(x\)-rotation, and an \(i\)-to-\(a\) CNOT. Figure 1: Ansatz preparation based on exchange-type gates, as suggested in ref [21]. Therefore, \(OV\) parameters \(\{\theta_{ia}\}\) assigned to each of the \(OV\) exchange gates are used in the ansatz generation, resulting in \(OV\) parameters and \(3OV\) two-qubit gates in total. This process can be repeated \(D\) times with additional sets of parameters with the same structure. The repeating number \(D\) can be determined phenomenologically through an optimization process over the entire algorithm. For the molecules tested in this work, \(D=1\) was good enough. While the ansatz preparations based on UCCSD and qubit coupled cluster (QCC) methods have gate complexities of \(O(N^{3})\) or \(O(N^{4})\) for a system with \(N\) electrons [22], the quantum circuit part in our case can be constructed with only \(O(N^{2})\) gates and a similarly scaling number of parameters. The efficiency of these exchange-type gates was also confirmed by Barkoutsos et al. [21]. Now, the final state can be translated into \[\left|\Psi\right\rangle=\prod_{i,a}U_{ia}^{\mathrm{ex}}(\theta_{ia})\left|\Psi_{0}\right\rangle=c_{0}\left|11...1100...00\right\rangle+c_{1}\left|11...1010...00\right\rangle+\dots+c_{M}\left|00...0011...11\right\rangle \tag{2}\] and the parameters are optimized by adopting the conventional Hamiltonian \[H=\sum_{pq}\sum_{\sigma}h_{pq}a_{p\sigma}^{\dagger}a_{q\sigma}+\sum_{pqrs}\sum_{\sigma\tau}\frac{1}{2}(ps|qr)a_{p\sigma}^{\dagger}a_{q\tau}^{\dagger}a_{r\tau}a_{s\sigma} \tag{3}\] with the one- and two-electron integrals \(h_{pq}\) and \((pq|rs)\) in chemists' notation. Here, the spin indices \(\sigma\) and \(\tau\) supplement the spatial orbital indices \(p,q,\dots\) that cover all possible MOs. Note that we are only including bosonic pair excitations from \(i\) to \(a\), which will reduce the Hamiltonian into a simpler form. The details of this reduction can be found elsewhere [20], and we will briefly walk through it here for completeness.
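Before the reduction, the circuit construction of Figure 1 can be made concrete. The following Qiskit-style sketch builds the \(D=1\) ansatz from the three-gate decomposition described above; the qubit ordering, function name, and gate convention read off from the figure are our own illustrative assumptions, not necessarily the authors' implementation.

```python
from qiskit import QuantumCircuit

def pair_ansatz(n_occ: int, n_vir: int, thetas: dict) -> QuantumCircuit:
    """Paired ansatz (Fig. 1 sketch, D = 1): one qubit per spatial MO.

    thetas: mapping (i, a) -> mixing angle theta_ia, with occupied qubits
    i in [0, n_occ) and virtual qubits a in [n_occ, n_occ + n_vir).
    """
    qc = QuantumCircuit(n_occ + n_vir)
    for i in range(n_occ):
        qc.x(i)  # reference |11...100...0>: occupied electron pairs set to 1
    for i in range(n_occ):
        for a in range(n_occ, n_occ + n_vir):
            qc.cx(i, a)                    # i -> a CNOT
            qc.crx(thetas[(i, a)], a, i)   # a -> i controlled x-rotation
            qc.cx(i, a)                    # i -> a CNOT
    return qc

# e.g., the H2/STO-3G case of the Results section (1 occupied, 1 virtual orbital,
# a single exchange gate with one mixing parameter):
circuit = pair_ansatz(1, 1, {(0, 1): 0.1})
```

Each exchange gate contributes three two-qubit gates, reproducing the \(OV\) parameter and \(3OV\) gate counts stated above.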
Because only paired excitations contribute, the summation over the two-electron terms can first be grouped into \[\sum_{pqrs} = \sum_{\begin{subarray}{c}p=q\\ \neq r=s\end{subarray}}+\sum_{\begin{subarray}{c}p=r\\ \neq q=s\end{subarray}}+\sum_{\begin{subarray}{c}p=s\\ \neq q=r\end{subarray}}+\sum_{p=q=r=s} \tag{4}\] \[= \sum_{\begin{subarray}{c}p=q\\ \neq r=s\end{subarray}}+\sum_{\begin{subarray}{c}p=r\\ \neq q=s\end{subarray}}+\sum_{\begin{subarray}{c}p=s\\ q=r\end{subarray}} \tag{5}\] The merge of the last two sums of the first line into the last sum of the second line is for a later convenience. The first sum is non-vanishing only if \(\sigma\neq\tau\), and by introducing \(d_{p}^{\dagger}\) and \(d_{p}\) as the pair creation and annihilation operators with \(d_{p}=a_{p\beta}a_{p\alpha}\) and \(d_{p}^{\dagger}=a_{p\alpha}^{\dagger}a_{p\beta}^{\dagger}\), we can easily get \[\text{(first sum)}=\sum_{p\neq r}(pr|pr)d_{p}^{\dagger}d_{r} \tag{6}\] By properly considering commutation relations, we can trivially reach \[\text{(second sum)}= -\sum_{p\neq q}\sum_{\sigma}\frac{1}{2}(pq|qp)a_{p\sigma}^{\dagger}a_{p\sigma}a_{q\sigma}^{\dagger}a_{q\sigma}=-\sum_{p\neq q}K_{pq}n_{p}n_{q} \tag{7}\] \[\text{(third sum)}= \sum_{pq}\sum_{\sigma\tau}\frac{1}{2}(pp|qq)a_{p\sigma}^{\dagger}(a_{p\sigma}a_{q\tau}^{\dagger}-\delta_{pq}\delta_{\sigma\tau})a_{q\tau}=\sum_{pq}2J_{pq}n_{p}n_{q}-\sum_{p}J_{pp}n_{p} \tag{8}\] Note that in the second sum, the condition of \(p\neq q\) additionally reduced the summation over the spin indices. We also adopted \(J_{pq}\) and \(K_{pq}\) to respectively represent the Coulomb and the exchange integrals associated with \(p\) and \(q\) in compact forms, as well as the number operator \(n_{p}=a_{p\alpha}^{\dagger}a_{p\alpha}=a_{p\beta}^{\dagger}a_{p\beta}\). With these, the working expression for the Hamiltonian is obtained as \[H=\sum_{p}(2h_{pp}-J_{pp})n_{p}+\sum_{p\neq q}K_{pq}d_{p}^{\dagger}d_{q}-\sum_{p\neq q}K_{pq}n_{p}n_{q}+\sum_{pq}2J_{pq}n_{p}n_{q} \tag{9}\] While we can adopt the set of HF MOs for constructing this Hamiltonian, it will not be an ideal choice for obtaining the molecular energy. Therefore, we performed orbital optimizations toward minimizing \(\langle\Psi|H|\Psi\rangle\) as in ref [20]. Then, the total energy is calculated as \[E= \langle\Psi|H|\Psi\rangle+E_{\text{nB}} \tag{10}\] where \(E_{\text{nB}}\) is the energy contributed by the non-bosonic excitation terms with the configurations neglected in Eq. (2). Although the analytic expression of \(E_{\text{nB}}\) cannot be derived based on the information available with the pair excitations, its contribution may still be accounted for toward achieving more reliable energy calculations at least at a heuristic level. We propose to approximate it as \[E_{\text{nB}}=-\sum_{pqrs}^{N}{}^{\prime}(pr||qs)\left[\langle\Psi|a_{p}^{\dagger}a_{r}a_{r}^{\dagger}a_{p}|\Psi\rangle\langle\Psi|a_{q}^{\dagger}a_{s}a_{s}^{\dagger}a_{q}|\Psi\rangle\right]^{1/2} \tag{11}\] where \((pr||qs)\) denotes the conventional antisymmetrized electron repulsion integral (ERI) [23], \((pr||qs)=(pr|qs)-(ps|qr)\), related to electron excitations from orbitals \((p,q)\) to \((r,s)\). The primed sum denotes that a dummy item that would correspond to a paired excitation (namely, with \(p=q\) and \(r=s\)) should be avoided. This correction was devised with the following reasoning.
First, among all missing unpaired configurations, the ones with two or four unpaired electrons (seniority 2 and 4) will contribute most. This is a reasonable assumption when we consider the terms that constitute low-order correlation corrections. Also, such configurations can be generated by operating \(a_{r}^{\dagger}a_{p}a_{s}^{\dagger}a_{q}\) (with omitted spin indices for brevity) on some doubly occupied configuration. Next, we argue that the energy contribution by an unpaired configuration will be proportional to the associated ERI, \((pr||qs)\). To know the actual contribution, we need its proportionality constant ("amplitude"), but by construction we do not have that information. We finally reason that this unknown amplitude can be approximated by the geometric mean of the two contributions related to \(a_{r}^{\dagger}a_{p}\) and \(a_{s}^{\dagger}a_{q}\), namely the portions of \(|\Psi\rangle\) that have a filled \(p\) (\(q\)) orbital and an empty \(r\) (\(s\)) orbital. These will be the norms of \(a_{r}^{\dagger}a_{p}\,|\Psi\rangle\) and \(a_{s}^{\dagger}a_{q}\,|\Psi\rangle\), leading to Eq. 11. After a short algebra, we can also show that Eq. 11 is equivalent to \[E_{\rm nB}=-\sum_{pqrs}^{N}{}^{\prime}(pr||qs)\,\langle\Psi|(1-n_{r})n_{p}|\Psi\rangle^{1/2}\,\langle\Psi|(1-n_{s})n_{q}|\Psi\rangle^{1/2} \tag{12}\] which is useful for the sake of measurements. In calculating this energy correction \(E_{\rm nB}\), we need a way of fixing the orbital signs. In this work, we aimed to adjust the signs such that the energy becomes as low as possible. In principle, we could test all different combinations of the signs of all orbitals, but doing so would require testing \(\sim 2^{N}\) combinations with the number of orbitals \(N\), which would be unacceptable. Thus, we have taken the following practical tactic. When the number of electron pairs is denoted as \(n\), we started by arbitrarily fixing the signs of the \(n\)-th and the \((n+1)\)-th orbitals, which correspond to the highest occupied and the lowest unoccupied orbitals in the HF picture. Of course, after the further orbital optimizations, the HF picture will not last any longer, but the indices inherited from the initial HF calculations still remain. With an index \(a\) fixed to \(a=(n+1)\), we then walked down the indices over \(\{i=(n-1),(n-2),\ldots,1\}\) together with \(i_{+}=i+1\), and at each stage fixed the sign of the \(i\)-th orbital such that the electron-repulsion integral \((i_{+}a|ia)\) becomes positive. The same process was also applied over \(\{a=(n+1),(n+2),\ldots,N\}\) by considering \((na_{-}|na)\) with \(a_{-}=(a-1)\) and by walking up in the index space of \(a\). For these procedures, we could possibly use the indices of canonical orbitals in defining \(i\) and \(a\). However, doing so does not provide any connection between adjacent orbitals, and sometimes \((i_{+}a|ia)\) or \((na_{-}|na)\) is too close to zero, which renders the sign-fixing process rather ill-defined. Choosing the spatially best-overlapping orbital as the neighboring one would be better in this regard, but orbitals are conventionally given with orthogonality intrinsically implemented, and using overlap would not be a good tactic either. Therefore, we adopted the one-electron matrix \(h_{pq}\) to decide the proximity of orbitals. Namely, for any given orbital \(i\), we chose \(i_{+}\) such that \(i_{+}\) was not considered before and \(h_{ii_{+}}\) is the maximum. The same can be applied for choosing \(a_{-}\). Apparently, the computational effort for choosing the sequence of orbitals in this manner scales as \(\sim N^{2}\).
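With the orbital signs fixed, evaluating the correction is purely classical post-processing of Eq. (12): the measured pair densities enter through \(\langle(1-n_{r})n_{p}\rangle=\langle n_{p}\rangle-\langle n_{p}n_{r}\rangle\). A minimal NumPy sketch follows, assuming the chemists'-notation ERIs and the measured expectation values are available as arrays; the array names and index layout are our own illustrative conventions.

```python
import numpy as np

def e_nb(eri: np.ndarray, n_exp: np.ndarray, nn_exp: np.ndarray) -> float:
    """Classical evaluation of the non-bosonic correction, Eq. (12).

    eri    : eri[p, r, q, s] = (pr|qs) in chemists' notation, shape (N, N, N, N)
    n_exp  : measured <n_p>, shape (N,)
    nn_exp : measured <n_p n_r>, shape (N, N)
    """
    N = eri.shape[0]
    # <(1 - n_r) n_p> = <n_p> - <n_p n_r>; clip guards against small negative shot noise
    amp = np.sqrt(np.clip(n_exp[:, None] - nn_exp, 0.0, None))  # amp[p, r]
    e = 0.0
    for p in range(N):
        for r in range(N):
            for q in range(N):
                for s in range(N):
                    if p == q and r == s:
                        continue  # primed sum: skip paired (bosonic) excitation terms
                    # (pr||qs) = (pr|qs) - (ps|qr), geometric-mean amplitudes
                    e -= (eri[p, r, q, s] - eri[p, s, q, r]) * amp[p, r] * amp[q, s]
    return e
```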
## 3 Results We first tested our VQE algorithm using H\({}_{2}\) with the minimal basis set, STO-3G. Because of the symmetry constraint, this case with only two MOs does not involve any correlations with a non-bosonically excited configuration. Thus, it can serve as a baseline benchmark for our closed-shell quantum simulator algorithm. In terms of the number of qubits, only two were required, while four qubits would be needed with the conventional Jordan-Wigner encoding for handling four spin-orbitals. Because there are only one occupied and one virtual orbital in this case, one exchange gate with one mixing parameter was enough for VQE. The results at varying bond distances, in the range from 0.3 to 2.4 Å, are plotted in Figure 2, together with the corresponding FCI and the HF results. Figure 2: Dissociation curve of the H\({}_{2}\) molecule. One can clearly see that the VQE result with \(\sim\)10\({}^{5}\) shots is in an excellent agreement with the FCI energies, with nearly negligible errors of \(\ll\)1 milliHartree (mH), evidencing that our VQE algorithm is working properly. We next tested our scheme by computing the bond dissociation potential curves of a series of small molecules: H\({}_{2}\)O, LiH, N\({}_{2}\), and Li\({}_{2}\)O, with the same STO-3G basis set and the same number of shots for obtaining the expectation values of the Pauli terms. The adopted computational resources for the molecules are listed in Table 1. \begin{table} \begin{tabular}{|l||l|l|l|l|l|} \hline Molecule & H\({}_{2}\) & LiH & H\({}_{2}\)O & N\({}_{2}\) & Li\({}_{2}\)O \\ \hline No. qubits (occ,vir) & 2 (1,1) & 3 (1,2) & 6 (4,2) & 8 (5,3) & 12 (4,8) \\ No. parameters (\(\theta_{ia}\)) & 1 & 2 & 8 & 15 & 32 \\ No. two-qubit gates & 3 & 6 & 24 & 45 & 96 \\ \hline \end{tabular} \end{table} Table 1: Quantum resource requirements for tested small molecules. In this work, we adopted a quantum simulator, IBM-Qiskit [24], and thus with actual quantum hardware, the optimal numbers of shot averages will differ depending on the fidelity and the coherence time of the hardware. The first tested molecule, LiH, serves the purpose of checking how much correlation energy can be recovered with our scheme. In this case, there are six MOs with the STO-3G basis. VQE simulations were performed by a circuit consisting of six qubits, corresponding to two occupied and four virtual orbitals. The ground state energies were obtained in the Li–H distance range of 0.5 to 2.4 Å and are shown in Fig. 3. As shown in this figure, hereafter we will designate the VQE energies based on HF orbitals with "vqe" and the ones after orbital optimizations with "oo-vqe". When \(E_{\rm nB}\) in eq 12 is added, of course,
The next tested molecule, H\({}_{2}\)O, has seven MOs with the minimal basis set. Of the seven, oxygen \(1s\) hardly participates in forming the bonds, and we excluded this core orbital from the VQE calculations via the frozen-core approximation. Hence, VQE was performed by mapping six MOs (four occupied and two virtual ones) to as many qubits. Thus, there were 8 parameters in applying the exchange gates between the occupied and virtual orbitals, with a total of 24 two-qubit gates. Figure 4 shows how the molecular energy changes by varying the H-O bondlength at a fixed H-O-H angle of 104.45 deg. From this figure, we can again see that the paired ansatz by itself (vqe) displays a significant deviation from the FCI result. Much of the discrepancy is fixed with the subsequent orbital optimization (oo-vqe), and the non-bosonic correction that we propose here almost correctly recovers from the remaining error, with the largest deviation from the FCI curve being barely noticeable from the figure. Interestingly, the non-bosonic correction without performing the orbital optimization ("vqe-nB") also displays quite a reliable agreement in this case. Now, let us move on to a larger system Li\({}_{2}\)O. Indeed, the molecule is practically related to the operation of Li-air batteries, and enabling quantum simulations of battery materials will be of significant industrial interest [11]. Similarly to H\({}_{2}\)O, by applying the frozen core approximation for \(1s\) orbitals, we were able to conduct VQE simulations with only 12 qubits. As they represented 4 occupied and 8 virtual orbitals, a total of 32 parameters matched with 96 two-qubit gates were needed. The ground state energies at varying Li-O bond distances are shown in Fig. 5. Compared to H\({}_{2}\)O, because there are more virtual orbitals available, Figure 3: LiH ground state energy according to a function of the Li–H distance. we can expect that the contribution of the non-bosonic excitations will be larger in this case. Indeed, from the figure, we can see that vqe or oo-vqe does not reproduce the FCI PES that well. Instead, oo-vqe-nB is reproducing FCI much better with errors within \(\sim\)10 mH. How well each electronic configuration in the CI expansion is represented in VQE by the non-bosonic correction terms will be discussed in more details in a later section. Now, let us consider the dissociation curve of N\({}_{2}\) to further confirm the utility of our approach. In fact, N\({}_{2}\) with its triple bond has been considered as one of the most difficult systems to model with electronic structure theories, and accordingly, it will act as a stringent test case for us. Not surprisingly, even the CCSD(T) method fails drastically in describing this triple bond dissociation because it is still a single-reference approach (Fig. 6). On the contrary, our VQE results with the frozen core approximation reasonably reproduce the FCI results, with the equilibrium bond length and the entire energetic features being in good agreements. We note, however, that the energies stretched from the equilibrium geometry (\(R_{\rm N-N}\sim 1.2\) A) displays an error of 30 - 50 mH. We will defer commenting on this quantitative discrepancy to a later section, as a more detailed analysis about this mismatch will be covered in the Discussion section. 
## 4 Discussion and Summary In order to see how our method that separately treats bosonic and non-bosonic excitations constructs the electronic structure, with Li\({}_{2}\)O at 1.6 A separation, we compared the populations of each excited configuration with FCI and VQE. The largest population among bosonically excited configurations was associated with the excitation involving the 7-th and the 13-th MOs (in the order of canonical orbital energies), with the FCI population of 0.11156. The same population from vqe can be compared as \(\sim\)0.11244, which is in close agreement with the FCI value. The second largest population, involving an excitation between the 6-th and the 12-th HF MOs, has essentially the same population with 0.11156 for Figure 4: H\({}_{2}\)O ground state energies with differing O–H distances. The two bond lengths were kept identical to each other. FCI and 0.11142 for vqe. The third populations similarly compare favorably as 0.04982 and 0.04628. Therefore, FCI and our VQE scheme show very similar electronic structures. The situation with N\({}_{2}\) was somewhat different. At its minimum energy geometry (\(\sim\)1.2 A), the Figure 5: Li\({}_{2}\)O ground state energy at varying Li-O distances. Figure 6: Ground state energies of N\({}_{2}\) at varying bond lengths. contribution by a quadruple excitation term was quite important with FCI. Namely, a double pair excitation from the 6-th and the 7-th (occupied) MOs to the 8-th and the 9-th (virtual) MOs contributed by \(\sim\)30% as large as the most important bosonic double excitation (7-th \(\rightarrow\) 9-th). While our method could still accommodate such an excitation in pairs and thus the result was not too bad, the situation will likely increase the importance of pair-broken multiple excitations. Thus, our approach started to deviate from the exact answer in stretched geometries as higher excitations become more important. Of course, the fidelity of the simple correction with Eq 11 will also deviate. Thus, we note that treating a triple bond still remains as a difficult problem. We also wish to point out the effects and the limitations of our bosonic mapping and non-bosonic correction. It has been discussed that the bosonic mapping is a powerful tool and can qualitatively reproduce the dissociation curves of simple molecules [17]. However, it is destined to underestimate the correlation energy because it only includes a subset of properly treated excitations. Interestingly, our non-bosonic correction mostly shows quite good agreements with exact answers at least for the single-bonded molecules, as exemplified with H\({}_{2}\)O and Li\({}_{2}\)O. In contrast, in the cases of molecules involving double or triple bonds, our correction term did not work that well. Especially, in the case of N\({}_{2}\) with a triple bond (Fig. 6), similar results were obtained whether the correction term was added or not. Even in such cases, however, our approach can still be considered meaningful in terms of reducing the required resources for simulations. Although several reports have shown that quantum unitary coupled cluster singles and doubles (UCCSD) can reproduce the dissociation curves of some small single-bonded molecules and even N\({}_{2}\) within a few mH error [26, 27, 28, 29], the required number of gates were about \(10^{4}\) - \(10^{5}\)[26], which are still distant from the practical applicability with the currently available NISQ devices. 
It is also interesting to note that various symmetry-restricted versions of quantum UCCSD approaches show errors in the tens of mH regime [26]. Our model actually provided curves with an at least similar level of errors, but with only the half number of qubits and tens of two-qubit gates. Indeed, our resource requirements will likely be much more reasonable with the presently available devices. In summary, we have proposed a VQE quantum simulation method that uses a qubit to map a spatial MO in closed-shell molecules with a rather simple correction. The method requires only a half number of qubits compared with conventional Jordan-Wigner or Bravyi-Kitaev mapping based methods. Using the method, dissociation curves of LiH, H\({}_{2}\)O, and Li\({}_{2}\)O were obtained, demonstrating errors within \(\sim\)10 mH in comparison with FCI. The number of gates and optimization parameters are proportional to \(O(N^{2})\), which is significantly less than the more conventional \(O(N^{3})\) or \(O(N^{4})\) scaling [22] and is reasonably accessible with actually available physical qubit systems. We wish that our method can be of help in further advancing techniques in the NISQ era that we have already entered.
2305.03680
"Un-Equal Online Safety?" A Gender Analysis of Security and Privacy Protection Advice and Behaviour Patterns
There are indications in literature that women do not engage with security and privacy (SP) technologies, meant to keep them safe online, in the same way as men do. To better understand this gender gap, we conduct an online survey with N=604 U.K. participants, to elicit SP advice source preference and usage of SP methods and technologies. We find evidence of un-equal SP access and participation. In particular, advice from intimate and social connections (ISC) is more prevalent among women, while online content is preferred by men. ISC do not closely associate with nor predict the use of SP technologies, whereas online sources (such as online forums, reviews, specialist pages and technology adverts) and training do. Men are also more likely to use multiple advice sources, that enhances the likelihood of using SP technologies. Women are motivated to approach ISC due to their perceptions of the advisor (such as IT related expertise, experience and trustworthiness) while men approach ISC to evaluate options and seek reassurance for their own practices. This research raises questions about the equity of online safety opportunities and makes recommendations.
Kovila P. L. Coopamootoo, Magdalene Ng
2023-05-05T16:50:35Z
http://arxiv.org/abs/2305.03680v1
# "Un-Equal Online Safety?" A Gender Analysis of Security ###### Abstract There are indications in literature that women do not engage with security and privacy (SP) technologies, meant to keep them safe online, in the same way as men do. To better understand this _gender gap_, we conduct an online survey with N=604 U.K. participants, to elicit SP advice source preference and usage of SP methods and technologies. We find evidence of un-equal SP access and participation. In particular, advice from intimate and social connections (ISC) is more prevalent among women, while online content is preferred by men. ISC do not closely associate with nor predict the use of SP technologies, whereas online sources (such as online forums, reviews, specialist pages and technology adverts) and training do. Men are also more likely to use multiple advice sources, that enhances the likelihood of using SP technologies. Women are motivated to approach ISC due to their perceptions of the advisor (such as IT related expertise, experience and trustworthiness) while men approach ISC to evaluate options and seek reassurance for their own practices. This research raises questions about the equity of online safety opportunities and makes recommendations. ## 1 Introduction Digital technologies are a powerful driver of gender equality with the potential to give women and girls access to information, opportunities and resources. But the gender divide persists worldwide, often because of social and gender norms and deep-rooted gender stereotypes, resulting in gendered use of technology [7]. However, while the digital divide with respect to gender is said to be decreasing in developed countries [89], literature offers (only) a handful of empirical research that demonstrated that women are poorer in engaging with protective security and privacy (SP) technology although they report more concern [64, 20, 66] compared to men, such as in using firewalls and spamfilters [64], tracking protection [20], or engaging with technical privacy protection [66]. In parallel, there are reports that women are more at risk of online harms than men [63], and calls by both international and local organisations for action to ensure women's safety online, such as the United Nation Development Fund (UNDP), which advocates that it is not enough for women and girls to simply have access to technology and digital skills - _"they must also become active agents of change to create a safer and equitable digital future for all"_ and have called for a safe, affordable, and inclusive Internet [87]. In the U.K., the regulator of communications, Ofcom, is also urging technology firms to take actions to keep women safe online [62]. So far there has not been much usable SP research effort specifically dedicated to understanding the gender discrepancy, albeit [66, 20, 64]. This paper reports on research providing insights into the characteristics (what) and causes (why) of this gender discrepancy, thereby demonstrating how technology stereotypes [1] and SP stereotypes [91] play out in practice. The first part of our investigation focuses on SP access and use via the research questions: * **RQ1**: What advice source do individuals use for SP protection, given their gender differences? * **RQ2**: What SP technologies and methods do women versus men use? * **RQ3**: How does advice source associate with and impact SP usage, given gender differences? 
In the second part, we look into why individuals seek protective SP advice from intimate and social connections (ISC), referring to family / partners, friends, colleagues and social acquaintances, and what kind of advice they receive, via: * **RQ4**: For what reasons do women versus men approach ISC for protective SP advice? * **RQ5**: What type of advice do women versus men receive from ISC? We focus on ISC advice because, while there are indications that advice from ISC may not cover how to use protective SP [93, 56], ISC remains an accessible and popular (and therefore valuable) source [93], and previous research has already looked into the quality of SP advice from the web [80]. **Contributions**: The main distinguishing contribution of this work, vis-a-vis previous research with regards to SP advice [72, 78, 80] or socially supported SP [56, 60, 93], is our relatively large-scale binary gender focus on SP protection access, in particular from ISC, and its implications on SP usage, thereby complementing and extending the few gender analyses in SP research [20, 64, 66, 91], which have also looked at women versus men but have not addressed advice source. Overall this research (1) provides evidence that women and men have diverging access and participation patterns with SP - which can contribute to the in-equity of online safety and supports arguments of women being more at risk online compared to men, (2) supports local and international calls for action to keep women safe online, and (3) makes recommendations for multi-stakeholder actions. **Summary of Findings.** **(1) Women access protective SP differently to men.****(a)** We find evidence that women prefer advice from ISC whereas men prefer online content. **(b)** Family (only or in combination with another source) is the most reported advice source for women, whereas general research is the most reported for men. **(c)** Women are more likely to report not using any advice source, while men are more likely to report using multiple sources. **(2) Women use SP technologies differently to men.** Women are more fluent with simple or builtin SP (such as privacy settings, HTTPS, builtin security, security software updates, or passwords) and non-technology methods for SP protection, compared to men who are fluent with a wider spectrum of SP protection, including more sophisticated methods (such as firewall, VPN, anti-spyware, anti-malware, anti-tracking or multiple factor authentication). **(3) Online advice and the number of advice sources influence the use of SP technologies.****(a)** A preference for advice from ISC does not predict the use of SP technologies, whereas online advice shows 3 to 11 times enhanced likelihood of using SP technologies. **(b)** An increase in the number of advice sources used, from 1 to 3, gradually increases the likelihood of using SP technologies. **(4) Different motivation for ISC advice.** Women are 3 times more likely to approach ISC for SP protection, where their motivation (perceived expertise of advisor, experience and trustworthiness) suggests reliance on ISC, while men approach ISC to evaluate options and seek reassurance for their own practices. **(5) Different themes of SP advice.** A higher % of women receive authentication advice, whereas a higher % of men receive malware, fraud and communication / network privacy advice. **Note.** This paper does not make value judgements on sources of SP advice; ISC advice is not inherently poorer than online advice.
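The likelihood-style findings above (e.g., "3 to 11 times enhanced likelihood") are the kind of quantities obtained from logistic regression over survey responses, where exponentiated coefficients read as odds ratios. The sketch below is purely illustrative of that style of analysis; the file, column names, and model specification are hypothetical and are not the authors' actual analysis pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey frame: one row per respondent, with binary indicators
# for advice-source preference and for use of a given SP technology.
df = pd.read_csv("survey_responses.csv")

# Odds of using an SP technology given advice sources, number of sources, and gender.
model = smf.logit(
    "uses_sp_tech ~ advice_online + advice_isc + n_sources + C(gender)",
    data=df,
).fit()

print(model.summary())
print(np.exp(model.params))  # odds ratios, read as 'X times more likely' statements
```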
## 2 Background

In this section, we look into literature addressing security and privacy (SP) inequities, review previous work on SP advice, and consider social connections in relation to protective SP.

### (Gender) Inequity in Security & Privacy

While the digital divide broadly conceptualises how societal diversity imparts differential technology access, skills and outcomes, thereby reinforcing societal inequalities [89], research across disciplines has touched on inequity with respect to privacy [13, 76], with particular mention of gender [4, 20, 50, 64, 66].

First, a handful of recent studies have conceptually addressed privacy inequality, in particular using the conceptual scholarship of the digital divide to discuss how privacy contributes to deepening or reproducing one's social standing, framing privacy inequity as a cause or consequence of unequal digital participation [67], arguing that the unequal distribution of privacy is a societal problem, where online privacy is sensitive to social inequalities pertaining to age, education and gender [13], and showing how trusting others with personal data, concerns about negative or exploitative online experiences, and feelings of control over one's data influence differential web uses [76].

Second, feminist perspectives have a long history of questioning whether online privacy is on the same side for women as it is for men [4, 50]. In particular, they argue that historically women have had the 'wrong kind of privacy', such as isolation and confinement, while what they merit morally and politically are 'the right kinds of privacy', namely meaningful opportunities for choiceful seclusion, intimacies, and legal rights of decision about personal life and health, and that this true privacy for women would further equality and entail radical transformation [4, 5].

Third, empirical research has highlighted that even though women are more concerned than men about their privacy online, they are less likely to engage with protective technologies [8]. In particular, women are more likely to report using less technical, non-technology and simple means to protect themselves online, compared to men, who employ more diverse technology means [64, 66, 20]. Self-reported cybersecurity behaviour also differs between genders among employees in organisations [6], and in general individuals hold gender stereotypes with regards to SP, where men are expected to be more engaged with SP topics or to behave in SP-enhancing ways, while women are expected to have poor SP confidence [91].

### Security & Privacy Advice Source

Previous research has evidenced that individuals access security and privacy (SP) advice from various sources, often covering various aspects of SP. Advice sources and ways for individuals to learn about SP can be considered as (1) informal, referring to family, friends [23], coworkers [70], peers [60], or news articles; (2) semi-formal, referring to webpages from third parties (including online Government webpages, retailers, vendors of software, or security- and privacy-focused organisations such as Privacy International) [34, 42, 72]; and (3) more formal sources, including training and education. While a high percentage of internet users are not aware of protective SP technologies online [18], other SP information, such as the source of threats, how they materialise and their consequences, may be gained from informal sources.
In particular, personal stories from non-experts (family, friends, peers) have been seen to focus on who the source of threat is, while news articles focus on noteworthy descriptions of incidents and advice relevant to the wider society [72]. Of semi-formal sources, webpages are thought to be the most authoritative, aiming to educate Internet users who turn to them when seeking expertise online. They communicate concerns that organisations or governments think non-experts should be aware of, such as Privacy International [71], the Electronic Frontier Foundation [26], Microsoft [54] or the U.K.'s National Cyber Security Centre [57].

In general [77], within households [56], for particular demographics [53, 56, 59, 61], or in technology contexts involving collective ownership of personal information such as social media and smart homes [24, 61], family, friends or colleagues have been found to be a prevalent source of SP advice. Within SP protection contexts such as choosing passwords, authentication, antivirus use and software updating, device / software prompts have been seen as a prevalent source of advice, followed by online sources, print, TV news or articles, and online forums [77].

### Social connections & Protective SP advice

The social dimension of protective behaviour is grounded in the individual's _social capital_. Social scientists share the understanding that social capital consists of resources embedded in social relations and social structure, which can be mobilised when individuals wish to increase the likelihood of purposive actions [49]. In particular, social capital is the total actual or potential resources individuals have access to through their social network [11], and it includes physical, emotional (instantiated as emotional or social support) and informational (such as advice or novel information) resources [49]. Social capital has been investigated with regards to SP in the context of negotiating privacy and information disclosure on social network sites [14, 27, 86], while recent research has conceptualised how social groups (intimate relationships, families and households, social acquaintances and the public) influence others' SP behaviours, in particular via reliance on these social connections for advice and knowledge [93].

_Within households:_ Members of households providing informal support to those within the household have been referred to as _SP stewards_ or _self-appointed technology managers_ [23, 56], who support less technically versed household members (such as older adults) and may also establish guidelines for technology usage for the whole family [56]. However, SP stewards are themselves not SP experts and may have gaps in knowledge, and often focus not on SP technologies (tools and settings) but on past experiences. They may also impose their threat and protection models on household members and control technology use, thereby suggesting paternalism and loss of digital agency. For younger age groups, parents have been seen to provide advice and help to their children [47]. Overall, it is thought that those with lower internet skills take advice from family and friends and engage in fewer SP practices, thereby potentially increasing the vulnerability of these disadvantaged users [77].

_Outside the home:_ Older adults are thought to seek advice from peers such as _cyberguardians_, who are members of a community providing peer-to-peer SP support [60], and who prioritise the availability of a provider of information [59, 61].
Protective SP information from social circles has also been found in notifying others about what has been experienced [23], learning lessons from others' shared stories [73], seeking out information in response to a security incident [75], or being influenced by others' security feature adoption, such as on social media [24]. In addition, those with higher internet skills and socio-economic status are thought to take advice from their workplace and have the technical skills to learn from experience [77]. While, overall, user characteristics including age [56, 60], education level [78], skills and socio-economic status [77] impact from whom individuals take SP advice, and the advice source in turn impacts the type of advice individuals receive and consequently their experience of SP [78], to our knowledge previous research efforts have not focused on how women versus men receive, seek or gain SP advice and how such behaviour impacts their protective behaviour.

### Research Gap & Contributions

First, this paper expands the third category of research described above in Section 2.1, empirically extending knowledge on the previously identified 'gender gap' [64, 20, 66], but differs from that work in focusing on the advice-seeking and broad usage behaviours named by participants themselves within a U.K. sample. In comparison, Wei et al. looked into the biased assumptions and stereotypes people hold about SP behaviours with a U.S. sample [91], Coopamootoo et al. focused on feelings and behaviour with regards to online tracking only, with a U.K. and European sample [20], Oomen & Leenes looked at how risk perceptions associate with behaviour across pre-defined SP strategies such as anonymous remailers within a Dutch sample [64], and Park investigated how privacy behaviour and confidence differ by gender across a U.S. sample [66]. In addition, we look into inequity in both access and outcome, thereby also contributing to the digital divide literature [89].

Second, this research also complements the growing usable SP literature investigating SP advice sources [80] and social SP [93], in particular research from (1) Rader et al. about the diverse types of SP advice across sources [72], by providing a granular view of how SP advice source choices link to protection; (2) Redmiles et al. about the impact of socio-demographics on advice source preference and the prevalence of a security divide across skill levels [78], and Geeng et al. about the barriers to the effectiveness of SP advice for the LGBTQ+ community [35], by providing a deep binary gender analysis; (3) Redmiles et al.'s suggestion that individuals may feel that ISC are experts [77], by providing a granular view of the reasoning for choosing ISC advice versus not; and (4) Murthy et al.'s [56] and Nicholson et al.'s [60] qualitative investigations of how social connections support SP protection, by providing a larger sample study with mixed methodology.

## 3 Method

We designed and conducted a user study via an online survey methodology in May 2021.

### Survey Design

The first page of the survey consisted of an information section, followed by opt-in consent. This was followed by five parts of mandatory questions, described below.

**Part 1:** We elicited engagement with SP via an open-ended question, "_What privacy and security methods or tools do you most often use online?_", inviting participants to name three to five of them, similar to previous research where participants were found to provide between three and five privacy methods when queried [18].
**Part 2:** We queried participants about the ways they become aware of SP methods and technologies via another open-ended question, "_How do you usually become aware of, learn about or find technologies/methods for protecting your privacy and security online?_", inviting participants to name all the sources they use.

**Part 3:** We then asked participants if they would approach ISC if they needed SP advice, via a Yes/No response, followed by an open-ended question on their reasoning. The questions were set as "_At any time, if you need advice/help to protect your privacy and security online, do you usually approach your intimate and social connections (such as family, friends, coworkers) for help?_" and "_Please explain why you replied Yes or No to the previous question._"

**Part 4:** We further queried into the examples and types of advice they had received before, via "_If you have received advice/support from intimate and social connections, about technologies/methods to protect your privacy/security online, please provide examples of these advice/support._"

**Part 5:** The last section of the survey consisted of (1) demographic questions on age, gender and computing/IT background; followed by (2) a digital skills questionnaire via Van Deursen et al.'s Internet / digital skills instrument, as provided in [88], which consists of 35 items organised across 5 digital skills (operational, information navigation, social, creative and mobile), administered with a 5-point Likert scale from 'Not at all true of me' to 'Very true of me'. Compared to previous research on the influence of _technical skills_ (measured via technical web skills [40]) on advice source preference [77], our choice of the digital skills instrument [88] follows Helsper's theoretical model of the digital divide [41], with attention to the relationship between digital skills and engagement with ICT, and whose development involved a U.K. sample [41, 88]. (3) We also administered Franke et al.'s Affinity for Technology Interaction (ATI) scale [32] (to our knowledge novel in the SP context), which consists of 9 items, administered with a 6-point Likert scale from 'completely disagree' to 'completely agree'. ATI assesses individuals' tendency to actively engage in intensive technology interaction (that is, how they would approach, avoid or cope with new technical systems) [32], as a key personal resource for coping with technology.

We note that the Prolific selection criterion was set to women and men only, but the gender survey question was open to 'women, men, non-binary or other', with only 1 participant identifying as non-binary (not considered in the analysis).

### Participants

We recruited \(N=600+\) participants via Prolific Academic's UK sample pool. The study lasted between 10 and 15 minutes. Participants were compensated at a rate of £7.5 per hour. After removing 14 incomplete responses and 1 response self-reporting as non-binary, we ended up with a sample of \(N=604\) participants, with 89% identifying with white ethnicity.
Our sample was balanced with the binary gender of men and women, with approximately 50% women and 50% men (more specifically \(n=303\) women and \(n=301\) men), to expand knowledge on the previously identified gender gap, which specifically compared women to men [64, 20, 66]. We also balanced our sample across age, with approximately 10% of participants in each of 10 age groups from 18 to 65+, because previous research has shown that age impacts where individuals gain SP advice, with a particular influence on whether they gain advice from family and friends [56, 59, 23, 60].

The sample had a higher proportion of university graduates (at 51.1%) than the UK population (noted at approximately 42% [31, 22]), including 38.2% of university degrees at undergraduate level, 11.8% at masters level and 1.5% at doctorate level. Education level was similar across gender, as shown in Table 5 in the Appendix, where 51.2% of the women group versus 51.9% of the men group had a university degree (undergraduate to PhD combined). This differs from the U.K. population, where women are more likely to go to university than men [2, 68, 12].

Overall, 16.5% of participants (n=100) reported having an IT / computing background, that is, having education or working within the field of IT, computer science or computer engineering, which pertained to approximately 10% of the women group and 23% of the men group. The gender difference in IT / computing background in our sample is much smaller than that in the U.K. population [92]. (Note that in the results reported in Section 4, the gender differences observed were the same for the whole N=604 and the n=504 without IT / computing background.)

We also measured participants' digital skills via Van Deursen et al.'s Internet / digital skills instrument [88] (detailed further in Section 3.1) across 5 skill types. There was no difference in information navigation, social and mobile digital skills, but there was a slight difference in operational (mean difference = 1.7) and creative (mean difference = 3.5) digital skills between genders. The scale reliability Cronbach \(\alpha\) across the 5 digital skills varied between .824 and .910. In addition, employing Franke et al.'s Affinity for Technology Interaction (ATI) scale [32] (detailed further in Section 3.1), we observed a significant difference in ATI between our women (\(M=3.37\), \(SD=.92\)) and men (\(M=3.86\), \(SD=.97\)) participants, with men responding with higher ATI (\(p<.001\)). ATI had a Cronbach \(\alpha\) of .825.
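For concreteness, the subscale scores and reliability coefficients reported above can be computed with standard R tooling. The following is a minimal sketch, assuming a hypothetical data frame `responses` with one column per Likert item; the item names are illustrative and `psych` is one common package choice, not necessarily the one used in this study.

```r
# Minimal sketch: subscale scoring and Cronbach's alpha in R.
# 'responses' and the item names (op1..op7) are hypothetical stand-ins
# for the coded survey data; psych is one common implementation choice.
library(psych)

# Items belonging to one digital-skills subscale (e.g., operational skills).
operational_items <- responses[, c("op1", "op2", "op3", "op4", "op5", "op6", "op7")]

# Internal-consistency reliability; cf. the alpha values of .824-.910 above.
alpha_result <- psych::alpha(operational_items)
alpha_result$total$raw_alpha

# Per-participant subscale score, usable in later group comparisons.
responses$operational_score <- rowMeans(operational_items, na.rm = TRUE)
```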
### Ethics

Our study protocol was approved by Newcastle University's Ethics Board before the research commenced. We also followed the ethics guidelines of King's College London, where the first author was based, for the full data analysis and write-up phases. We sought participants' opt-in consent for data collection prior to their responding to the questionnaire and did not collect identifying information. Participation in the study was voluntary and anonymous, and our participants could drop out of it at any stage.

### Data Analysis

We employed both qualitative and quantitative analyses, which we describe in this section.

**Qualitative.** The free-form responses collected for parts 2 to 4 of the survey were analysed via a process of inductive content analysis [83, 58], where for each question we (1) read each response, extracted themes and synthesised responses across categories, (2) developed a codebook which was iteratively refined, (3) coded all the responses with the help of 2 coders, and (4) computed inter-rater reliability (IRR). Part 1's elicitation of SP methods and technologies followed a simple identification of the SP method named. Our coding approach provided evidence of the presence of codes, which does not provide evidence for their absence. We alleviated the potential effects on our findings by specifically asking participants to say whether they do not use or are not aware (that is, have no knowledge of having used an SP technology or method), and similarly for advice source, as detailed in Section 3.1 and the questionnaire in the Appendix.

_Part 1 - SP methods and technology use:_ Overall, we collated participants' reports of SP usage into a total of 26 distinct tools and methods. We categorised the tools and methods into _technological_ methods and _non-technological_ methods (summarised in Table 7 of the Appendix). We define _SP technologies_ as those that rely on algorithms or software programming; in other words, this is where the technology itself has the ability to protect one's privacy and security. We define _non-technological SP methods_ as comprising human behaviours and strategies.

_Part 2 - Advice Source for Protective SP:_ We find that individuals become aware of, learn about or find technologies/methods to protect their security and privacy via a list of sources, including those named within previous research [72, 77, 78, 79, 93]. We identified (i) a family, friends, co-workers and other social connections category, similar to [61, 56, 93], that we group under _intimate and social connections_ [note that family included partners, parents, children or siblings]; (ii) an online content category (similar to [80, 72]); and (iii) an 'other methods' category that included news and training, as summarised in Table 1. IRR Cohen \(\kappa=.950\).

_Part 3 - Motivation for seeking ISC advice:_ We categorised participants' rationales for choosing ISC advice as (i) perception of the technology skills and experience of their ISC, (ii) perception of the qualities (such as trustworthiness, availability, helpfulness) of their ISC, (iii) perception of their own skills, or (iv) other reasons. We categorised participants' responses for deliberately not choosing ISC advice according to (i) their reference to their own skills, such as self-reliance and confidence, or (ii) other reasons, such as a preference for another source. This complements previous research suggesting that individuals use SP advice sources based on their own education and skills [77], their trust and convenience perceptions of these sources [77], and the perceived skills of ISC, who may not actually be SP experts [56]. IRR Cohen \(\kappa=.861\).

_Part 4 - Example Advice from ISC:_ We categorised participants' reports of SP advice received from ISC as (i) specific to an aspect of SP or (ii) general. Specific SP advice referred to an aspect of SP protection, such as authentication, anti-malware, SP of communication, email and others, as described in Table 11 of the Appendix. Responses in the general advice category did not mention a specific SP aspect, but rather included general SP aspects, such as best practices, how to keep safe online, warnings, installations, or to do it for the participant (Figure 8). IRR Cohen \(\kappa=.907\).
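The inter-rater reliability figures quoted after each coding step (Cohen \(\kappa\) of .861 to .950) follow the standard two-coder computation; a minimal R sketch is shown below, assuming hypothetical vectors `coder1` and `coder2` holding the category labels the two coders assigned to the same responses, with the `irr` package as one common choice.

```r
# Minimal sketch: Cohen's kappa between two coders.
# 'coder1' and 'coder2' are hypothetical label vectors of equal length,
# one entry per coded response.
library(irr)

ratings <- data.frame(coder1, coder2)
kappa2(ratings)  # unweighted Cohen's kappa; cf. the .861-.950 values above
```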
**Quantitative.** The occurrences of themes across gender, advice source and SP use (methods / technologies) were used in visualisations to depict quantitative differences, in \(\chi^{2}\) tests and multivariate analysis to depict associations, and in logistic regressions to show predictive influence. We tested regression assumptions, for example computing the collinearity statistics for the list of sources (referring to the regression results in Section 4.3.2), where VIF ranges from 1.02 to 1.14 across sources, signifying little correlation between sources. In addition, given the lack of prior estimates of the effect of gender on SP sources, we used the rule of thumb of >500 participants and at least 10 cases per IV. Further, we note that content analysis has similarly contributed to empirical research [43, 46, 10], including large-scale quantitative research in the area of SP [53, 18, 19, 20].
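As an illustration of the collinearity check described above, the sketch below fits a logistic model with binary source indicators and inspects variance inflation factors. The data frame `dat` and its column names are hypothetical stand-ins for the coded data, and `car` is one common implementation choice rather than necessarily the package used here.

```r
# Minimal sketch: VIF-based collinearity check on a logistic regression.
# 'dat' and all column names are hypothetical; each source column is 0/1.
library(car)

m <- glm(uses_sp_tech ~ family_src + friends_src + general_research +
           specialist_pages + online_forums + online_reviews + training,
         family = binomial, data = dat)

car::vif(m)  # values near 1 (1.02-1.14 reported above) indicate little collinearity
```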
### Limitations

**Participant characteristics:** The sample had a level of digital skill sufficient to hold an account on the Prolific platform and was more highly educated than the UK population. A different sample may show variation in SP access and engagement.

**Data elicitation:** This study relies on self-reports, where the particular SP context may be fraught with gender stereotypes about confidence [91], resulting in over-reporting of technology engagement by men and under-reporting by women. In addition, social desirability bias may have a diverging influence between genders, where women admit more readily to receiving advice from ISC, or are more likely to remember receiving help from family, thus conforming to help-seeking stereotypes across genders. The impact of stereotypes on our findings is considered further in the Discussion, Section 5. Self-report surveys have nonetheless been a key method of gathering insights into SP experiences over the years, including research on SP advice [77] and SP usage [18] relevant to this research, and are widely used to elicit rich insights into human experiences, perceptions or stereotypes in SP research [18, 78, 91, 77].

**Analysis:** Our coding for the presence of codes may be argued to come with limitations, which we discussed in Section 3.4. However, we note that our questions were mandatory and had an option for 'don't know', 'none' or 'other'.

## 4 Results

### Advice Source Preference

We investigate RQ1, that is, what advice source(s) individuals use for SP protection, given their gender differences, in 2 stages: (i) we first describe the categories and types of sources named by participants (we refer to responses from women participants as W# and to those from men participants as M#); (ii) we then look into the gender differences in source type preference and number of advice sources used, including the patterns of multi-source usage.

#### 4.1.1 Description of Advice Sources Used

We summarise the categories and types of advice source in Table 1, describe them below and provide example responses in Table 9 in the Appendix.

**Intimate & social connection / face-to-face.** 37.9% of participants reported the ISC category of advice source, with family and friends most named, as shown in Table 1. 'Family' includes asking or receiving advice from participants' children, partner, sibling or parent. Friends and colleagues are self-explanatory. 'From work' includes participants speaking about finding out about SP methods and technologies from work, referring to their employers and workplace practices / recommendations, and 'face-to-face/offline' includes participants supported by a known contact outside of their family, friends or work realms.

**Online Content.** 53.6% of participants reported gaining protective SP advice from online content, including general Internet searches, by accessing specialist pages, through online reviews and expert recommendations, advice in online forums, and content shared on social media.

* General research: (a) using an Internet search (Google), (b) starting with a general search that then leads to other online content offering a comparison or review of protective methods, or (c) researching protective SP when perceiving a threat.
* Specialist pages: reports of using technology webpages and blogs, or webpages providing targeted SP protection advice.
* Online reviews or recommendations: mentions of expert reviews/recommendations.
* Technology adverts and shared company information: (a) technology adverts, (b) targeted emails / information sent by technology companies or service providers, (c) reputable brands or retailers.
* Social media content: SP advice shared on social media generally, whether shared informally by people they may not personally know or more formally by an organisation such as the police.
* Online forums: general mentions of online forums, technology-related forums or Reddit in particular.

**Other methods.** 18.8% of participants reported finding protective SP advice via other media such as the news, shows on TV, training, prompts via their software or device, or consumer magazines.

* News & TV shows: newspapers, general or technology-focused online / TV news, and TV documentaries or programmes.
* Training: (a) attending a course or training at school or as part of an organisational requirement at work, (b) having a computing / SP background, or (c) their own (IT/SP-related) role at work, thereby implying training.
* System prompts and settings: (a) 'waiting' to be prompted / advised by the software / device they use, or (b) just looking into their privacy/security settings.
* Consumer magazine: non-IT magazines targeting consumers, such as 'Which?' in the U.K.

**None.** Responses were categorised as 'none' when participants claimed (a) not to be aware of SP tools and methods or not to look for any advice, or (b) not to look for SP but to use what is perceived as already integrated in their device.

#### 4.1.2 Gender Patterns & Differences

We _first_ visualise participants' overall responses about preference for SP advice sources across gender, _second_ we compute binomial logistic regressions depicting gender differences, and _third_ we compare multi-advice usage across gender.

**Visualisation of Gender Patterns.**
Figure 1 (supported by Table 6 of the Appendix) shows clear patterns of differences between women and men, namely that:

* a higher proportion of those reporting sources in the ISC category, that is, their family, friends, offline advice (such as a computing support person they know), colleagues or work practices, are women compared to men;
* a higher proportion of those reporting to find out about SP methods and technologies via the online content category, such as general research, specialist pages, online forums and reviews, technology adverts or shared online content, are men compared to women; and
* among those reporting advice from friends, approximately the same proportion are men and women.

Figure 1: _participant named sources_ across gender

In interpreting Figure 1, we note that the proportion depicted is based on the number of participants reporting each advice source (that is, the base rate), as shown by Table 6, where, for example, more than twice as many participants reported consulting family (n=101) as using shared online content (n=48) or online forums (n=46). Although the number of men reporting shared online content versus family is only n=8 higher, we still note that twice as many men as women reported shared online content, and three times as many women as men reported advice from family.

**Regression Model.** We compute binary logistic regressions with independent variable (IV) women versus men (a categorical variable, with men as baseline) and dependent variable (DV) each of the advice sources (a categorical variable set at 1 when the advice source is named, and 0 when not named). The models with family, work/colleagues, general research, specialist pages, online forums, shared online content and technology adverts are statistically significant (as highlighted in Table 2), meaning that the models with gender as predictor fit the data significantly better than the intercept-only models.

| Source | OR | 95% CI | p-value | Sig. |
| --- | --- | --- | --- | --- |
| **family** | **3.93** | **[2.41 - 6.42]** | **<.001** | *** |
| friends | 1.07 | [0.66 - 1.57] | .940 | |
| **from work / colleagues** | **1.97** | **[1.07 - 3.63]** | **.029** | * |
| f2f / offline | 1.65 | [0.81 - 3.36] | .168 | |
| **general research** | **0.61** | **[0.42 - 0.87]** | **.007** | ** |
| **specialist pages** | **0.27** | **[0.15 - 0.49]** | **<.001** | *** |
| **online forums** | **0.25** | **[0.12 - 0.52]** | **<.001** | *** |
| online reviews | 0.84 | [0.48 - 1.48] | .552 | |
| **shared online content** | **0.47** | **[0.25 - 0.87]** | **.014** | * |
| **tech ad / brand / mag** | **0.50** | **[0.29 - 0.89]** | **.018** | * |
| news / tv shows | 0.78 | [0.46 - 1.30] | .344 | |
| training | 1.14 | [0.55 - 2.39] | .722 | |
| system prompts | 0.89 | [0.36 - 2.22] | .804 | |

_Note: Significance codes of '***' .001, '**' .01, '*' .05._

Table 2: Regression results depicting the likelihood of women (compared to men) reporting each advice source. Statistically significant models at p<.05 are highlighted. OR refers to the odds ratio between women and men (the baseline), CI is the confidence interval, and p-value refers to the effect of gender.

Compared to men, women are nearly 4X (OR = 3.93, p<.001) and twice (OR = 1.97, p=.029) more likely to report family and work/colleagues respectively as SP advice sources. However, women are also 40% to 75% less likely than men to report advice sources from the online content category, in particular general research (p=.007), specialist pages (p<.001), online forums (p<.001), shared online content (p=.014) and technology adverts (p=.018) (that is, men are between 2 and 4 times more likely to report these advice sources). These findings loosely correspond to previous indicative research [53] and compare with previous research linking lower education with a lower likelihood of coworkers or government websites as advice sources [78], where our gender groups had comparable education levels.

**Computing/IT Background.** For the subsample _without_ a computing/IT background (n=504), gender predicts advice source preference similarly as in Table 2, and in addition women are significantly more likely to receive advice from training than men. For the subsample _with_ a computing/IT background (n=100), the only significant models involve women being 5.9X more likely to receive protective SP advice from family than men, who are 6.9X more likely to receive it from online forums than women.
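A minimal R sketch of one of the per-source models behind Table 2 follows. The data frame `dat`, the gender coding and the outcome column are hypothetical stand-ins, with men set as the reference level so that the odds ratio compares women to men, as in the table.

```r
# Minimal sketch: binary logistic regression of one advice source on gender.
# 'dat' is hypothetical; 'src_family' is 1 if the participant named family
# as an advice source, 0 otherwise.
dat$gender <- relevel(factor(dat$gender), ref = "man")  # men as baseline

m_family <- glm(src_family ~ gender, family = binomial, data = dat)
summary(m_family)

exp(coef(m_family))     # odds ratio; cf. OR = 3.93 for family in Table 2
exp(confint(m_family))  # profile-likelihood 95% CI; cf. [2.41 - 6.42]
```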
**Multi Advice Use.** Table 3 shows that 15.7% of participants report not being aware of or not using any SP advice source, nearly 50% report using only 1 source, and the rest report using more than 1 advice source for SP protection. A higher % of women than men report not being aware of or not using any SP advice source, while a higher % of men report using more than 1 advice source.

| # of advice sources | % Overall | % Women | % Men |
| --- | --- | --- | --- |
| Not aware / not use | 15.7 | 19.8 | 11.6 |
| One source only | 47.7 | 47.9 | 47.5 |
| Multiple sources | 36.5 | 32.3 | 40.8 |

Table 3: % of women and men reporting zero to multiple advice sources for SP protection

We visualise women's versus men's usage of multiple advice sources in Figure 2. Again, we notice differences across gender, namely that (1) a higher % of women compared to men gain SP advice **only** from either family (13.9%), friends (4%), an offline source (3.6%), or colleagues or work (4.3%); (2) a higher % of men than women gain advice **only** from the advice sources in the 'online content' or 'other methods' categories, except for online reviews - with 13.3% naming general research, 5% specialist pages and 4.7% online forums only; (3) general research is the most used advice source in combination with another advice source for both men and women; and (4) general research (only or in combination with another source) is the most used advice source for men, whereas family (only or in combination with another source) is the most used advice source for women.

Figure 2: % of multiple advice source usage across gender

### Security & Privacy across gender

We investigate RQ2, that is, what SP technologies and methods women versus men use. We provide the full list of SP technologies and methods named by participants in Table 7, Appendix C, and summarise the usage differences between men and women here. In Figure 3, we notice that a higher proportion of men engage with more technological SP, while women show less sophisticated SP behaviour, engaging with builtin-type SP and using non-technology methods more than men.

Figure 3: proportion of women & men who named particular SP technologies and methods

We find that men are significantly more likely to engage with technology SP, with \(\chi^{2}(1)=16.746\), \(p<.001\), while women are significantly more likely to report not engaging with any SP method, with \(\chi^{2}(1)=11.461\), \(p=.001\). There is no significant difference across gender for non-technology SP. This is supported by the contingency Table 8, Appendix C, which summarises women's versus men's engagement with SP methods.
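The gender association tests above follow the usual chi-square recipe; a minimal sketch is shown below, assuming a hypothetical data frame `dat` with a binary indicator of using at least one technological SP method. Whether the study applied a continuity correction is not stated, so the flag below is an assumption.

```r
# Minimal sketch: chi-square test of gender vs use of technological SP.
# 'dat' and its columns are hypothetical stand-ins for the coded data.
tab <- table(dat$gender, dat$uses_tech_sp)

# Continuity correction disabled here as an assumption; the paper does not
# state which variant was used. cf. chi-square(1) = 16.746, p < .001 above.
chisq.test(tab, correct = FALSE)
```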
### Implications of Advice Source Preference

We investigate RQ3, that is, how advice source associates with and impacts SP usage, given gender differences.

#### 4.3.1 Association of Advice Source and SP usage

We investigate the statistical association between advice source and use of SP technologies and non-technological methods across gender via a Correspondence Analysis (CA) [39] and depict the strength of association in the spatial map in Figure 4. We find a significant association between SP and advice source, with \(\chi^{2}=551.203\), \(p<.001\), where the first dimension (Dim1) accounts for 43.92% of the variance in the data while the second dimension (Dim2) accounts for 12.70%. Together these two dimensions account for 56.62% of the variability.

Figure 4: Spatial Map of Association between SP Technologies & Methods _(in blue)_ and Advice Sources _(in red)_

The proximity of same-colour points in Figure 4 demonstrates their similarity: advice sources (red points) that are closer together on the map have more similar SP technology and method usage profiles than those that are further apart, while the proximity of red-blue point pairs demonstrates their association. Figure 4 intuitively shows a range of advice sources (red points) with different qualities, from (1) not knowing on the left (depicted by 'don't know'); to (2) socially connected advice sources (depicted by 'family', 'friends', 'from work and colleagues' and 'f2f_offline'); to (3) advice sources potentially involving more complex skills (depicted by 'training', 'specialist pages' and 'online forums') on the far right.

The spatial map is intuitive in showing that the 'don't know' end of Dim1 (negative end of the x-axis) is more closely associated with not using any SP technologies and methods (depicted by 'donotremember-nothing'). In comparison, the socially connected advice sources of Dim1 are closer to 'other non tech', 'other tech', privacy and security settings (usually builtin), other security practice, passwords, builtin security (as well as limiting engagement and not opening dodgy emails), and the women gender group. 'News and TV shows' is more closely associated with firewall and 2FA/MFA authentication. Furthermore, the more complex skills end of Dim1 (with training, specialist pages and online forums as advice sources) is closely associated with anti-malware, VPN, anti-tracking or anti-spyware, and the men gender group.
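The spatial map can be reproduced with any standard correspondence-analysis implementation; the sketch below uses `FactoMineR` as one option, with `ct` a hypothetical contingency table of advice sources (rows) by SP technologies and methods (columns), not the study's actual data.

```r
# Minimal sketch: correspondence analysis of advice source vs SP usage.
# 'ct' is a hypothetical sources-by-methods contingency table of counts.
library(FactoMineR)

res_ca <- CA(ct, graph = FALSE)

res_ca$eig    # eigenvalues and % variance per dimension; cf. Dim1/Dim2 above
plot(res_ca)  # biplot analogous to the spatial map in Figure 4
```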
#### 4.3.2 Impact of Advice Source on use of SP technologies

We investigate how advice sources predict the use of SP technologies, (1) by the type of advice source and (2) by the number of advice sources used.

**Advice source type.** We choose to focus on SP technologies rather than the combination of technologies and methods, to assess how the different advice sources impact the use of technologies in particular. We aim to compute a binomial logistic regression with the dependent variable 'using at least one SP technology' versus 'not using any SP technology' and with the advice sources as predictors. We observe the phenomenon of 'quasi-complete separation' [51, 74, 94] in our data, where an explanatory variable almost perfectly predicts the binary outcome variable [3]. In particular, we find that advice sources such as training, online forums, online reviews, specialist pages and system prompts have _only 1 or 2_ participants not using SP technologies, and shared online content has _5 or fewer_ participants not using SP technologies, as shown in Figure 7 of the Appendix, while most participants using these sources reported using an SP technology.

Separation is a common problem in applied logistic regression with binary predictors [94] that can be addressed via a Bayesian logit regression [36, 37] with a weakly-informative Cauchy(0, 2.5) prior distribution to regularise the coefficients, as suggested by Gelman et al. [36]. We set up a Bayesian logit regression using _bayesglm_ from the _arm_ package in R [38]. The model is significant with \(R^{2}=12.4\), AIC 538.29.

Table 4 reports that while advice sources under the social connections category (such as family, friends, colleagues, from work or another offline contact) do not significantly impact the use of SP technologies, advice sources under the 'online content' and 'other methods' categories significantly predict the use of SP technologies.

| Predictors | OR | p-value |
| --- | --- | --- |
| family (vs not) | 1.10 | .734 |
| friends (vs not) | 1.43 | .252 |
| work, colleagues (vs not) | 1.28 | .530 |
| other f2f, offline (vs not) | 1.17 | .717 |
| **general research (vs not)** | **2.99** | **.001** |
| **specialist pages** | **8.08** | **.002** |
| **online forums** | **11.13** | **.004** |
| **online reviews** | **5.61** | **.011** |
| shared online content | 2.08 | .139 |
| **ad/info from companies & consumer mag** | **5.21** | **.004** |
| news | 1.05 | .909 |
| **training** | **9.62** | **.007** |
| system prompts and settings | 3.53 | .076 |

_Note: Significance codes of '***' .001, '**' .01, '*' .05._

Table 4: Bayesian logistic regression for using protective SP technology versus not. Statistically significant predictors at p<.05 are highlighted.

For example, those who learn about SP technologies from training are nearly 10 times more likely to use SP technologies than those who do not learn from training, with \(OR=9.62\), \(p=.007\); those who learn about SP technology from online content such as general research, specialist pages, online forums and online reviews are approximately 3X, 8X, 11X and 6X respectively more likely to use SP technology, compared to those who do not gain advice from these sources. We also find that individuals benefitting from adverts or targeted information from reputable technology companies, banks or consumer magazines are 5 times more likely to use SP technologies, with \(OR=5.21\), \(p=.004\), compared to those who do not gain advice from these sources.

**Number of advice sources.** We compute a binomial logistic regression model with predictor variable the number of SP advice sources used and target variable 'using at least one SP technology' versus 'not using any SP technology'. The model is significant with \(\chi^{2}(3)=57.263\), \(p<.001\). It has a good fit (\(C=.678\)), model accuracy of 80.8% and \(R^{2}\) between 9.0% (Cox & Snell) and 14.5% (Nagelkerke). We find that, compared to those not using any SP advice source, those using a single source are 5.4X (p<.001), those using 2 sources are 7.1X (p<.001) and those using at least 3 advice sources are 9.1X (p<.001) more likely to use SP technologies.
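Since the text names the exact tool (`bayesglm` from the `arm` package, with the weakly-informative Cauchy(0, 2.5) prior of Gelman et al.), a minimal sketch of the separation-robust model follows; the data frame `dat` and its predictor columns are hypothetical stand-ins for the coded advice-source indicators.

```r
# Minimal sketch: separation-robust Bayesian logit via arm::bayesglm.
# prior.scale = 2.5 with prior.df = 1 gives the Cauchy(0, 2.5) prior on
# the coefficients suggested by Gelman et al.
# 'dat' and all predictor names are hypothetical 0/1 source indicators.
library(arm)

m_bayes <- bayesglm(uses_sp_tech ~ family_src + friends_src + work_colleagues +
                      f2f_offline + general_research + specialist_pages +
                      online_forums + online_reviews + shared_online_content +
                      tech_ads + news + training + system_prompts,
                    family = binomial(link = "logit"), data = dat,
                    prior.scale = 2.5, prior.df = 1)

display(m_bayes)    # coefficient summary
exp(coef(m_bayes))  # odds ratios; cf. Table 4
```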
### Motivation for ISC advice

We investigate RQ4, that is, for what reasons women and men approach ISC for protective SP advice. Overall, 63% (n=382) of participants responded _Yes_ to approaching ISC for protective SP information and advice, while 37% (n=222) responded _No_. We _first_ describe the rationales provided by participants and _second_ look into gender patterns in these rationales.

#### 4.4.1 Description of Rationales

We categorise and describe the rationales given for approaching or not approaching ISC below, and provide example responses in Table 10 in the Appendix.

**Yes to approaching ISC for SP.** Participants' reasoning for approaching ISC for protective SP is categorised under perception of ISC skills, ISC qualities, participants' own skills, or 'other'.

**ISC skills.** 39.3% of participants referred to perceived ISC skills, most of whom spoke of general knowledge, skills, experience or IT work, and few to SP skills in particular.

* _(General) Knowledge of ISC_: (a) knowledge in general terms, (b) expecting their ISC to have more knowledge than them, or (c) somehow pointing to an 'area of expertise' without being specific about SP.
* _Technology skills of ISC_: (a) mostly in general, (b) a few named security.
* _ISC work in IT_: (a) broad mentions of working in IT / an IT-related field, (b) pointing to an area of computing (but not security), or (c) very few responses including working in security. [**(a-b) do not guarantee SP expertise**]
* _Experience of ISC_: (a) perceived ISC experience in broad terms or (b) ISC having had experience of identifying the best tools / methods in the past.
* _Up to date_: ISC aware of the latest in technology development.

**ISC qualities.** 16.2% spoke of perceived qualities of ISC, including:

* _Trustworthy_: (a) trusting ISC or ISC advice in general, or (b) trusting ISC because of their perceived skills and experience.
* _Easy to access_: ease of access to ISC advice or quick contact.
* _Helpful_: (a) ISC being helpful or helping them, or (b) offering good / helpful advice.
* _Available_: (a) availability of an ISC they can ask for help, or (b) that their ISC is (always) available for such things.
* _Reliable_: specifically that their ISC is reliable.

**Participant's own skills.** 7.1% of participants referred to their own skill level, in particular that (1) they need help or particular advice or information that they cannot find by themselves, (2) they lack SP knowledge, or (3) they have poor confidence.

**Other reasons.** 10.6% gave reasons different from the categories defined above, such as:

* _Hear options/mutual_: (a) to find out about the options available, (b) the mutual sharing of advice and experiences.
* _Reassurance_: approaching ISC (a) for a second opinion, (b) for reassurance or to ensure the correctness of their SP practice, or (c) to avoid making mistakes.
* _Other/none_: a small number of participants provided miscellaneous statements that do not fall into the reasons named above.
While advice from ISC is not necessarily risky advice, unfortunately the heuristics named by women participants, such as work in IT, trustworthiness, or ease of access (complementing previous research [79]), do not guarantee that ISC actually have the SP skills to advise and enable learning, with previous research noting that family are not necessarily SP experts [56]. In contrast, the men's rationales indicate a more analytical dialogue (such as evaluating options).

**No to approaching ISC for SP.** Participants' reasoning for _not_ using their ISC for protective SP information is categorised under perceptions of their own skills and other reasons, described below.

**Participant's own skills.** 17.5% of participants provided reasons referring to their better skills, their self-reliance or their confidence.

* _Self reliance_: (a) they prefer to rely on themselves, (b) they can figure out SP protection on their own.
* _Better skills_: (a) their own (technology) skills are more advanced than those of their ISC, (b) their ISC know less than them / do not have the skills.
* _Confidence_: being confident in finding SP information or in their ability.

**Other reasons.** 19.5% of participants referred to other reasons for not approaching ISC for SP protection, such as (1) preferring other sources (such as online sources), (2) not needing help, or other reasons such as not having encountered issues, or (3) being the ones helping others.

#### 4.4.2 Gender Patterns & Differences

75.9% of the n=303 women participants and 50.5% of the n=301 men participants responded _Yes_ to approaching ISC for protective SP information and advice. We looked into the association between gender and approaching versus not approaching ISC via a \(\chi^{2}\) test. We find a significant association between gender and whether individuals usually approach ISC, with \(\chi^{2}(1)=41.939\), \(p<.001\), Cramer \(V=.264\).

Of those who reported _Yes_ to approaching ISC, a higher proportion of women than men were motivated by the following perceptions of their ISC: general knowledge, being up to date with the latest technologies, working in an IT-related field, their technology / SP skills or experience, as well as their trustworthiness, ease of access and helpfulness, as shown in Figure 5.

Figure 5: Proportion of men versus women across rationales for YES to approaching ISC

However, a higher proportion of men than women reasoned to approach ISC to hear various SP options, for reassurance, due to their own poor knowledge, or due to the perceived reliability of their ISC. In comparison, Figure 6 shows a higher proportion of men compared to women responding _No_ to approaching their ISC across all rationales (their confidence, better skills, self-reliance or preference for other sources, as well as being the one who helps others).

Figure 6: Proportion of men versus women across rationales for NO to approaching ISC

### Advice Received from ISC

We investigate RQ5, that is, we look into the advice received by the 63% (n=382) of participants who responded _Yes_ to approaching ISC for protective SP. As described in Section 3.4, advice was either specific (such as about malware protection or communication privacy) or general (such as how to keep safe online). We provide the list of specific advice, together with example responses, in Table 11 in the Appendix, and summarise the advice received by women versus men in Figure 8.
The main observable patterns are that (i) a higher % of men than women receive advice about '_malware / scam_', which included antivirus, virus threat; anti-malware, malware threat; anti-spyware; fraud; and 'communication and network privacy'; and (ii) a higher % of women receive advice about '_authentication_', which included change passwords, use multiple passwords, use / set strong passwords, password manager or use MFA; and 'privacy settings and SNS'.

## 5 Discussion

We summarise our findings and discuss how they extend existing literature, and their implications for equitable online safety.

**Take-aways.** The findings in broad terms are: (1) there is an SP gender gap, depicted by women's and men's distinct reports of SP access and technology use patterns; (2) online advice associates with and predicts SP technology use more than advice from ISC does; (3) women and men report distinct rationales for approaching ISC and receive different SP advice.

Social role theory explains that diverging social roles give rise to gendered expectations and stereotypes, such as communal/supportive traits for women and agency/assertion for men, which in turn lead to different skills learnt and ways to behave [25]. Our findings can be argued to emerge from gender stereotypes (whether personally endorsed or fitting into commonly held stereotypes): in particular, men being less likely to approach ISC can be explained by masculine attitudes that discourage seeking help [45] and SP stereotypes that men are less likely to ask for help and are over-confident (or that women are believed to be incompetent in SP and more comfortable asking for help) [91]. Further, men being more likely to report using online advice and engaging with SP technology relates to gender stereotypes in STEM, where technology engagement is seen as a _gender-typed activity_ [82, 52], and is further supported by assumptions of men being more interested and skilled in protection overall, or the belief that SP is too complex for women, who then delegate it to others [91]. This view is supported by the fact that our male participants had a higher affinity for new technical systems than the women participants (as reported in Section 3.2). The affinity difference is likely a societal reflection, but points to women being less inclined to self-explore (technological) SP.

In addition to their gender, people have racial, cultural, sexual and socio-economic identities that intersect and overlap with their gender identity [21, 17]. These factors confer a certain privilege or disadvantage for SP access, engagement and online safety outcomes (note related works [90, 9]), as well as impact perceptions of and attitudes to SP [48] and resulting SP experiences (and needs) [35]. While we did not control for these factors, education level was balanced between genders. The stark SP gender gap observed therefore demonstrates the relative importance of gender identity for online SP access and participation within user populations similar to our sample. Further work is however needed to unpack the potential effects of intersecting identities in the context of SP.

_Implications for equitable online safety._ We add to evidence of a gender gap and un-equal SP experiences between women and men [64, 66, 20], and add a gender analysis to the SP advice literature [72, 78, 73, 77].
This gap in protection already demonstrates an online safety divide that questions _the equity of online safety opportunities_, that is, whether SP access via online advice and SP technology affordances are appropriately configured for and serving women. Questions about the appropriateness of SP are supported by 'gender blindness' arguments, where technology transformations are influenced by societal norms [7] and the design and meaning of these technologies are created within gender relations and thus reflect pre-existing gender inequalities [69]; for example, only 17% of technology jobs in Europe are held by women [28]. The large amount and varied type of SP advice online, which is known to overwhelm and lack prioritisation [80], may also present more of a barrier to women's online safety than to men's.

Although women in developed countries, such as the U.K., have equal opportunities for technology access and a high level of education [2, 12, 68], and the digital divide with respect to gender is said to be decreasing in developed countries [89], this does not translate to equal online safety outcomes [63] or equal access to and engagement with SP technologies, as seen in this research. The SP gap can be expected to be even worse in countries with a wider digital divide. Overall, this research supports reports of women being more at risk online [63] if the access and non-technology strategies that women employ do not result in online safety outcomes equal to men's.

## 6 Actionable Recommendations

We provide recommendations for stakeholders with an interest in the online safety ecosystem.

**Accessible & effective online advice.** Our findings lead to questioning the relevance, accessibility and appropriateness for women of online safety advice, in particular advice pointing to SP technologies as means of protection. _First_, we recommend efforts towards ensuring that the online advice ecosystem is inclusive of the various needs of the wide population of women, in addition to advice tailored for specific threat scenarios such as intimate partner violence. The design of online safety advice needs to be relevant to diverse women's assessment of and response to threats. Trustworthiness (as reported by our participants) and a sense of emotional support need to be designed within SP and digital advice affordances, given (a) the affective dimension of SP [19, 85, 29], (b) women's higher likelihood of emotive evaluation of online threat scenarios [20], and (c) how the associated response actions provide a form of emotional coping [16, 44, 65]. This is supported by recommendations made in prior work [35], and communication preceded by emotional support is of higher quality [30]. As a practical example, the language used in online safety advice needs to be representative of the women groups it intends to serve, as opposed to being (overly) technical, as previously reported [80].

_Second_, based on a complement of our findings and those of previous research raising issues of prioritisation and actionability of online SP advice [80], as well as fragmentation across sources [78], we recommend standardising, and continually revising, a key set of online sources and priority advice, given the current threat landscape affecting women. Overall, the current lack of evidence on the effectiveness of the online SP advice ecosystem for women (and diverse genders) needs to be addressed.
**Skills for SP.** Compared to literature on the influence of _technical web skills_ [40] on advice source preference [77], our gender groups had similar digital skills across the scales for information navigation, social and mobile skills, and differed slightly on operational and creative skills (measured via the internet skills scale [88]), which raises questions about the digital skillset required for online safety. _First_, we recommend assessment and marking of the type and level of digital skill needed to comprehend and action online safety and SP tech advice, including (in parallel) ways to develop these skills, such that people are supported rather than left to their own means of filling skill gaps. This is linked to developing confidence in SP protection, given that we noted women's lower affinity for technical systems. _Second_, for equity of SP opportunities, we recommend designing advice and SP technology engagement such that anyone can gain optimal protection irrespective of their skill level.

**Socially supported SP.** ISC advice, as we evidence, provides a valuable alternative to traditional individualistic SP design. Compared to burdening individual users with problem solving, it provides a collaborative, communal and supported version, which is particularly useful for coping with SP complexity or the (psychological and emotional) aftermath of attacks, including new ones where users may not yet have protection experience. We make recommendations supporting the social SP body of research [84, 93]. _First_, we recommend that SP technologies have a socially-supported version and that online safety advice offers these, given user preference. _Second_, we recommend tech features that make it easy for anyone to ask for help or compare notes / options, whether from a known contact or anonymously. We recommend defining a checklist of what to ask and of how to determine who to ask for SP advice, as the heuristics our women participants provided do not guarantee that ISC have SP skills. _Third_, we recommend learning from women's strategies and developing methods to sustain in-person dialogue and render it effective in supporting learning (such as bite-size templates, multi-modal ways of delivery, protection evaluation and feedback, within online / offline spaces), a re-envisioning of online SP that taps into and includes the SP patterns adopted by women, and for designers to consider the stereotypical cues in technology that cause gender-typed digital engagement [15, 55].

**Gender agenda.** We are far from addressing the gender gap and many unknowns remain. We strongly recommend a multi-perspective research agenda focused on understanding the role gender norms, stereotypes and intersecting identities play in SP opportunities, access and outcomes, and the role of gender theory within SP, akin to [33]. This requires (1) a concerted effort and dedicated resources, (2) collaboration between multi-disciplinary researchers and stakeholders such as policy makers, community support and SP advocacy groups, and (3) engagement with local and international bodies who have been calling for action for a safe and equitable digital future and to keep women safe online [87, 62].
**Critical reflection on equity.** Although the digital divide is thought to be growing weaker within developed countries and there are increased possibilities for SP access through online advice, useful and meaningful engagement with these to sustain SP usage and safety outcomes is yet to be addressed, in particular for women as this research shows, especially given the lack of prioritisation [80] and consensus about good advice [81]. Given our findings point to a gender divide in SP opportunities and participation, demonstrated via gaps in access and use of SP technologies, we strongly recommend critical reflective action for the wider SP community of researchers, developers, technology providers, online safety advocates and policy makers, to address the question _'for whom are we producing SP technology?'_ With online safety considered a social good and its equity advocated by international human rights organisations [87], _'what does gender equity in online safety involve in terms of SP opportunities, access, participation or outcomes?'_ Given women's personal, social and economic realities and their socio-cultural roles compared to men, _'does equality of SP, such as even access methods, distribution and design, provide for equitable online safety outcomes?'_ **Assurance of equity.** Since gendered SP access and participation could disadvantage large strata of society, we recommend the development of an _SP equity assurance framework_ and complementary tech policy, that requires for instance (1) that online SP providers demonstrate assurance of equity in development and distribution, such as consideration of the personal, social, and economic realities of women and their online safety needs; and (2) that SP advocates, policy makers, and researchers co-create equity markers and criteria. ## 7 Conclusion This empirical research provides a gender analysis of online safety with a lens across protective SP access and protection outcome, in particular showing that women's distinct preference for SP access through ISC means that they cannot enjoy the breadth and quality of _online_ SP advice, which consequently leads to less sophisticated SP technology engagement, with potential impact on protection outcomes (thereby demonstrating a gender SP discrepancy) and adding to the limited prior evidence of an SP gender gap. In consequence, this research (1) supports arguments of women being more at risk online compared to men, (2) supports local and international calls for action to keep women safe online, and (3) makes recommendations for multi-stakeholder actions to ensure their protection.
2301.11773
Automatic Modulation Classification with Deep Neural Networks
Automatic modulation classification is a desired feature in many modern software-defined radios. In recent years, a number of convolutional deep learning architectures have been proposed for automatically classifying the modulation used on observed signal bursts. However, a comprehensive analysis of these differing architectures and importance of each design element has not been carried out. Thus it is unclear what tradeoffs the differing designs of these convolutional neural networks might have. In this research, we investigate numerous architectures for automatic modulation classification and perform a comprehensive ablation study to investigate the impacts of varying hyperparameters and design elements on automatic modulation classification performance. We show that a new state of the art in performance can be achieved using a subset of the studied design elements. In particular, we show that a combination of dilated convolutions, statistics pooling, and squeeze-and-excitation units results in the strongest performing classifier. We further investigate this best performer according to various other criteria, including short signal bursts, common misclassifications, and performance across differing modulation categories and modes.
Clayton Harper, Mitchell Thornton, Eric Larson
2023-01-27T15:16:06Z
http://arxiv.org/abs/2301.11773v1
# Automatic Modulation Classification with Deep Neural Networks ###### Abstract Automatic modulation classification is a desired feature in many modern software-defined radios. In recent years, a number of convolutional deep learning architectures have been proposed for automatically classifying the modulation used on observed signal bursts. However, a comprehensive analysis of these differing architectures and importance of each design element has not been carried out. Thus it is unclear what tradeoffs the differing designs of these convolutional neural networks might have. In this research, we investigate numerous architectures for automatic modulation classification and perform a comprehensive ablation study to investigate the impacts of varying hyperparameters and design elements on automatic modulation classification performance. We show that a new state of the art in performance can be achieved using a subset of the studied design elements. In particular, we show that a combination of dilated convolutions, statistics pooling, and squeeze-and-excitation units results in the strongest performing classifier. We further investigate this best performer according to various other criteria, including short signal bursts, common misclassifications, and performance across differing modulation categories and modes. Automatic modulation classification, deep learning, convolutional neural network. ## I Introduction Automatic modulation classification (AMC) is of particular interest for radio frequency (RF) analysis and in modern software-defined radios to perform numerous tasks including "spectrum interference monitoring, radio fault detection, dynamic spectrum access, opportunistic mesh networking, and numerous regulatory and defense applications" [1]. Upon detection of an RF signal with unknown characteristics, AMC is a crucial initial procedure in order to demodulate the signal. Efficient AMC allows for maximal usage of transmission media and can provide resilience in modern cognitive radios. Systems capable of adaptive modulation schemes can monitor current channel conditions with AMC and adjust the exercised modulation scheme to maximize usage of the transmission medium. Moreover, for receivers that have a versatile demodulation capability, AMC is a requisite task. The correct demodulation scheme must be applied to recover the modulated message within a detected signal. In systems where the modulation scheme is not known _a priori_, AMC allows for efficient prediction of the employed modulation scheme. Higher performing AMC can increase the throughput and accuracy of these systems; therefore, AMC is currently an important research topic in the fields of machine learning and communication systems, specifically for software-defined radios. Typical benchmarks are constructed on the premise that the AMC model must classify not only the mode of modulation (_e.g._, QAM), but also the exact variant of that mode of modulation (_e.g._, 32QAM). While many architectures have proven to be effective at high signal-to-noise ratios (SNRs), performance degrades significantly at the lower SNRs that often occur in real-world applications. Other works have investigated increasing classification performance at lower SNR levels through the use of SNR-specific modulation classifiers [2] and clustering based on SNR ranges [3]. To perform classification, a variety of signal features have been investigated. 
Historically, AMC has relied upon statistical moments and higher order cumulants [4, 5, 6] derived from the received signal. Recent approaches [1, 7, 8, 9] use raw time-domain in-phase (I) and quadrature (Q) components as features to predict the modulation variant of a signal. Further works have investigated additional features including I/Q constellation plots [10, 11, 12]. After selecting the signal input features, machine learning models are used to determine statistical patterns in the data for the classification task. Support vector machines, decision trees, and neural networks are commonly used classifiers for this application [13, 14, 1, 3, 7, 8, 9]. Residual neural networks (ResNets), along with convolutional neural networks (CNNs), have been shown to achieve high classification performance for AMC [1, 3, 7, 8, 9, 10]. Thus, deep learning based methods in AMC have become more prevalent due to their promising performance and their ability to generalize to large, complex datasets. While other works have contributed to increased AMC performance, the importance of many design elements for AMC remains unclear and a number of architectural elements have yet to be investigated. Therefore, in this work, we aim to formalize the impact of a variety of architectural changes and model design decisions on AMC performance. Numerous modifications to architectures from previous works, including our own [7], and novel combinations of elements applied to AMC are considered. After an initial investigation, we provide a comprehensive ablation study in this work to investigate the performance impact of various architectural modifications. Additionally, we achieve new state-of-the-art classification performance on the RadioML 2018.01A dataset [15]. Using the best performing model, we provide additional analyses that characterize its performance across modulation modes and signal burst duration. ## II Related Work The area of AMC has been investigated by several research groups. We provide a summary of results in AMC to provide context and motivation for our contributions to AMC and the corresponding ablation study described in this paper. Corgan _et al._ [8] illustrate that deep convolutional neural networks are able to achieve high classification performance, particularly at low SNRs, on a dataset comprising 11 different types of modulation. It was found that CNNs outperformed expertly crafted features. Building upon the architectures in [8] and [1], the authors of [16] improved AMC performance utilizing self-supervised contrastive learning. First, an encoder is pre-trained in a self-supervised manner through creating contrastive pairs with data augmentation. By creating different views of the input data through augmentation, contrastive loss is used to maximize the cosine similarity between positive pairs (augmented views of the same input). Once converged, the encoder is frozen (_i.e._, the weights are set to fixed values) and two fully-connected layers are added following the encoder to form the classifier. The classifier is trained using supervised learning to predict the 11 different modulation schemes. Chen _et al._ applied a novel architecture to the same dataset where the input signal is sliced and transformed into a square matrix, and applied a residual network to predict the modulation schemes [17]. Other work has investigated empirical and variational mode decomposition to improve few-shot learning for AMC [18]. 
In our work, we utilize a larger, more complex dataset consisting of 24 modulation schemes, as well as modeling improvements. Spectrograms and I/Q constellation plots in [19] were found to be effective input features to a traditional CNN, achieving nearly equivalent performance to the baseline CNN in [1], which used raw I/Q signals. Further, [10, 11, 12] also used I/Q constellations as an input feature in their machine learning models on a smaller scale of four or eight modulation types. Other features have been used in AMC--[20, 21] utilized statistical features and support vector machines while [22, 23] used fusion methods in CNN classifiers. Mao _et al._ utilized various constellation diagrams at varying symbol timings, alleviating symbol timing synchronization concerns [24]. A squeeze-and-excitation [25] inspired architecture was used as an attention mechanism to focus on the most important diagrams. Although spectrograms and constellation plots have shown promise, they require additional processing overhead and have had comparable performance to raw I/Q signals. In addition, models that use raw I/Q signals could be more adept at handling varying-length signals than constellation plots because they are not limited by periodicity constraints for short duration signals (_i.e._, burst transmissions). Consequently, we utilize raw I/Q signals in our work. Tridgell, in his dissertation [26], builds upon these works by investigating these architectures when deployed on resource-limited Field Programmable Gate Arrays (FPGAs). His work stresses the importance of reducing the number of parameters for modulation classifiers because they are typically deployed in resource-constrained embedded systems. Fig. 1: ResNet architecture used in [1]. Each block represents a unit in the network, which may be comprised of several layers and connections as shown on the right of the figure. Dimensions of the tensors on the output of each block are also shown where appropriate. Fig. 2: X-Vector architecture overview. The convolutional activations immediately before pooling are shown. These activations are fed into two statistical pooling layers that collapse the activations over time, creating a fixed-length tensor that can be further processed by fully connected dense layers. In [1], O'Shea _et al._ created a dataset with 24 different types of modulation, known as RadioML 2018.01A, and achieved high classification performance using convolutional neural networks--specifically using residual connections (see Figure 1) within the network (ResNet). A total of 6 residual stacks were used in the architecture. A residual stack is defined as a series of convolutional layers, residual units, and a max pooling operation as shown in Figure 1. The ResNet employed by [1] attained approximately 95% classification accuracy at high SNR values. Harper _et al._ proposed the use of X-Vectors [27] to increase classification performance using CNNs [7]. X-Vectors are traditionally used in speaker recognition and verification systems, making use of aggregate statistics. X-Vectors employ statistical moments, specifically mean and variance, across convolutional filter outputs. It can be theorized that taking the mean and variance of the embedding layer helps to eliminate signal-specific information, leaving global, modulation-specific characteristics. Figure 2 illustrates the X-Vector architecture where statistics are computed over the activations from a convolutional layer, producing a fixed-length vector. 
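To make the statistics-pooling idea concrete before continuing, the following is a minimal sketch of mean and variance pooling over the time axis (our illustration, not the released code of [7]; PyTorch is assumed). The final assertion shows why the pooled vector has the same length regardless of signal duration:

```python
import torch

def stats_pool(x: torch.Tensor) -> torch.Tensor:
    """Collapse the time axis of convolutional activations (batch, channels, time)
    into per-channel mean and variance, yielding a fixed-length
    (batch, 2 * channels) vector regardless of the input duration."""
    mean = x.mean(dim=-1)
    var = x.var(dim=-1, unbiased=False)
    return torch.cat([mean, var], dim=-1)

# The same pooling applies to full-length captures and short bursts alike.
full_burst = torch.randn(32, 64, 1024)   # batch, channels, time
short_burst = torch.randn(32, 64, 16)
assert stats_pool(full_burst).shape == stats_pool(short_burst).shape == (32, 128)
```

Because the pooled dimension depends only on the channel count, the dense layers that follow never see the signal length, which is the property exploited throughout this paper.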
Additionally, this architecture maintains a fully-convolutional structure enabling variable size inputs into the network. Using statistical aggregations allows for this property to be exploited. When using statistical aggregations, the input to the first dense layer is dependent upon the number of filters in the final convolutional layer. The number of filters is a hyperparameter, independent of the length in time of the input signal into the neural network. Without the statistical aggregations, the input signals into a traditional CNN or ResNet would need to be resampled, cropped or padded to a fixed length in time such that there is not a size mismatch between the final convolutional output and the first dense layer. While the dataset used in this work has uniformly sized signals in terms of duration (\(1024\times 2\)), this is an architectural advantage in deployment, as received signals may vary in duration. Instead of modifying the inputs to the network via sampling, cropping, padding, etc., the X-Vector architecture can directly operate with variable-length inputs without modifications to the network or input signal. Figure 3 outlines the X-Vector architecture employed in [7], where \(F=[f_{1},f_{2},...,f_{7}]=64\) and \(K=[k_{1},k_{2},...,k_{7}]=3\). Mean and variance pooling are performed on the final convolutional outputs, concatenated, and fed through a series of dense layers creating the fixed-length X-Vector. A maximum of 98% accuracy was achieved at high SNR levels. The work of [7] replicated the ResNet architecture from [1] and compared the results with the X-Vector architectures as seen in Figure 4. Harper _et al._ [7] were able to reproduce this architecture, achieving a maximum of 93.7% accuracy. The authors attribute the difference in performance to differences in the train and test set separation they used, since these parameters were unavailable. As expected, the classifiers perform with a higher accuracy as the SNR value increases. In signals with a low SNR value, noise becomes more dominant and the signal is harder to distinguish. Fig. 4: Accuracy comparison of the reproduced ResNet in [1] and the X-Vector inspired model from [7] over varying SNRs. This accuracy comparison shows the superior performance of the X-Vector architecture, especially at higher SNRs, and supports using this architecture as a baseline for the improvements investigated in this paper. Fig. 3: Proposed CNN Architecture in [7]. This is the first work to employ an X-Vector inspired architecture for AMC showing strong performance. This architecture is used as a baseline for the modifications investigated in this paper. The \(f\) and \(k\) variables shown designate the number of kernels and size of each kernel, respectively, in each layer. These parameters are investigated for optimal sizing in our initial investigation. In modern applications, a high SNR value is not always a given. However, there is still significant improvement compared to random chance, even at low SNR values. Moreover, in systems where the modulation type must be classified quickly, this could become crucially important as fewer demodulation schemes would need to be applied in a trial and error manner to discover the correct scheme. One challenge of AMC is that good performance is desired across a large range of SNRs. 
For instance, Figure 4 illustrates that modulation classification performance plateaued at its peak beyond \(+8\)dB SNR and approached chance classification performance below \(-8\)dB SNR on the RadioML 2018.01A dataset. This range is denoted by the shaded region. Harper _et al._ investigated methods to improve classification performance in this range by employing an SNR regression model to aid separate modulation classifiers (MCs). While other works have trained models to be as resilient as possible under varying SNR conditions, Harper _et al._ employed SNR-specific MCs [2]. Six MCs were created by discretizing the SNR range to ameliorate performance between \(-8\)dB and \(+8\)dB SNR (see Figure 5). These groupings were chosen in order to provide sufficient training data to avoid overfitting the MCs and provide enough resolution so that combining MCs provided more value than a single classifier. By first predicting the SNR of the received signal with a regression model, an SNR-specific MC that was trained on signals with the predicted SNR is applied to make the final prediction. Although the SNR values in the dataset are discrete, SNR is measured on a continuous scale in a deployment scenario and can vary over time. As a result, regression is used over classification to model SNR. Using this approach, different classifiers can tune their feature processing for differing SNR ranges. Each MC in this approach uses the same architecture as that proposed in [7]; however, each MC is trained with signals within each MC's SNR training range (see Table I). Highlighting improvements across varying SNR values, Figure 6 shows the overall performance improvement (in percentage accuracy) using the SNR-assisted architecture compared to the baseline classification architecture described in [7]. While a slight decrease in performance was observed for \(-8\)dB and a larger decrease for \(-2\)dB, improvement is shown under most SNR conditions--particularly in the target range of \(-8\)dB to \(+8\)dB. A possible explanation for the decrease in performance at particular SNRs is that the optimization for a particular MC helped overall performance for a grouping at the expense of a single value in the group. That is, the MC for \([-4,0)\) boosted the overall performance by performing well at \(-4\) and \(0\)dB at the expense of \(-2\)dB. Due to the large size of the testing set, these small percentage gains are impactful because thousands more classifications are correct. All results are statistically significant based on a McNemar's test [28]; the approach therefore achieved new state-of-the-art performance at the time. Soltani _et al._ [3] found SNR regions of \([-10,-2]\)dB, \([0,8]\)dB, and \([10,30]\)dB to have similar classification patterns. Instead of predicting exact modulation variants, the authors group commonly confused variants into a more generic, coarse-grained label. This grouping increases the performance of AMC by combining modulation variants that are commonly confused. However, it also decreases the sensitivity of the model to the numerous possible variants. Cai _et al._ utilized a transformer-based architecture to aid performance at low SNR levels with relatively few training parameters (approximately 265,000 parameters) [29]. A multi-scale network along with center loss [30] was used in [31]. It was found that larger kernel sizes improved AMC performance. We further explore kernel size performance impacts in this work. 
Zhang _et al._ proposed a high-order attention mechanism using the covariance matrix, achieving a maximum accuracy of 95.49% [32]. Although many discussed works use the same RadioML 2018.01A dataset, there is a lack of a uniform dataset split to establish a benchmark for papers to report performance. In an effort to make AMC work more reproducible and comparable across publications, we have made our dataset split and accompanying code available on GitHub.1 Footnote 1: [https://github.com/charper/Automatic-Modulation-Classification-with-Deep-Neural-Networks](https://github.com/charper/Automatic-Modulation-Classification-with-Deep-Neural-Networks) While numerous works have investigated architectural improvements, we aim to improve upon these works by introducing additional modifications as well as a comprehensive ablation study that illustrates the improvement of each modification. With the new modifications, we achieve new state-of-the-art AMC performance. ## III Dataset To evaluate different machine learning architectures, we use the RadioML 2018.01A dataset that is comprised of 24 different modulation types [1, 15]. Fig. 5: The architecture using SNR regression and SNR-specific classifiers from [2]. Each MC block shown employs the same architecture as the baseline from [7], but specifically trained to perform AMC within a more narrow range of SNRs (denoted as dB ranges in each block). Due to the complexity and variety of modulation schemes in the dataset, it is fairly representative of typically encountered modulation schemes. Moreover, this variety increases the likelihood that AMC models will generalize to more exotic modulation schemes, not present in the training data, that are derived from these traditional variants. There are a total of 2.56 million labeled signals, \(S(T)\), each consisting of 1024 time domain digitized intermediate frequency (IF) samples of in-phase (\(I\)) and quadrature (\(Q\)) signal components where \(S(T)=I(T)+jQ(T)\). The data was collected at a 900MHz IF with an assumed sampling rate of 1MS/sec such that each 1024 time domain digitized I/Q sample is 1.024 ms [33]. The 24 modulation types and the representative groups that we chose for each are listed as follows: * **Amplitude**: OOK, 4ASK, 8ASK, AM-SSB-SC, AM-SSB-WC, AM-DSB-WC, and AM-DSB-SC * **Phase**: BPSK, QPSK, 8PSK, 16PSK, 32PSK, and OQPSK * **Amplitude and Phase**: 16APSK, 32APSK, 64APSK, 128APSK, 16QAM, 32QAM, 64QAM, 128QAM, and 256QAM * **Frequency**: FM and GMSK Each modulation type includes a total of \(106,496\) observations ranging from \(-20\)dB to \(+30\)dB SNR in \(2\)dB steps for a total of 26 different SNR values. SNR is assumed to be consistent over the same window length as the I/Q sample window. For evaluation, we divided the dataset into 1 million different training observations and 1.5 million testing observations under a random shuffle split, stratified across modulation type and SNR. Because of this balance, the expected performance for a random chance classifier is 1/24 or 4.2%. With varying SNR levels across the dataset, it is expected that the classifier would perform with a higher degree of accuracy as the SNR value is increased. For consistency, each model investigated in this work was trained and evaluated on the same train and test set splits. ## IV Initial Investigation In this work, we use the architecture described in [7] as the baseline architecture. 
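(As a brief aside before detailing the baseline changes: the stratified split from Section III can be reproduced along the following lines. This is our own minimal sketch, with small toy arrays standing in for the real 2.56M-record dataset; scikit-learn is assumed.)

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-ins: 4 signals for each of the 24 x 26 (modulation, SNR) strata.
mods = np.repeat(np.arange(24), 26 * 4)
snrs = np.tile(np.repeat(np.arange(-20, 32, 2), 4), 24)
signals = rng.standard_normal((mods.size, 1024, 2)).astype(np.float32)

# Stratify jointly on (modulation, SNR) so the train and test sets keep the
# same balance across all strata; train_size mirrors the 1M / 2.56M ratio.
strata = [f"{m}_{s}" for m, s in zip(mods, snrs)]
X_train, X_test, y_train, y_test = train_test_split(
    signals, mods, train_size=1_000_000 / 2_560_000,
    stratify=strata, shuffle=True, random_state=0)
```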
We note that [2] improved upon the baseline; however, each individual MC used the baseline architecture, only trained on specific SNR ranges. Therefore, the base architectural elements were similar to [7], but separated for different SNRs. In this work, our focus is to improve upon the employed CNN architecture for an individual MC rather than the use of several MCs. Therefore, we use the architecture from [7] as our baseline. Before exploring an ablation study, we make a few notable changes from the baseline architecture in an effort to increase AMC performance. This initial exploration is presented separately for clarity, as it spares the ablation study that follows from requiring an inordinate number of models. It also introduces the general training procedures that assist and orient the reader in following the ablation study--the ablation study mirrors these procedures. We first provide an initial investigation exploring these notable changes. We train each model using the Adam optimizer [34] with an initial learning rate _lr_ = 0.0001, a decay factor of 0.1 if the validation loss does not decrease for 12 epochs, and a minimum learning rate of 1e-7. If the validation loss does not decrease after 20 epochs, training is terminated and the models are deemed converged. For all experiments, mini-batches of size 32 are used. As has been established in most programming packages for neural networks, we refer to fully connected neural network layers as _dense_ layers, which are typically followed by an activation function. ### _Architectural Changes_ A common design pattern in neural networks is to use fewer but larger kernels in the early layers of the network, and a greater number of smaller kernels in the later layers, relative to the baseline architecture. This is commonly referred to as the information distillation pipeline [35]. By utilizing a smaller number of large kernels in early layers, we are able to increase the temporal context of the convolutional features without dramatically increasing the number of trainable parameters. Numerous but smaller kernels are used in later convolutional layers to create more abstract features. Configuring the network in this manner is especially popular in image classification where later layers represent more abstract, class-specific features. We investigate this modification in three stages, using the baseline architecture described in Figure 3 [7]. We denote the number of filters in the network and the filter sizes as \(F=[f_{1},f_{2},...,f_{7}]\) and \(K=[k_{1},k_{2},...,k_{7}]\) in Figure 3. The baseline architecture used \(f=64\) (for all layers) and \(k=3\) (consistent kernel size for all layers). Our first modification to the baseline architecture is \(F=[32,48,64,72,84,96,108]\), but keeping \(k=3\) for all layers. Second, we use the baseline architecture, but change the size of filters in the network where \(f=64\) (same as baseline) and \(K=[7,5,7,5,3,3,3]\). Third, we make both modifications and compare the result to the baseline model where \(F=[32,48,64,72,84,96,108]\) and \(K=[7,5,7,5,3,3,3]\). These modifications are not exhaustive searches; rather, they are meant to guide future changes to the network by understanding the influence of filter quantity and filter size in a limited context. Fig. 6: Summary of residual improvement in accuracy over [7] that was first published in [2]. This work showed how the baseline architecture could be tuned to specific SNR ranges. Positive improvement is observed for most SNR ranges. 
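The training procedure above maps onto standard optimizer and scheduler settings. Below is a minimal sketch of the learning-rate decay and early-stopping rules (ours, with a toy stand-in model and a single fixed batch in place of real training loops; PyTorch is assumed):

```python
import torch
from torch import nn

model = nn.Sequential(  # toy stand-in for the CNN of Figure 3
    nn.Conv1d(2, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 24))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=12, min_lr=1e-7)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 2, 1024)            # mini-batch of 32 I/Q signals
y = torch.randint(0, 24, (32,))

best_val, stall = float("inf"), 0
for epoch in range(10_000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)         # one toy "epoch" on a fixed batch
    loss.backward()
    optimizer.step()
    val_loss = loss.item()              # substitute a real validation loss here
    scheduler.step(val_loss)            # decays lr by 0.1 after 12 stalled epochs
    if val_loss < best_val:
        best_val, stall = val_loss, 0
    else:
        stall += 1
        if stall >= 20:                 # terminate: the model is deemed converged
            break
```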
### _Initial Investigation Results_ As shown in Table II, increasing the size of the filters in earlier layers increases both average and maximum test accuracy over [7], but at the cost of additional parameters. A possible explanation for the increase in performance is the increase in temporal context due to the larger kernel sizes. Increasing the number of filters without increasing temporal context decreases performance. This is possibly because it increases the complexity of the model without adding additional signal context. Figure 7 illustrates the change in accuracy with varying SNR. The combined model, utilizing various kernel sizes and numbers of filters, consistently outperforms the other architectures across changing SNR conditions. Although increasing the number of filters alone decreases performance, combining the approach with larger kernel sizes yields the best performance in our initial investigation. Increasing the temporal context may have allowed additional filters to better characterize the input signal. Because increased temporal context improves AMC performance, we are inspired to investigate additional methods such as squeeze-and-excitation blocks and dilated convolutions that can increase global and local context [25, 36]. ## V Ablation Study Architecture Background Building upon our findings from our initial investigation, we make additional modifications to the baseline architecture. For the MCs, we introduce dilated convolutions, squeeze-and-excitation blocks, self-attention, and other architectural changes. We also investigate various kernel sizes and the quantity of kernels employed from the initial investigation. Our goal is to improve upon existing architectures while investigating the impact of each modification on classification accuracy through an ablation study. In this section, we describe each modification performed. ### _Squeeze-and-Excitation Networks_ Squeeze-and-Excitation (SE) blocks introduce a channel-wise attention mechanism first proposed in [25]. Due to the limited receptive field of each convolutional filter, SE blocks propose a recalibration step based on global statistics across channels (average pooling) to provide global context. Although initially utilized for image classification tasks [25, 37, 38], we argue the use of SE blocks can provide meaningful global context to the convolutional network used for AMC over the time domain. Figure 8 depicts an SE block. The squeeze operation is defined as temporal global average pooling across convolutional filters. For an individual channel, \(c\), the squeeze operation is defined as: \[z_{c}=F_{sq}(x_{c})=\frac{1}{T}\sum_{i=1}^{T}x_{i,c} \tag{1}\] where \(X\in\mathbb{R}^{T\times C}=[x_{1},x_{2},...,x_{C}]\), \(Z\in\mathbb{R}^{1\times C}=[z_{1},z_{2},...,z_{C}]\), \(T\) is the number of samples in time, and \(C\) is the total number of channels. To model nonlinear interactions between channel-wise statistics, \(Z\) is fed into a series of dense layers followed by nonlinear activation functions: \[s=F_{ex}(z,W)=\sigma(g(z,W))=\sigma(W_{2}\delta(W_{1}z)) \tag{2}\] where \(\delta\) is the rectified linear (ReLU) activation function, \(W_{1}\in\mathbb{R}^{\frac{C}{r}\times C}\), \(W_{2}\in\mathbb{R}^{C\times\frac{C}{r}}\), \(r\) is a dimensionality reduction ratio, and \(\sigma\) is the sigmoid activation function. The sigmoid function is chosen as opposed to the softmax function so that multiple channels can be accentuated and are not mutually exclusive. 
That is, the normalization term in the softmax can cause dependencies among channels, so the sigmoid activation is preferred. \(W_{1}\) imposes a bottleneck to improve generalization performance and reduce parameter counts while \(W_{2}\) increases the dimensionality back to the original number of channels for the recalibration operation. In our work, we use \(r=2\) for all SE blocks to ensure a reasonable number of trainable parameters without over-squashing the embedding size. Fig. 8: Squeeze-and-Excitation block proposed in [25]. One SE block is shown applied to a single layer convolutional output activation. Two paths are shown, a scaling path and an identity path. The scaling vector is applied across channels to the identity path of the activations. Fig. 7: SNR vs. accuracy comparison of the initial investigation using the baseline architecture. Noticeable improvements can be observed across all SNRs. The final operation in the SE block, scaling or recalibration, is obtained by scaling the input \(X\) by \(s\): \[\hat{x_{c}}=F_{scale}(x_{c},s_{c})=s_{c}x_{c} \tag{3}\] where \(\hat{X}\in\mathbb{R}^{T\times C}=[\hat{x_{1}},\hat{x_{2}},...,\hat{x_{C}}]\). ### _Dilated Convolutions_ Proposed in [36], dilated convolutions are depicted in Figure 10, where the convolutional kernels are denoted by the colored components. In a traditional convolution, the dilation rate is equal to 1. Dilated convolutions build temporal context by increasing the receptive field of the convolutional kernels without increasing parameter counts, as the number of entries in the kernel remains the same. Dilated convolutions also do not downsample the signals like strided convolutions. Instead, the output of a dilated convolution can be the exact size of the input after properly handling edge effects at the beginning and end of the signal. ### _Final Convolutional Activation_ We also investigate the impact of using an activation function (ReLU) after the last convolutional layer, just before statistics pooling. Because ReLU transforms the input sequence to be non-negative, the distribution characterized by the pooling statistics may become skewed. In [7] and [2], no activation was applied after the final convolutional layer, as shown in Figure 3. We investigate if this transformation impacts classification performance. ### _Self-Attention_ Self-attention allows the convolutional outputs to interact with one another, enabling the network to learn to focus on important outputs. Self-attention before statistics pooling essentially creates a weighted summation over the convolutional outputs, weighting their importance, similarly to [39, 40, 41]. We use the attention mechanism described by Vaswani _et al._ in [42], where each output element is a weighted sum of the linearly transformed inputs, with \(d_{k}\) denoting the dimensionality of \(K\), as seen in Equation (4). \[Attention(Q,K,V)=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{4}\] In the case of self-attention, \(Q\), \(K\), and \(V\) are equal. A scaling factor of \(\frac{1}{\sqrt{d_{k}}}\) is applied to counteract vanishing gradients in the softmax output when \(d_{k}\) is large. 
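As an illustrative sketch of two of these modifications (our own PyTorch rendering, not the authors' released code), the block below implements the SE recalibration of Eqs. (1)-(3) with \(r=2\) and applies it to the output of a dilated convolution whose padding preserves the signal length:

```python
import torch
from torch import nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation over (batch, channels, time), per Eqs. (1)-(3)."""
    def __init__(self, channels: int, r: int = 2):
        super().__init__()
        self.w1 = nn.Linear(channels, channels // r)  # bottleneck W1
        self.w2 = nn.Linear(channels // r, channels)  # restore dims W2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = x.mean(dim=-1)                                   # squeeze, Eq. (1)
        s = torch.sigmoid(self.w2(torch.relu(self.w1(z))))   # excite,  Eq. (2)
        return x * s.unsqueeze(-1)                           # scale,   Eq. (3)

# A dilated convolution widens the receptive field (3 taps spanning 5 samples
# at dilation rate 2) without adding parameters or downsampling the signal.
conv = nn.Conv1d(64, 64, kernel_size=3, dilation=2, padding=2)
se = SEBlock1d(64, r=2)
x = torch.randn(8, 64, 1024)
y = se(conv(x))
assert y.shape == x.shape  # length preserved after handling edge effects
```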
## VI Ablation Study Architecture Applying the specified modifications to the architecture in [7], Figure 9 illustrates the proposed architecture with every modification included in the graphic. Each colored block represents an optional change to the architecture that will be investigated in the ablation study. That is, each combination of network modifications is analyzed to aid understanding of each modification's impact on the network. Each convolutional layer has the following parameters: number of filters, kernel size, and dilation rate. The asterisk next to each dilation rate represents the changing of dilation rates in the ablation study. If dilated convolutions are used, then the dilation rate value in the graphic is used. If dilated convolutions are not used, each dilation rate is set to 1. That is, a traditional convolution is applied. All convolutions use a stride of 1, and the same training procedure from the initial investigation is used. Fig. 10: Dilated convolutions diagram. The top shows a traditional kernel applied to sequential time series points. The middle and bottom diagram illustrate dilation rates of two and three, respectively. These dilations serve to increase the receptive field of the filter without increasing the number of trainable variables in the kernel. Fig. 9: Proposed architecture with modifications including SENets, dilated convolutions, optional ReLU activation before statistics pooling, and self-attention. The output tensor sizes are also shown for each unit in the diagram. An * denotes where the sizes differ from the baseline architecture. ## VII Evaluation Metrics We present several evaluation metrics to compare the different architectures considered in the ablation study. In this section, we will discuss each evaluation technique used in the results section. Due to the varying levels of SNR in the employed dataset, we plot classification accuracy over each true SNR value. This allows for a visualization of the tradeoff in performance as noise becomes more or less dominant in the received signals. Additionally, we report average accuracy and maximum accuracy across the entire test set for each model. While average accuracy alone is not fully indicative of the model's performance, as accuracy is highly correlated with the SNR of the input signal, we share this result to give other researchers the ability to reproduce and compare works. As discussed in [26], AMC is often implemented on resource-constrained devices. In these systems, using larger models in terms of parameter counts may not be feasible. We report the number of parameters for each model in the ablation study to examine the tradeoff between AMC performance and model size. Additional analyses are also carried out. However, due to the large number of models investigated in this study, we will select the best performing model from the ablation study for brevity and analyze the performance of this model in greater detail. For example, confusion matrices for the best performing model from the ablation study are provided to show common misclassifications for each modulation type. Additionally, there exist several use-cases where relatively short signal bursts are received. For example, a wide-band scanning receiver may only detect a short signal burst. Therefore, signal duration in the time domain versus AMC performance is investigated to determine the robustness of the best performing model when short signal bursts are received. ## VIII Ablation Results ### _Overall Performance_ Table III lists the maximum and average accuracy performance for each model in the ablation study. A binary naming convention is used to indicate the various methods used for each architecture. Similarly to the result found in Section IV, increasing the temporal context typically results in increased performance. 
Models that incorporate dilated convolutions tended to have higher average accuracies than models without dilated convolutions. The best performing model in terms of average accuracy across all SNR conditions included SE blocks, dilated convolutions, and a ReLU activation prior to statistics pooling (model 1110), with an average accuracy of approximately 63.7%. This model also achieved the highest maximum accuracy of about 98.9% at the 22dB SNR level. SE blocks did not increase performance compared to model 0000 with the exception of models 1110 and 1111. However, SE blocks were incorporated in the best performing model, 1110. Self-attention was not found to aid classification performance in general with the proposed architecture. Self-attention introduces a large number of trainable parameters, possibly forming a more complex loss space. Table IV lists the performances of single modification (from baseline) architectures. Each component of the ablation study, with the exception of dilated convolutions, decreased performance when applied individually. When combined, however, the best performing model was found. Therefore, we conclude that the components could possibly aid each other's optimization--and, in general, dilated convolutions tend to have the most dramatic performance increases. ### _Accuracy Over Varying SNR_ Figure 11 summarizes the ablation study in terms of classification accuracy over varying SNR levels. We add this figure for completeness and reproducibility for other researchers. The accuracy within each SNR band is shown along with the modifications used, similar to Table III. The coloring in the figure denotes the accuracy in each SNR band. Performance follows a trend similar to that of a sigmoid function, where the rate at which peak classification accuracy is achieved is the most distinguishing feature between the different models. With the improved architectures, a maximum of 99% accuracy is achieved at high SNR levels (starting around 12dB SNR). While the proposed changes to the architectures generally improve performance at higher SNR levels, the largest improvements occur between \(-12\)dB and \(12\)dB compared to the baseline model in [7]. For example, at \(4\)dB, the performance increases from 75% up to 82%. Incorporating these modifications to the network may prove to be critical in real-world situations where noisy signals are likely to be obtained. Improving AMC performance at lower SNR ranges (\(<-12\)dB) is still an open research topic, with accuracies near chance level. One observation is that the best performing model can vary with SNR. In systems that have available memory and processing power, an approach similar to [2] may be used to utilize several models and intelligently choose predictions based on estimated SNR conditions. That is, if the SNR of the signal of interest is known, a model can be tuned to increase performance slightly, as shown in [2]. Using the results presented here, researchers could also choose the architecture differences that perform best for a given SNR range (although performance differences are subtle). ### _Parameter Count Tradeoff_ An overview of each model's complexity and overall performance across the entire testing set is shown in Table III. This information is also shown graphically in Figure 12 for the maximum accuracy over SNR and the average accuracy across all SNRs. Whether looking at the maximum or the average measures of performance, the conclusions are similar. 
The previously described binary model name also appears in the figure. We found a slight correlation between the number of model parameters and overall model performance; however, with the architectures explored, there was a general parameter count where performance peaked. Models with parameter counts between approximately 170k and 205k generally performed better than smaller and larger models. We note that the models with more than 205k parameters included self-attention, which was found to decrease model performance with the proposed architectures. This implies that one possible reason self-attention did not perform as well as other modifications is the increase in parameters, resulting in a more difficult loss space from which to optimize. Fig. 11: Ablation study results in terms of classification accuracy across SNR ranges. The best performing model is in the second to last row and displays strong performance across SNR values. Fig. 12: Ablation study parameter count tradeoff. The x-axis shows the number of trainable variables in each model and the y-axis shows max or average accuracy. The callout for each point denotes the model name as shown in Table III. ## IX Best Performing Model Investigation Due to the large volume of models, we focus upon the best performing model (model 1110) for the remainder of this work. As previously mentioned, this model employs all modifications except self-attention. ### _Top-K Accuracy_ As discussed, in systems where the modulation schemes must be classified quickly, it is advantageous to apply fewer demodulation schemes in a trial and error fashion. This is particularly significant at lower SNR values where accuracy is mediocre. Top-k accuracy allows an in-depth view of the expected number of trials before finding the correct modulation scheme. Although traditional accuracy (top-1 accuracy) characterizes the performance of the model in terms of classifying the exact variant, top-k accuracy characterizes the percentage of cases in which the classifier predicts the correct variant among its top-k predictions (sorted by descending class probabilities). We plot the top-1, top-2, and top-5 classification accuracy over varying SNR conditions for each modulation grouping defined in Section III in Figure 13. Although performance decays to approximately random chance for the overall (all modulation schemes) performance curves for each top-k accuracy, it is notable that some modulation group performances drop below random chance. The models are trained to maximize the overall model performance. This could explain why certain modulation groups dip below random chance while the overall performance and other modulation groups remain at or above random chance. Using the proposed method greatly reduces the correct modulation scheme search space. While high performance in top-1 accuracy is increasingly difficult to achieve with low SNR signals, top-2 and top-5 accuracy converge to higher values at a much faster rate. This indicates our proposed method greatly reduces the search space from 24 modulation candidates to fewer candidate types when employing trial and error methods to determine the correct modulation scheme. Further, if the modulation group is known (_e.g._, FM), one can view a more specific tradeoff curve in terms of SNR and top-k accuracy given in Figure 13. 
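A minimal sketch of the top-k metric used here (ours; PyTorch assumed): a prediction counts as correct when the true class appears among the k highest-scoring candidates, so k=1 recovers ordinary accuracy.

```python
import torch

def top_k_accuracy(logits: torch.Tensor, labels: torch.Tensor, k: int) -> float:
    """Fraction of samples whose true class lies among the k highest-scoring
    predictions; k=1 reduces to ordinary classification accuracy."""
    topk = logits.topk(k, dim=-1).indices            # (N, k) candidate classes
    hits = (topk == labels.unsqueeze(-1)).any(dim=-1)
    return hits.float().mean().item()

logits = torch.randn(1000, 24)                       # scores over 24 modulations
labels = torch.randint(0, 24, (1000,))
print([round(top_k_accuracy(logits, labels, k), 3) for k in (1, 2, 5)])
```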
### _Short Duration Signal Bursts_ Due to the rapid scanning characteristic of some modern software-defined radios, we investigate the trade-off between signal duration and AMC performance. This analysis is meant to emulate the situation wherein a receiver only detects a short RF signal burst. We investigate signal burst durations of 1.024 ms (full length signal from the original dataset), 512 \(\mu\)s, 256 \(\mu\)s, 128 \(\mu\)s, 64 \(\mu\)s, 32 \(\mu\)s, and 16 \(\mu\)s. We assume the same 1MS/sec sampling rate as in the previous analyses, such that a 16 \(\mu\)s burst is captured in 16 I/Q samples. In this section, we use the same test set as our other investigations; however, a uniformly random starting point is determined for each signal such that a contiguous sample of the desired duration, starting at the random point, is chosen. Thus, the chosen segment from a test set sample is randomly assigned. Fig. 14: Tradeoff in accuracy for various signal lengths across SNR, grouped by modulation category for the best performing model 1110. The top plot shows the baseline performance using the full sequence. Subsequent plots show the same information using increasingly smaller signal lengths for classification. Fig. 13: Accuracy over varying SNR conditions for model 1110 with (a), (b), and (c) showing the top-1, top-2, and top-5 accuracy respectively. Random chance for each is defined as 1/24, 2/24, and 5/24. We also note that, although the sample length for the evaluation is changed, the best performing model is the same architecture with the exact same trained weights, because this model uses statistics pooling from the X-Vector inspired modification. A significant benefit of the X-Vector inspired architecture is its ability to handle variable-length inputs without the need for padding, retraining, or other network modifications. This is achieved by taking global statistics across convolutional channels, producing a fixed-length vector regardless of signal duration. Due to this flexibility, the same model (model 1110) weights are used for each duration experiment. This fact also emphasizes the desirability of using X-Vector inspired AMC architectures for receivers that are deployed in an environment where short-burst and variable duration signals are anticipated to be present. For each signal duration in the time domain, we plot the overall classification accuracy over varying SNR conditions as well as the accuracy for each modulation grouping defined in Section III. Figure 14 demonstrates the tradeoff for various signal durations, where \(n\) is the number of samples from the time domain I/Q signal. The first observation is, as we would expect, that classification performance degrades with decreased signal duration. For example, the maximum accuracy begins to degrade at 256 \(\mu\)s and is more noticeable at 128 \(\mu\)s. This is likely a result of using sample statistics that result in unstable or biased estimates for short signal lengths, since the number of received signal data points is insufficient to characterize the sample statistics used during training. Random classification accuracy is approximately 4% and is shown as the black dotted line in Figure 14. Although classification performance decreases with decreased duration, we are still able to achieve significantly higher classification accuracy than random chance down to 16 \(\mu\)s of signal capture. 
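The truncation protocol above can be sketched as follows (our illustration; PyTorch assumed). Because statistics pooling removes the dependence on duration, the same trained model accepts the shortened input unchanged:

```python
import torch

def random_burst(signal: torch.Tensor, n: int) -> torch.Tensor:
    """Select a uniformly random contiguous n-sample window from a (..., T)
    I/Q signal, emulating a short detected burst (n=16 corresponds to a
    16 us capture at the assumed 1 MS/sec sampling rate)."""
    start = torch.randint(0, signal.shape[-1] - n + 1, (1,)).item()
    return signal[..., start:start + n]

x = torch.randn(2, 1024)        # full 1.024 ms capture (I and Q channels)
burst = random_burst(x, 16)     # 16 us burst
assert burst.shape == (2, 16)   # fed to the same trained model unchanged
```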
FM (frequency modulation) signals were typically more resilient to noise interference than AM (amplitude modulation) and AM-PM (amplitude and phase modulation) signals in our AMC. This was observed across all signal burst durations and our top-k accuracy analysis. This behavior indicates that the performance of our AMC for short bursts, in the presence of increasing amounts of noise, is more robust for signals modulated by changes in the carrier frequency and is more sensitive to signals modulated by varying the carrier amplitude. We attribute this behavior to our AMC architecture, the architecture of the receiver, or a combination of the two. ### _Confusion Matrices_ While classification accuracy provides a holistic view of model performance, it lacks the granularity to investigate where misclassifications are occurring. Confusion matrices are used to analyze the distribution of classifications for each given class. For each true label, the proportion of correctly classified samples is calculated along with the proportion of incorrect predictions for each opposing class. In this way, we can see which classes the model is struggling to distinguish from one another. A perfect classifier would yield the identity matrix, where the diagonal values indicate that the true class matches the predicted class. Each matrix value represents the percentage of classifications for the true label, and each row sums to 1 (100%). Figure 15 illustrates the class confusion matrices for SNR levels greater than or equal to 0dB for model 1110, the reproduced ResNet architecture from [1], and the baseline X-Vector architecture from [7], respectively. As shown in [7], the X-Vector architecture was able to distinguish PSK and AM-SSB variants to a higher degree and performed better overall than [1]. Both architectures struggled to differentiate QAM variants. Model 1110 improved upon these prior results for QAM signals and in general has higher diagonal components than the other architectures. This again supports the conclusion that model 1110 achieves a new state-of-the-art in AMC performance. Fig. 15: Confusion matrices for (a) model 1110 (best performing model from this work), (b) the reproduced ResNet model from [1], and (c) the X-Vector inspired model from [7] with SNR \(\geq\) 0dB. ## X Conclusion A comprehensive ablation study was carried out with regard to AMC architectural features using the extensive RadioML 2018.01A dataset. This ablation study built upon the strong performance of a new baseline model introduced in the initial investigation of this study. This initial investigation informed the design of a number of AMC architecture modifications--specifically, the use of X-Vectors, dilated convolutions, and SE blocks. With the combined modifications, we achieved a new state-of-the-art in AMC performance. Among these modifications, dilated convolutions were found to be the most critical architectural feature for model performance. Self-attention was also investigated but was not found to increase performance--although increased temporal context improved upon prior works.
2302.07441
Geometric Phases of Nonlinear Elastic $N$-Rotors via Cartan's Moving Frames
We study the geometric phases of nonlinear elastic $N$-rotors with continuous rotational symmetry. In the Hamiltonian framework, the geometric structure of the phase space is a principal fiber bundle, i.e., a base, or shape manifold~$\mathcal{B}$, and fibers $\mathcal{F}$ along the symmetry direction attached to it. The symplectic structure of the Hamiltonian dynamics determines the connection and curvature forms of the shape manifold. Using Cartan's structural equations with zero torsion we find an intrinsic (pseudo) Riemannian metric for the shape manifold. One has the freedom to define the rotation sign of the total angular momentum of the elastic rotors as either positive or negative, e.g., counterclockwise or clockwise, respectively, or viceversa. This endows the base manifold~$\mathcal{B}$ with two distinct metrics both compatible with the geometric phase. In particular, the metric is pseudo-Riemannian if $\mathsf{A}<0$, and the shape manifold is a $2$D~Robertson-Walker spacetime with positive curvature. For $\mathsf{A}>0$, the shape manifold is the hyperbolic plane $\mathbb{H}^2$ with negative curvature. We then generalize our results to free elastic $N$-rotors. We show that the associated shape manifold~$\mathcal{B}$ is reducible to the product manifold of $(N-1)$ hyperbolic planes $\mathbb{H}^2$~($\mathsf{A}>0$), or $2$D~Robertson-Walker spacetimes~($\mathsf{A}<0$) depending on the convection used to define the rotation sign of the total angular momentum. We then consider elastic $N$-rotors subject to time-dependent self-equilibrated moments. The $N$-dimensional shape manifold of the extended autonomous system has a structure similar to that of the $(N-1)$-dimensional shape manifold of free elastic rotors. The Riemannian structure of the shape manifold provides an intrinsic measure of the closeness of one shape to another in terms of curvature, or induced geometric phase.
Francesco Fedele, Arash Yavari
2023-02-15T03:16:26Z
http://arxiv.org/abs/2302.07441v3
# Geometric Phases of Nonlinear Planar \(N\)-Pendula ###### Abstract We study the geometric phases of nonlinear planar \(N\)-pendula with continuous rotational symmetry. In the Hamiltonian framework, the geometric structure of the phase space is a principal fiber bundle, i.e., a base, or shape manifold \(\mathcal{B}\), and fibers \(\mathcal{F}\) along the symmetry direction attached to it. The symplectic structure of the Hamiltonian dynamics determines the connection and curvature forms of the shape manifold. Using Cartan's structural equations with zero torsion we find an intrinsic (pseudo) Riemannian metric for the shape manifold. For a double pendulum the metric is pseudo-Riemannian if the total angular momentum \(\mathsf{A}<0\), and the shape manifold is an expanding spacetime with the Robertson-Walker metric and positive curvature. For \(\mathsf{A}>0\), the shape manifold is the hyperbolic plane \(\mathbb{H}^{2}\) with negative curvature. We then generalize our results to free \(N\)-pendula. We show that the associated shape manifold \(\mathcal{B}\) is reducible to the product manifold of \((N-1)\) hyperbolic planes \(\mathbb{H}^{2}\) (\(\mathsf{A}>0\)), or Robertson-Walker 2D spacetimes (\(\mathsf{A}<0\)). We then consider \(N\)-pendula subject to time-dependent self-equilibrated moments. The extended autonomous Hamiltonian system is considered. The associated shape manifold is a product space of \(N\) hyperbolic planes \(\mathbb{H}^{2}\) (\(\mathsf{A}>0\)), or Robertson-Walker 2D spacetimes (\(\mathsf{A}<0\)). In either case, the geometric phase follows by integrating a 2-form given by the sum of the sectional curvature forms of \(\mathcal{B}\). The Riemannian structure of the shape manifold provides an intrinsic measure of the closeness of one shape to another in terms of curvature, or induced geometric phase. **Keywords:** Geometric phase, Berry's phase, geometric drift, Cartan's moving frames. ###### Contents * 1 Introduction * 2 Differential geometry via Cartan's moving frames * 2.1 Non-metricity * 2.2 Torsion * 2.3 Curvature * 2.4 Pseudo-Riemannian manifolds * 2.5 Riemannian product spaces * 2.6 Cartan's curvature 2-forms of an \(N\)-dimensional pseudo-Riemannian manifold with a diagonal metric * 2.7 Curvature 2-forms of a 2-dimensional pseudo-Riemannian manifold with a diagonal metric * 3 Dynamics of (free) nonlinear planar double pendula * 3.1 The Hamiltonian structure * 3.2 Hamiltonian reduction and geometric phases * 3.3 Curvature and intrinsic metric of the shape manifold * 3.4 Geodesics of the metric * 3.4.1 Negative angular momentum: Robertson-Walker spacetime * 3.4.2 Positive angular momentum: The hyperbolic plane \(\mathbb{H}^{2}\) * 3.4.3 The set of all geodesics * 4 Dynamics of (free) nonlinear planar \(N\)-pendula * 4.1 Curvature and intrinsic metric of the shape manifold * 4.2 Negative angular momentum: Multi-universe * 4.3 Positive angular momentum: The hyperbolic product space \(\mathbb{H}^{2(N-1)}\) * 5 Dynamics of nonlinear planar \(N\)-pendula under self-equilibrated external moments * 5.1 Extended autonomous Hamiltonian system * 5.2 Curvature and intrinsic metric of the shape manifold * 6 Conclusions ## 1 Introduction A classical example in which geometric phases arise is the parallel transport of a vector tangent to a sphere. The change in the vector direction is equal to the solid angle of the closed path spanned by the vector and it can be described by Hannay's angles (Hannay, 1985). 
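For concreteness, this classical statement can be phrased via the Gauss-Bonnet theorem (a standard fact recalled here purely as an illustration, not a result of this paper): for a vector parallel transported around a simple closed loop \(\gamma=\partial S\) on the unit sphere, where the Gaussian curvature is \(K=1\), the holonomy angle is \[\alpha=\iint_{S}K\,\mathrm{d}A=\mathrm{Area}(S),\] i.e., the rotation of the transported vector equals the solid angle enclosed by the loop.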
The rate at which the angle, or geometric phase, changes in time is the geometric phase velocity. In physics, the rotation of Foucault's pendulum can also be explained by means of geometric phases. Pancharatnam (1956) discovered their effects in polarized light, and later Berry (1984a) rediscovered them for quantum-mechanical systems (see also (Berry, 1990; Simon, 1983a; Aharonov and Anandan, 1987; Garrison and Chiao, 1988)). Berry (1984b, 1990) showed that a quantum mechanical system that undergoes an adiabatic evolution acquires a phase factor that is purely geometric.

Another example drawn from classical mechanics is the spinning body in a dissipationless medium, which has a rotational symmetry with respect to the axis of rotation. The associated angular, or geometric phase velocity \(\Omega\) follows from the conservation of the angular momentum \(I\Omega\), where \(I\) is the mass moment of inertia. If the body changes shape, \(I\) varies over time and so does the angular speed \(\Omega\). In the frame rotating at that speed, one only observes the body shape-changing dynamics and the rotational symmetry is reduced. In a fixed frame one cannot distinguish between the body deformation and spinning motion. In general, geometric phases are observed in classical mechanical systems with internal variables that rule their shape deformations, and variables that rule the rigid translation of the system as a whole. A cyclic motion of the shape variables can induce a rigid translation if the total momentum is conserved.

In classical and quantum mechanics the key geometrical structure is the symplectic form of a Hamiltonian. The Riemannian structure and a metric are traditionally associated to the theories of General Relativity and gravitation. In quantum mechanics, the scalar product on the Hilbert space naturally induces a distance between quantum states, but the interest is not in the local properties of the manifold of states. The physically relevant quantities are transition probability amplitudes between quantum states, which do not depend on their relative distance. However, Provost and Vallee (1980) argued that for macroscopic systems exhibiting collective behaviour, the possibility of going from one state to another is not described by a direct transition amplitude (scalar product in Hilbert space) but rather through a succession of infinitesimal steps on the manifold of collective states. The relevant distance between distinct states is then the distance measured along geodesics on the manifold. In quantum mechanics, the Riemannian metric is the Fubini-Study metric of complex projective spaces (Provost and Vallee, 1980; Anandan, 1991). The importance of the associated geodesic curves stems from the fact that Berry's phase between two quantum states can be expressed by integrating the associated connection form along the geodesic between the two states (Samuel and Bhandari, 1988; Wilczek and Shapere, 1989). As a matter of fact, the quantum metric provides the infinitesimal distance between two nearby states differing by a Berry phase. Such a distance measures the quantum fluctuations between the two states (Provost and Vallee, 1980).

In fluid mechanics, the motion of a swimmer at low Reynolds numbers can be explained in terms of geometric phases (Shapere and Wilczek, 1987, 1989). Swimmers can cyclically change their shape (internal variables) to move forward (translation variables).
Since inertia is neglected, the swimmer's velocity is uniquely determined by the geometry of the sequence of its body's shapes, which leads to a net translation, i.e., the geometric phase. A fixed observer sees the swimmer drifting as its body shape cyclically changes over time, but it is hard to distinguish between the two motions. On the contrary, an observer moving with the swimmer sees only its body deformations, and translation symmetry is reduced in the (symmetry-reduced) moving frame.

In wave mechanics, the slowdown of large oceanic wave groups can be explained in terms of geometric phases (Fedele, 2014; Banner et al., 2014; Fedele et al., 2020). Channel flow turbulence governed by the Navier-Stokes equations admits a continuous translation symmetry. Vortical structures, i.e., packets of vorticity, advect downstream at a speed that depends on their intrinsic inertia (dynamical phase) and on the way their _shape_ varies over time (geometric phase). Fedele et al. (2015) showed that the geometric phase component of the vortex speed can be interpreted as a _self-propulsion_ velocity induced by the shape-changing vortex deformations, similar to the motion of a swimmer at low Reynolds numbers (Shapere and Wilczek, 1989). In the literature, geometric phases have been understood in terms of holonomy of connections on vector bundles (Simon, 1983).

In this paper we study geometric phases of nonlinear \(N\)-pendula in the Hamiltonian framework (Marsden et al., 1990), exploiting Cartan's moving frames to characterize the Riemannian structure of the reduced dynamics. We first present a complete analysis of the geometric phases of a coupled planar double pendulum, which conserves total angular momentum. This problem was discussed by Marsden et al. (1990) to introduce the approach of Hamiltonian reduction for mechanical systems with a continuous Lie symmetry. Such a symmetry implies that the associated phase space has the structure of a principal fiber bundle, i.e., a shape manifold and transversal fibers attached to it. The symplectic form of the Hamiltonian dynamics yields the connection form on the shape manifold, which thus determines the horizontal transport through the fiber bundle. A cyclic flow on the shape manifold induces a drift along the fibers. This includes dynamic and geometric phases. The dynamic phase increases with the time spent by the flow to wander around the phase space and answers the question: "How long did your trip take?" (Berry, 1984a). On the contrary, the geometric phase is independent of time and depends only upon the curvature of the shape manifold, answering the question: "Where have you been?" (Berry, 1984a). The geometric phase is defined by the connection form. Marsden et al. (1990) define the associated geometric phases and relate them to the curvature form of the shape manifold. Here, we present a new analysis exploiting Cartan's first structural equations with zero torsion and derive the intrinsic Riemannian structure of the shape manifold, which, to the best of our knowledge, has not been investigated to date. The use of Cartan's moving frames in studying the geometric phases of nonlinear \(N\)-pendula is motivated by the success of the applications of Cartan's machinery in the analysis of distributed defects in nonlinear solids by the second author and co-workers (Yavari and Goriely, 2012, 2013, 2014). This paper is organized as follows.
We first review the theory of Cartan's moving frames and associated connection and curvature forms. The theory is then applied to pseudo-Riemannian manifolds. As a special case we derive the Cartan curvature forms of an \(N\)-dimensional manifold with a diagonal metric. We then introduce the problem of a planar double pendulum in the Hamiltonian setting. The geometric phases of the system are then studied and an intrinsic metric of the shape manifold is derived. We then extend our study to the geometric phases of free nonlinear \(N\)-pendula and \(N\)-pendula subject to self-equilibrating external moments. Finally, we discuss the physical relevance of the intrinsic metric for applications, and in particular, to fluid turbulence.

## 2 Differential geometry via Cartan's moving frames

Given an \(N\)-manifold \(\mathcal{B}\) with a metric \(\mathbf{G}\) and an affine connection \(\nabla\), \((\mathcal{B},\nabla,\mathbf{G})\) is called a metric-affine manifold [Gordeeva et al., 2010]. Here we mainly follow Hehl and Obukhov [2003] and Sternberg [2013]. Let us consider an orthonormal frame field \(\{\mathbf{e}_{1}(X),\ldots,\mathbf{e}_{N}(X)\}\) that at every point \(X\in\mathcal{B}\) forms a basis for the tangent space \(T_{X}\mathcal{B}\). A moving frame is, in general, a non-coordinate basis for the tangent space. The moving frame field \(\{\mathbf{e}_{\alpha}\}\) defines the moving co-frame field \(\{\vartheta^{1},\ldots,\vartheta^{N}\}\) such that \(\vartheta^{\alpha}(\mathbf{e}_{\beta})=\delta^{\alpha}_{\beta}\), where \(\delta^{\alpha}_{\beta}\) is the Kronecker delta. As the moving frame is assumed to be orthonormal, i.e., \(\left\langle\!\left\langle\mathbf{e}_{\alpha},\mathbf{e}_{\beta}\right\rangle\!\right\rangle_{\mathbf{G}}=\delta_{\alpha\beta}\), where \(\left\langle\!\left\langle.,.\right\rangle\!\right\rangle_{\mathbf{G}}\) is the inner product induced by the metric \(\mathbf{G}\), with respect to the moving frame the metric has the representation \[\mathbf{G}=\delta_{\alpha\beta}\,\vartheta^{\alpha}\otimes\vartheta^{\beta}\,, \tag{2.1}\] where summation over repeated indices is assumed. An affine (linear) connection is an operation \(\nabla:\mathcal{X}(\mathcal{B})\times\mathcal{X}(\mathcal{B})\rightarrow\mathcal{X}(\mathcal{B})\), where \(\mathcal{X}(\mathcal{B})\) is the set of vector fields on \(\mathcal{B}\), with the following properties: a) \(\nabla_{f_{1}\mathbf{X}_{1}+f_{2}\mathbf{X}_{2}}\mathbf{Y}=f_{1}\nabla_{\mathbf{X}_{1}}\mathbf{Y}+f_{2}\nabla_{\mathbf{X}_{2}}\mathbf{Y}\), b) \(\nabla_{\mathbf{X}}(a_{1}\mathbf{Y}_{1}+a_{2}\mathbf{Y}_{2})=a_{1}\nabla_{\mathbf{X}}\mathbf{Y}_{1}+a_{2}\nabla_{\mathbf{X}}\mathbf{Y}_{2}\), and c) \(\nabla_{\mathbf{X}}(f\mathbf{Y})=f\nabla_{\mathbf{X}}\mathbf{Y}+(\mathbf{X}f)\mathbf{Y}\), where \(\mathbf{X}\), \(\mathbf{Y}\), \(\mathbf{X}_{1}\), \(\mathbf{X}_{2}\), \(\mathbf{Y}_{1}\), and \(\mathbf{Y}_{2}\) are arbitrary vector fields, \(f,f_{1},f_{2}\) are arbitrary functions, and \(a_{1},a_{2}\) are arbitrary scalars. The vector \(\nabla_{\mathbf{X}}\mathbf{Y}\) is the covariant derivative of \(\mathbf{Y}\) along \(\mathbf{X}\). Given the connection \(\nabla\), the connection 1-forms are defined as \[\nabla\mathbf{e}_{\alpha}=\mathbf{e}_{\gamma}\otimes\omega^{\gamma}{}_{\alpha}\,.
\tag{2.2}\] The connection coefficients are defined as \(\nabla_{\mathbf{e}_{\beta}}\mathbf{e}_{\alpha}=\left\langle\omega^{\gamma}{}_{\alpha},\mathbf{e}_{\beta}\right\rangle\mathbf{e}_{\gamma}=\omega^{\gamma}{}_{\beta\alpha}\,\mathbf{e}_{\gamma}\).1 Thus, the connection 1-forms have the representation \(\omega^{\gamma}{}_{\alpha}=\omega^{\gamma}{}_{\beta\alpha}\,\vartheta^{\beta}\). It is straightforward to show that \(\nabla\vartheta^{\alpha}=-\omega^{\alpha}{}_{\gamma}\,\vartheta^{\gamma}\), and \(\nabla_{\mathbf{e}_{\beta}}\vartheta^{\alpha}=-\omega^{\alpha}{}_{\beta\gamma}\,\vartheta^{\gamma}\).

Footnote 1: \(\left\langle.,.\right\rangle\) is the natural pairing of 1-forms and vectors.

A coordinate chart \(\{X^{A}\}\) for \(\mathcal{B}\) defines a coordinate basis \(\left\{\partial_{A}=\frac{\partial}{\partial X^{A}}\right\}\) for \(T_{X}\mathcal{B}\). The moving frame field \(\{\mathbf{e}_{\alpha}\}\) is related to the coordinate basis by a \(GL(N,\mathbb{R})\)-rotation: \(\mathbf{e}_{\alpha}=\mathsf{F}_{\alpha}{}^{A}\,\partial_{A}\). In order to preserve orientation, it is assumed that \(\det\mathsf{F}_{\alpha}{}^{A}>0\). The relation between the moving and coordinate co-frames is \(\vartheta^{\alpha}=\mathsf{F}^{\alpha}{}_{A}\,dX^{A}\), where \([\mathsf{F}^{\alpha}{}_{A}]\) is the inverse of \([\mathsf{F}_{\alpha}{}^{A}]\). For the coordinate frame \([\partial_{A},\partial_{B}]=0\), where \([\mathbf{X},\mathbf{Y}]=\mathbf{X}\mathbf{Y}-\mathbf{Y}\mathbf{X}\) is the Lie bracket (commutator) of the vector fields \(\mathbf{X}\) and \(\mathbf{Y}\). For an arbitrary scalar field \(f\), \([\mathbf{X},\mathbf{Y}][f]=\mathbf{X}\left[\mathbf{Y}[f]\right]-\mathbf{Y}\left[\mathbf{X}[f]\right]\). For the moving frame field one has \[[\mathbf{e}_{\alpha},\mathbf{e}_{\beta}]=-c^{\gamma}{}_{\alpha\beta}\,\mathbf{e}_{\gamma}\,, \tag{2.3}\] where \(c^{\gamma}{}_{\alpha\beta}\) are components of the _object of anholonomy_ \(c^{\gamma}=d\vartheta^{\gamma}\). Noting that \[c^{\gamma}=d\left(\mathsf{F}^{\gamma}{}_{B}\,dX^{B}\right)=\sum_{\alpha<\beta}c^{\gamma}{}_{\alpha\beta}\,\vartheta^{\alpha}\wedge\vartheta^{\beta}\,, \tag{2.4}\] one can show that \[c^{\gamma}{}_{\alpha\beta}=\mathsf{F}_{\alpha}{}^{A}\,\mathsf{F}_{\beta}{}^{B}\left(\partial_{A}\mathsf{F}^{\gamma}{}_{B}-\partial_{B}\mathsf{F}^{\gamma}{}_{A}\right)\,. \tag{2.5}\] In the local chart \(\{X^{A}\}\), \(\nabla_{\partial_{A}}\partial_{B}=\Gamma^{C}{}_{AB}\,\partial_{C}\), where \(\Gamma^{C}{}_{AB}\) are the Christoffel symbols of the connection.

### Non-metricity

For a metric-affine manifold \((\mathcal{B},\nabla,\mathbf{G})\), non-metricity \(\boldsymbol{\mathcal{Q}}:\mathcal{X}(\mathcal{B})\times\mathcal{X}(\mathcal{B})\times\mathcal{X}(\mathcal{B})\rightarrow\mathcal{X}(\mathcal{B})\) is defined as \[\boldsymbol{\mathcal{Q}}(\mathbf{X},\mathbf{Y},\mathbf{Z})=\left\langle\!\left\langle\nabla_{\mathbf{X}}\mathbf{Y},\mathbf{Z}\right\rangle\!\right\rangle_{\mathbf{G}}+\left\langle\!\left\langle\mathbf{Y},\nabla_{\mathbf{X}}\mathbf{Z}\right\rangle\!\right\rangle_{\mathbf{G}}-\mathbf{X}\left[\left\langle\!\left\langle\mathbf{Y},\mathbf{Z}\right\rangle\!\right\rangle_{\mathbf{G}}\right]. \tag{2.6}\] In the moving frame \(\{\mathbf{e}_{\alpha}\}\), \(\mathcal{Q}_{\gamma\alpha\beta}=\boldsymbol{\mathcal{Q}}(\mathbf{e}_{\gamma},\mathbf{e}_{\alpha},\mathbf{e}_{\beta})\). Non-metricity 1-forms are defined as \(\mathcal{Q}_{\alpha\beta}=\mathcal{Q}_{\gamma\alpha\beta}\,\vartheta^{\gamma}\).
One can show that \(\mathcal{Q}_{\gamma\alpha\beta}=\omega^{\xi}{}_{\gamma\alpha}\,G_{\xi\beta}+\omega^{\xi}{}_{\gamma\beta}\,G_{\xi\alpha}-\left\langle dG_{\alpha\beta},\mathbf{e}_{\gamma}\right\rangle=\omega_{\beta\gamma\alpha}+\omega_{\alpha\gamma\beta}-\left\langle dG_{\alpha\beta},\mathbf{e}_{\gamma}\right\rangle\), where \(d\) is the exterior derivative. Hence \[\mathcal{Q}_{\alpha\beta}=\omega_{\alpha\beta}+\omega_{\beta\alpha}-dG_{\alpha\beta}\,. \tag{2.7}\] This is _Cartan's zeroth structural equation_. For an orthonormal frame \(G_{\alpha\beta}=\delta_{\alpha\beta}\) and hence \[\mathcal{Q}_{\alpha\beta}=\omega_{\alpha\beta}+\omega_{\beta\alpha}. \tag{2.8}\] The connection \(\nabla\) is compatible with the metric \(\mathbf{G}\) if non-metricity vanishes, i.e., \[\nabla_{\mathbf{X}}\!\left<\!\left<\mathbf{Y},\mathbf{Z}\right>\!\right>_{\mathbf{G}}=\left<\!\left<\nabla_{\mathbf{X}}\mathbf{Y},\mathbf{Z}\right>\!\right>_{\mathbf{G}}+\left<\!\left<\mathbf{Y},\nabla_{\mathbf{X}}\mathbf{Z}\right>\!\right>_{\mathbf{G}}. \tag{2.9}\] This is equivalent to \(\nabla\mathbf{G}=\mathbf{0}\), which in a coordinate chart reads \(G_{AB|C}=G_{AB,C}-\Gamma^{D}{}_{CA}G_{DB}-\Gamma^{D}{}_{CB}G_{AD}=0\). With respect to the moving frame, \(\omega_{\alpha\beta}+\omega_{\beta\alpha}=0\), i.e., the connection 1-forms of a metric-compatible connection are anti-symmetric.

### Torsion

Torsion \(\boldsymbol{T}:\mathcal{X}(\mathcal{B})\times\mathcal{X}(\mathcal{B})\rightarrow\mathcal{X}(\mathcal{B})\) of the connection \(\nabla\) is defined as \[\boldsymbol{T}(\mathbf{X},\mathbf{Y})=\nabla_{\mathbf{X}}\mathbf{Y}-\nabla_{\mathbf{Y}}\mathbf{X}-\left[\mathbf{X},\mathbf{Y}\right]. \tag{2.10}\] In a local chart \(\{X^{A}\}\), torsion has components \(T^{A}{}_{BC}=\Gamma^{A}{}_{BC}-\Gamma^{A}{}_{CB}\). With respect to the moving frame torsion has the components \(T^{\alpha}{}_{\beta\gamma}=\omega^{\alpha}{}_{\beta\gamma}-\omega^{\alpha}{}_{\gamma\beta}+c^{\alpha}{}_{\beta\gamma}\). The torsion 2-forms have the following relations with the connection 1-forms \[\mathcal{T}^{\alpha}=d\vartheta^{\alpha}+\omega^{\alpha}{}_{\beta}\wedge\vartheta^{\beta}\,. \tag{2.11}\] These are called _Cartan's first structural equations_. The connection \(\nabla\) is symmetric if it is torsion-free, i.e., \(\nabla_{\mathbf{X}}\mathbf{Y}-\nabla_{\mathbf{Y}}\mathbf{X}=\left[\mathbf{X},\mathbf{Y}\right]\). With respect to the moving frame, \(d\vartheta^{\alpha}+\omega^{\alpha}{}_{\beta}\wedge\vartheta^{\beta}=0\).

### Curvature

The curvature \(\boldsymbol{\mathcal{R}}:\mathcal{X}(\mathcal{B})\times\mathcal{X}(\mathcal{B})\times\mathcal{X}(\mathcal{B})\rightarrow\mathcal{X}(\mathcal{B})\) of the affine connection \(\nabla\) is defined as \[\boldsymbol{\mathcal{R}}(\mathbf{X},\mathbf{Y})\mathbf{Z}=\nabla_{\mathbf{X}}\nabla_{\mathbf{Y}}\mathbf{Z}-\nabla_{\mathbf{Y}}\nabla_{\mathbf{X}}\mathbf{Z}-\nabla_{[\mathbf{X},\mathbf{Y}]}\mathbf{Z}\,. \tag{2.12}\] In a coordinate chart, \(\mathcal{R}^{A}{}_{BCD}=\Gamma^{A}{}_{CD,B}-\Gamma^{A}{}_{BD,C}+\Gamma^{A}{}_{BM}\,\Gamma^{M}{}_{CD}-\Gamma^{A}{}_{CM}\,\Gamma^{M}{}_{BD}\). With respect to the moving frame, the curvature tensor has the components \(\mathcal{R}^{\alpha}{}_{\beta\lambda\mu}=\partial_{\beta}\omega^{\alpha}{}_{\lambda\mu}-\partial_{\lambda}\omega^{\alpha}{}_{\beta\mu}+\omega^{\alpha}{}_{\beta\xi}\,\omega^{\xi}{}_{\lambda\mu}-\omega^{\alpha}{}_{\lambda\xi}\,\omega^{\xi}{}_{\beta\mu}+\omega^{\alpha}{}_{\xi\mu}\,c^{\xi}{}_{\beta\lambda}\).
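The structural equations introduced so far are easy to check symbolically on a familiar example. The following minimal SymPy sketch is our illustration (not part of the original development): it assumes the polar orthonormal co-frame \(\vartheta^{1}=dr\), \(\vartheta^{2}=r\,d\theta\) on the Euclidean plane and verifies that \(\omega^{1}{}_{2}=-d\theta\) satisfies Cartan's first structural equations with zero torsion, and that the associated curvature 2-form vanishes, as expected for the flat plane.

```python
# Sketch (assumes SymPy): check Cartan's first structural equations
#   T^a = d(theta^a) + w^a_b ^ theta^b = 0
# for the polar orthonormal co-frame theta^1 = dr, theta^2 = r dtheta,
# with Levi-Civita connection 1-form w^1_2 = -dtheta (so w^2_1 = +dtheta).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# A 1-form a dr + b dtheta is stored as (a, b); a 2-form f dr^dtheta as f.
def d(form):                      # exterior derivative of a 1-form
    a, b = form
    return sp.simplify(sp.diff(b, r) - sp.diff(a, th))

def wedge(f1, f2):                # wedge product of two 1-forms
    return sp.simplify(f1[0]*f2[1] - f1[1]*f2[0])

theta1 = (sp.Integer(1), sp.Integer(0))   # dr
theta2 = (sp.Integer(0), r)               # r dtheta
w12 = (sp.Integer(0), sp.Integer(-1))     # w^1_2 = -dtheta
w21 = (-w12[0], -w12[1])                  # w^2_1 = -w^1_2 (metric compatibility)

# torsion 2-forms: both vanish, so w^1_2 is the Levi-Civita connection
T1 = d(theta1) + wedge(w12, theta2)
T2 = d(theta2) + wedge(w21, theta1)
print(T1, T2)                              # 0 0

# in 2D the quadratic term w^1_g ^ w^g_2 vanishes, so R^1_2 = d(w^1_2): flat plane
print(d(w12))                              # 0
```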
The curvature 2-forms are defined as \[\mathcal{R}^{\alpha}{}_{\beta}=d\omega^{\alpha}{}_{\beta}+\omega^{\alpha}{}_{\gamma}\wedge\omega^{\gamma}{}_{\beta}\,. \tag{2.13}\] These are called _Cartan's second structural equations_. Requiring that \(\nabla\) be both metric compatible and torsion free determines it uniquely. This is the Levi-Civita connection. With respect to a coordinate chart \(\{X^{A}\}\) it has the connection coefficients (Christoffel symbols) \(\Gamma^{C}{}_{AB}=\frac{1}{2}G^{CD}(G_{BD,A}+G_{AD,B}-G_{AB,D})\). The Levi-Civita connection 1-forms can be explicitly calculated [14]. Using Cartan's first structural equations, \(d\vartheta^{\alpha}=-\omega^{\alpha}{}_{\beta}\wedge\vartheta^{\beta}\). Thus \[d\vartheta^{\alpha}(\mathbf{e}_{\beta},\mathbf{e}_{\gamma})=-(\omega^{\alpha}{}_{\xi}\wedge\vartheta^{\xi})(\mathbf{e}_{\beta},\mathbf{e}_{\gamma})=-\omega^{\alpha}{}_{\xi}(\mathbf{e}_{\beta})\,\vartheta^{\xi}(\mathbf{e}_{\gamma})+\omega^{\alpha}{}_{\xi}(\mathbf{e}_{\gamma})\,\vartheta^{\xi}(\mathbf{e}_{\beta})=-\omega^{\alpha}{}_{\beta\gamma}+\omega^{\alpha}{}_{\gamma\beta}\,. \tag{2.14}\] Similarly, \[d\vartheta^{\beta}(\mathbf{e}_{\gamma},\mathbf{e}_{\alpha})=-\omega^{\beta}{}_{\gamma\alpha}+\omega^{\beta}{}_{\alpha\gamma}\,,\quad d\vartheta^{\gamma}(\mathbf{e}_{\alpha},\mathbf{e}_{\beta})=-\omega^{\gamma}{}_{\alpha\beta}+\omega^{\gamma}{}_{\beta\alpha}\,. \tag{2.15}\] Thus \[d\vartheta^{\alpha}(\mathbf{e}_{\beta},\mathbf{e}_{\gamma})+d\vartheta^{\beta}(\mathbf{e}_{\gamma},\mathbf{e}_{\alpha})-d\vartheta^{\gamma}(\mathbf{e}_{\alpha},\mathbf{e}_{\beta})=2\,\omega^{\alpha}{}_{\gamma\beta}\,, \tag{2.16}\] where use was made of the fact that for a metric compatible connection \(\omega^{\alpha}{}_{\gamma\beta}+\omega^{\beta}{}_{\gamma\alpha}=0\). Thus \[\omega^{\alpha}{}_{\gamma\beta}=\frac{1}{2}\left[d\vartheta^{\alpha}(\mathbf{e}_{\beta},\mathbf{e}_{\gamma})+d\vartheta^{\beta}(\mathbf{e}_{\gamma},\mathbf{e}_{\alpha})-d\vartheta^{\gamma}(\mathbf{e}_{\alpha},\mathbf{e}_{\beta})\right]\,. \tag{2.17}\] The components of the Riemann curvature and the Ricci tensor are related to the curvature 2-forms as \[\text{Riem}^{\alpha}\,_{\beta\xi\eta}=\mathcal{R}^{\alpha}\,_{\beta}(\mathbf{e}_{\xi},\mathbf{e}_{\eta})\,,\qquad\text{Ric}_{\alpha\beta}=\mathcal{R}^{\gamma}\,_{\alpha}(\mathbf{e}_{\gamma},\mathbf{e}_{\beta})\,. \tag{2.18}\] The Ricci scalar is defined as \(\text{R}=\text{Ric}_{\alpha\beta}\,\delta^{\alpha\beta}\). Note that with respect to the moving frame \(G_{\alpha\beta}=\delta_{\alpha\beta}\), and hence \(G^{\alpha\beta}=\delta^{\alpha\beta}\). In the coordinate chart \(\{X^{A}\}\) the metric has the components \(G_{AB}=\mathsf{F}_{A}{}^{\alpha}\,\mathsf{F}_{B}{}^{\beta}\,\delta_{\alpha\beta}\) and the Riemann and Ricci tensors given in (2.18) have the following components \[\text{Riem}^{A}\,_{BCD}=\mathsf{F}_{\alpha}{}^{A}\,\mathsf{F}_{B}{}^{\beta}\,\mathsf{F}_{C}{}^{\xi}\,\mathsf{F}_{D}{}^{\eta}\,\,\text{Riem}^{\alpha}\,_{\beta\xi\eta}\,,\qquad\text{Ric}_{AB}=\mathsf{F}_{A}{}^{\alpha}\,\mathsf{F}_{B}{}^{\beta}\,\,\text{Ric}_{\alpha\beta}\,, \tag{2.19}\] where \(\mathsf{F}_{A}{}^{\gamma}\,\mathsf{F}_{\gamma}{}^{B}=\delta_{A}^{B}\). The Ricci scalar reads \(\text{R}=\text{Ric}_{AB}\ G^{AB}\), where \(G^{AB}=\mathsf{F}_{\alpha}{}^{A}\,\mathsf{F}_{\beta}{}^{B}\,\delta^{\alpha\beta}\) is the inverse of the metric \(G_{AB}\) in the coordinate frame. Since the Ricci scalar is an invariant, its value is the same in any frame.
As a matter of fact, \(\text{R}=\text{Ric}_{AB}\ G^{AB}=\mathsf{F}_{A}{}^{\alpha}\,\mathsf{F}_{B}{}^{\beta}\,\text{Ric}_{\alpha\beta}\ \mathsf{F}_{\gamma}{}^{A}\,\mathsf{F}_{\rho}{}^{B}\,\delta^{\gamma\rho}=(\mathsf{F}_{A}{}^{\alpha}\,\mathsf{F}_{\gamma}{}^{A})(\mathsf{F}_{B}{}^{\beta}\,\mathsf{F}_{\rho}{}^{B})\,\text{Ric}_{\alpha\beta}\,\delta^{\gamma\rho}=\delta_{\gamma}^{\alpha}\,\delta_{\rho}^{\beta}\,\,\text{Ric}_{\alpha\beta}\,\,\delta^{\gamma\rho}=\text{Ric}_{\alpha\beta}\,\,\delta^{\alpha\beta}\).

### Pseudo-Riemannian manifolds

For a pseudo-Riemannian manifold, in Cartan's moving frame the metric \(\mathbf{G}\) in Eq. (2.1) generalizes to [Hehl and Obukhov, 2003; Sternberg, 2013] \[\mathbf{G}=\sum_{\alpha=1}^{N}\epsilon_{\alpha}\,\vartheta^{\alpha}\otimes\vartheta^{\alpha}\,, \tag{2.20}\] where \(\epsilon_{\alpha}=\pm 1\), and \((\epsilon_{1},\ldots,\epsilon_{N})\) is the signature of the manifold. The orthonormality of the moving frame field implies that \(\left\langle\!\left\langle\mathbf{e}_{\alpha},\mathbf{e}_{\beta}\right\rangle\!\right\rangle_{\mathbf{G}}=\delta_{\alpha\beta}\,\epsilon_{\alpha}\) (no summation on \(\alpha\)). If the connection is metric compatible, one has \[\omega^{\gamma}{}_{\alpha}\,\delta_{\gamma\beta}\,\epsilon_{\beta}+\omega^{\gamma}{}_{\beta}\,\delta_{\gamma\alpha}\,\epsilon_{\alpha}=0\quad\text{(no summation on $\alpha$ or $\beta$)}\,, \tag{2.21}\] or \[\omega_{\alpha\beta}+\omega_{\beta\alpha}=0\,. \tag{2.22}\] Thus \[\epsilon_{\alpha}\,\omega^{\alpha}{}_{\beta}+\epsilon_{\beta}\,\omega^{\beta}{}_{\alpha}=0\quad\text{(no summation on $\alpha$ or $\beta$)}\,, \tag{2.23}\] which is equivalent to \[\omega^{\alpha}{}_{\beta}=-\epsilon_{\alpha}\,\epsilon_{\beta}\,\omega^{\beta}{}_{\alpha}\quad\text{(no summation on $\alpha$ or $\beta$)}\,. \tag{2.24}\] The first and the second structural equations remain unchanged. The expressions for the Riemann and Ricci curvatures remain unaltered as well. The Ricci scalar has the following expression \[\text{R}=\text{Ric}_{\alpha\beta}\,G^{\alpha\beta}=\sum_{\alpha=1}^{N}\text{Ric}_{\alpha\alpha}\ \epsilon_{\alpha}\,. \tag{2.25}\]

### Riemannian product spaces

Let \((\mathcal{B}_{2},\mathbf{G}_{2})\), ..., and \((\mathcal{B}_{N},\mathbf{G}_{N})\) be Riemannian manifolds and \(\mathcal{B}_{2}\times\cdots\times\mathcal{B}_{N}\) be their product manifold. At any point \((X_{2},\cdots,X_{N})\in\mathcal{B}_{2}\times\cdots\times\mathcal{B}_{N}\), one has the direct sum \(T_{(X_{2},\ldots,X_{N})}(\mathcal{B}_{2}\times\cdots\times\mathcal{B}_{N})\cong T_{X_{2}}\mathcal{B}_{2}\oplus\cdots\oplus T_{X_{N}}\mathcal{B}_{N}\), where \(\cong\) means "isomorphic to". The product metric \(\mathbf{G}_{2}\times\cdots\times\mathbf{G}_{N}\) on \(\mathcal{B}_{2}\times\cdots\times\mathcal{B}_{N}\) is defined as \[\mathbf{G}_{2}\times\cdots\times\mathbf{G}_{N}\big{|}_{(X_{2},\ldots,X_{N})}=\mathbf{G}_{2}\big{|}_{X_{2}}+\cdots+\mathbf{G}_{N}\big{|}_{X_{N}}\,,\quad\forall X_{2}\in\mathcal{B}_{2},\cdots,X_{N}\in\mathcal{B}_{N}\,. \tag{2.26}\] The Riemannian manifold \((\mathcal{B}_{2}\times\cdots\times\mathcal{B}_{N},\mathbf{G}_{2}\times\cdots\times\mathbf{G}_{N})\) is called a Riemannian product space [Joyce, 2007]. If a Riemannian manifold is isometric to a Riemannian product space, it is called reducible (decomposable). Otherwise, it is irreducible (indecomposable).
It should be noted that for a Riemannian product space, \[\nabla^{\mathbf{G}_{2}\times\cdots\times\mathbf{G}_{N}}_{(\mathbf{U}_{2},\cdots,\mathbf{U}_{N})}(\mathbf{W}_{2},\cdots,\mathbf{W}_{N})=\left(\nabla^{\mathbf{G}_{2}}_{\mathbf{U}_{2}}\mathbf{W}_{2},\cdots,\nabla^{\mathbf{G}_{N}}_{\mathbf{U}_{N}}\mathbf{W}_{N}\right)\,. \tag{2.27}\] In particular, the Ricci curvature of the Riemannian product space is written as \[\text{Ric}\left((\mathbf{U}_{2},\cdots,\mathbf{U}_{N}),(\mathbf{W}_{2},\cdots,\mathbf{W}_{N})\right)=\text{Ric}_{2}(\mathbf{U}_{2},\mathbf{W}_{2})+\cdots+\text{Ric}_{N}(\mathbf{U}_{N},\mathbf{W}_{N})\,. \tag{2.28}\]

### Cartan's curvature 2-forms of an \(N\)-dimensional pseudo-Riemannian manifold with a diagonal metric

Consider an \(N\)-dimensional pseudo-Riemannian manifold \(\mathcal{B}\) with a diagonal metric \[\mathbf{G}=\sum_{A=1}^{N}\epsilon_{A}\,\mathsf{G}_{A}\,dX^{A}\otimes dX^{A}=\sum_{A=1}^{N}\epsilon_{A}\,\sqrt{\mathsf{G}_{A}}\,dX^{A}\otimes\sqrt{\mathsf{G}_{A}}\,dX^{A}\,. \tag{2.29}\] Let us define the co-frame field \[E^{*}=\left\{\vartheta^{A}=\sqrt{\mathsf{G}_{A}}\,dX^{A}\right\}\quad\text{(no summation on $A$)}\,, \tag{2.30}\] and its dual moving frame field \[E=\left\{\mathbf{e}_{A}=\frac{1}{\sqrt{\mathsf{G}_{A}}}\,\partial_{A}\right\}\quad\text{(no summation on $A$)}\,, \tag{2.31}\] which, by construction, satisfy \(\vartheta^{A}(\mathbf{e}_{B})=\delta^{A}_{B}\). Then, the metric in the moving frame \(E\) is simply written as \[\mathbf{G}=\sum_{A=1}^{N}\epsilon_{A}\,\vartheta^{A}\otimes\vartheta^{A}\,. \tag{2.32}\] Note that \[d\vartheta^{A}=\sum_{B=1}^{N}\frac{\partial_{B}\mathsf{G}_{A}}{2\mathsf{G}_{A}\sqrt{\mathsf{G}_{B}}}\,\vartheta^{B}\wedge\vartheta^{A}\quad\text{(no summation on $A$)}\,. \tag{2.33}\] We next calculate the Levi-Civita connection 1-forms, for which \(\mathcal{T}^{A}=0\). Note that \(\omega_{BA}=-\omega_{AB}\) and there are \(N(N-1)/2\) connection 1-forms to be determined. Cartan's first structural equations read \(d\vartheta^{A}+\omega^{A}{}_{B}\,\wedge\,\vartheta^{B}=0\). Note that one can use (2.17). However, there is an easier approach for calculating the connection forms. Recalling that \(\omega^{A}{}_{B}=\omega^{A}{}_{CB}\,\vartheta^{C}\), we have \[\sum_{B=1}^{N}\frac{\mathsf{G}_{A,B}}{2\mathsf{G}_{A}\sqrt{\mathsf{G}_{B}}}\,\vartheta^{B}\wedge\vartheta^{A}+\omega^{A}{}_{CB}\,\vartheta^{C}\wedge\vartheta^{B}=0\quad\text{(no summation on $A$)}\,, \tag{2.34}\] where \(\mathsf{G}_{A,B}=\partial_{B}\mathsf{G}_{A}\). Thus \[\sum_{B=1}^{N}\left(\omega^{A}{}_{CB}\,\vartheta^{C}-\frac{\mathsf{G}_{A,B}}{2\mathsf{G}_{A}\sqrt{\mathsf{G}_{B}}}\,\vartheta^{A}\right)\wedge\vartheta^{B}=0\quad\text{(no summation on $A$)}\,. \tag{2.35}\] Cartan's lemma implies that [10] \[\omega^{A}{}_{CB}\,\vartheta^{C}-\frac{\mathsf{G}_{A,B}}{2\mathsf{G}_{A}\sqrt{\mathsf{G}_{B}}}\,\vartheta^{A}=\xi^{A}{}_{BC}\,\vartheta^{C}\quad\text{(no summation on $A$ or $B$)}\,, \tag{2.36}\] where \(\xi^{A}{}_{BC}(X)=\xi^{A}{}_{CB}(X)\) are \(\frac{N^{2}(N+1)}{2}\) arbitrary functions. Thus \[\omega^{A}{}_{B}=\frac{\mathsf{G}_{A,B}}{2\mathsf{G}_{A}\sqrt{\mathsf{G}_{B}}}\,\vartheta^{A}+\xi^{A}{}_{BC}\,\vartheta^{C}\quad\text{(no summation on $A$ or $B$)}\,. \tag{2.37}\] Hence \[\omega_{AB}=\epsilon_{A}\,\frac{\mathsf{G}_{A,B}}{2\mathsf{G}_{A}\sqrt{\mathsf{G}_{B}}}\,\vartheta^{A}+\epsilon_{A}\,\xi^{A}{}_{BC}\,\vartheta^{C}\quad\text{(no summation on $A$ or $B$)}\,.
\tag{2.38}\] Knowing that \(\omega_{AB}+\omega_{BA}=0\), one can guess that \[\omega_{AB}=\epsilon_{A}\,\frac{\mathsf{G}_{A,B}}{2\mathsf{G}_{A}\sqrt{\mathsf{G}_{B}}}\,\vartheta^{A}-\epsilon_{B}\,\frac{\mathsf{G}_{B,A}}{2\mathsf{G}_{B}\sqrt{\mathsf{G}_{A}}}\,\vartheta^{B}\quad\text{(no summation on $A$ or $B$)}\,. \tag{2.39}\] Thus \[\omega^{A}{}_{B}=\mathsf{L}_{AB}\,\vartheta^{A}-\epsilon_{A}\,\epsilon_{B}\,\mathsf{L}_{BA}\,\vartheta^{B}\quad\text{(no summation on $A$ or $B$)}\,, \tag{2.40}\] where \[\mathsf{L}_{AB}=\frac{\mathsf{G}_{A,B}}{2\mathsf{G}_{A}\sqrt{\mathsf{G}_{B}}}\,. \tag{2.41}\] It is straightforward to check that the 1-forms given in (2.40) satisfy Cartan's first structural equations, and hence, are the unique Levi-Civita connection 1-forms. Note that \(\omega^{A}{}_{B}=-\epsilon_{A}\,\epsilon_{B}\,\omega^{B}{}_{A}\), and \[\begin{split} d\omega^{A}{}_{B}&=\sum_{C}(\mathsf{L}_{AB,C}+\mathsf{L}_{AB}\mathsf{L}_{AC})\,\vartheta^{C}\wedge\vartheta^{A}\\ &\quad-\sum_{C}\epsilon_{A}\,\epsilon_{B}\,(\mathsf{L}_{BA,C}+\mathsf{L}_{BA}\mathsf{L}_{BC})\,\vartheta^{C}\wedge\vartheta^{B}\quad\text{(no summation on $A$ or $B$)}\,,\end{split} \tag{2.42}\] where from Eq. (2.33) the relation \(d\vartheta^{A}=\mathsf{L}_{AC}\,\vartheta^{C}\wedge\vartheta^{A}\) (no summation on \(A\)) has been used. Cartan's second structural equations read \[\mathcal{R}^{A}{}_{B}=d\omega^{A}{}_{B}+\omega^{A}{}_{C}\wedge\omega^{C}{}_{B}\,, \tag{2.43}\] where curvature 2-forms are (pseudo) anti-symmetric, i.e., \(\mathcal{R}^{A}{}_{B}+\epsilon_{A}\,\epsilon_{B}\,\mathcal{R}^{B}{}_{A}=0\). More explicitly, \[\begin{split}\mathcal{R}^{A}{}_{B}&=\sum_{C}(\mathsf{L}_{AB,C}+\mathsf{L}_{AB}\mathsf{L}_{AC}-\mathsf{L}_{AC}\mathsf{L}_{CB})\,\vartheta^{C}\wedge\vartheta^{A}\\ &-\sum_{C}\epsilon_{A}\,\epsilon_{B}\,(\mathsf{L}_{BA,C}+\mathsf{L}_{BA}\mathsf{L}_{BC}-\mathsf{L}_{BC}\mathsf{L}_{CA})\,\vartheta^{C}\wedge\vartheta^{B}\\ &-\sum_{C}\epsilon_{C}\,\epsilon_{B}\,\mathsf{L}_{AC}\,\mathsf{L}_{BC}\,\vartheta^{A}\wedge\vartheta^{B}\qquad\qquad\text{(no summation on $A$ or $B$)}\,.\end{split} \tag{2.44}\]

### Curvature 2-forms of a 2-dimensional pseudo-Riemannian manifold with a diagonal metric

Consider a two-dimensional pseudo-Riemannian manifold and a coordinate chart \(U=\{X^{1},X^{2}\}\). Assume a diagonal metric in the coordinate frame \[ds^{2}=\epsilon_{1}\,\mathsf{G}_{1}(dX^{1})^{2}+\epsilon_{2}\,\mathsf{G}_{2}(dX^{2})^{2}\,. \tag{2.45}\] From Eq. (2.40), there is only one connection 1-form \(\omega^{1}{}_{2}\) given as \[\omega^{1}{}_{2}=\frac{\mathsf{G}_{1,2}}{2\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\,dX^{1}-\epsilon_{1}\epsilon_{2}\,\frac{\mathsf{G}_{2,1}}{2\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\,dX^{2}\,, \tag{2.46}\] where \(\mathsf{G}_{1,2}=\partial_{X^{2}}\mathsf{G}_{1}\). Alternatively, \[\omega^{1}{}_{2}=\frac{\mathsf{G}_{1,2}}{2\mathsf{G}_{1}\sqrt{\mathsf{G}_{2}}}\,\vartheta^{1}-\epsilon_{1}\epsilon_{2}\,\frac{\mathsf{G}_{2,1}}{2\mathsf{G}_{2}\sqrt{\mathsf{G}_{1}}}\,\vartheta^{2}\,. \tag{2.47}\] From Cartan's second structural equations (2.43), there is only one curvature 2-form \(\mathcal{R}^{1}{}_{2}\), which reads \[\mathcal{R}^{1}{}_{2}=d\omega^{1}{}_{2}=-\frac{1}{2}\left[\epsilon_{1}\epsilon_{2}\left(\frac{\mathsf{G}_{2,1}}{\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\right)_{,1}+\left(\frac{\mathsf{G}_{1,2}}{\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\right)_{,2}\right]dX^{1}\wedge dX^{2}\,.
\tag{2.48}\] Alternatively, \[\mathcal{R}^{1}{}_{2}=K\,\vartheta^{1}\wedge\vartheta^{2}\,, \tag{2.49}\] where the Gaussian curvature \(K=\mathcal{R}^{1}{}_{2}(\mathbf{e}_{1},\mathbf{e}_{2})\) is written as \[K=-\frac{1}{2\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\left[\epsilon_{1}\epsilon_{2}\left(\frac{\mathsf{G}_{2,1}}{\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\right)_{,1}+\left(\frac{\mathsf{G}_{1,2}}{\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\right)_{,2}\right]\,. \tag{2.50}\] From Eq. (2.24) it follows that \(d\omega^{1}{}_{2}=-\epsilon_{1}\epsilon_{2}\,d\omega^{2}{}_{1}\) and hence \[\mathcal{R}^{2}{}_{1}=-\epsilon_{1}\epsilon_{2}\,\mathcal{R}^{1}{}_{2}\,. \tag{2.51}\] The Ricci tensor is calculated using (2.18)\({}_{2}\) as \[\text{Ric}_{\alpha\beta}=\mathcal{R}^{\gamma}{}_{\alpha}(\mathbf{e}_{\gamma},\mathbf{e}_{\beta})=\mathcal{R}^{1}{}_{\alpha}(\mathbf{e}_{1},\mathbf{e}_{\beta})+\mathcal{R}^{2}{}_{\alpha}(\mathbf{e}_{2},\mathbf{e}_{\beta})\,. \tag{2.52}\] In particular, \[\text{Ric}_{11}=\mathcal{R}^{2}{}_{1}(\mathbf{e}_{2},\mathbf{e}_{1})=\epsilon_{1}\epsilon_{2}\,\mathcal{R}^{1}{}_{2}(\mathbf{e}_{1},\mathbf{e}_{2})=\epsilon_{1}\epsilon_{2}\,K\,,\qquad\text{Ric}_{22}=\mathcal{R}^{1}{}_{2}(\mathbf{e}_{1},\mathbf{e}_{2})=K\,, \tag{2.53}\] and \(\text{Ric}_{12}=\text{Ric}_{21}=0\). The Ricci scalar is calculated as \[\text{R}=\epsilon_{1}\,\text{Ric}_{11}+\epsilon_{2}\,\text{Ric}_{22}=\epsilon_{1}^{2}\epsilon_{2}K+\epsilon_{2}K=2\epsilon_{2}K=-\frac{1}{\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\left[\epsilon_{1}\left(\frac{\mathsf{G}_{2,1}}{\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\right)_{,1}+\epsilon_{2}\left(\frac{\mathsf{G}_{1,2}}{\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\right)_{,2}\right]\,. \tag{2.54}\]

## 3 Dynamics of (free) nonlinear planar double pendula

In this section we will study the geometric phase of a coupled planar double pendulum, which conserves total angular momentum. This problem was discussed by Marsden et al. (1990) to introduce the Hamiltonian reduction technique for mechanical systems with symmetries. The continuous symmetry implies that the associated phase space has the structure of a principal fiber bundle, i.e., a shape manifold and transversal fibers attached to it. Marsden et al. (1990) defined the associated geometric phases and related them to the curvature form of the shape manifold. Hereafter, we present a new analysis exploiting Cartan's structural equations with zero torsion and find the Riemannian structure of the shape manifold, which was not investigated in Marsden et al. (1990).

Consider the planar double pendulum depicted in Fig. 1. The associated Lagrangian is written as \[\mathcal{L}=\frac{1}{2}I_{1}\,\dot{\theta}_{1}^{2}+\frac{1}{2}I_{2}\,\dot{\theta}_{2}^{2}-\Pi(\theta_{1},\theta_{2})\,, \tag{3.1}\] where the Lagrangian coordinates \(\theta_{j}\) are the angular positions of the two bobs of masses \(m_{1}\) and \(m_{2}\) as indicated in Fig. 1. \(I_{j}=m_{j}L_{j}^{2}\) are the mass moments of inertia of the two bobs, where \(L_{j}\) is the length of the \(j\)th rigid bar. The potential \(\Pi(\theta_{1},\theta_{2})\) describes conservative moments \(M_{j}=-\partial_{\theta_{j}}\Pi\), which are in equilibrium, that is \[M_{1}+M_{2}=-\frac{\partial\Pi}{\partial\theta_{1}}-\frac{\partial\Pi}{\partial\theta_{2}}=0\,. \tag{3.2}\] Thus, the potential must be a function of the Lagrangian coordinate difference, i.e., \(\Pi(\theta_{2}-\theta_{1})\), which is the potential of a nonlinear spring, see Fig. 1.
Minimizing the action \(\int\mathcal{L}\,dt\) yields the following dynamical equations \[\frac{d}{dt}\left(\frac{\partial\mathcal{L}}{\partial\dot{\theta}_{j}}\right)-\frac{\partial\mathcal{L}}{\partial\theta_{j}}=I_{j}\ddot{\theta}_{j}+\frac{\partial\Pi}{\partial\theta_{j}}=0\,,\quad j=1,2\,. \tag{3.3}\] From (3.2), the potential moments are in equilibrium, and summing equations (3.3) yields \[I_{1}\ddot{\theta}_{1}+I_{2}\ddot{\theta}_{2}=M_{1}+M_{2}=0\,. \tag{3.4}\] Thus, the total angular momentum \[\mathsf{A}=I_{1}\dot{\theta}_{1}+I_{2}\dot{\theta}_{2}\,, \tag{3.5}\] is conserved. In the following, we assume that \(\mathsf{A}\neq 0\). Such an invariant endows the system with a continuous Lie symmetry: if the pair \(\mathbf{Z}=(\theta_{1}(t),\theta_{2}(t))\) is a solution of the Lagrangian equations, so is \[G_{\alpha}(\mathbf{Z})=(\theta_{1}(t)+\alpha,\theta_{2}(t)+\alpha)\,, \tag{3.6}\] for any angle \(\alpha\in\mathbb{R}\). In the following, we will use this symmetry in the Hamiltonian setting to reveal the geometric structure of the phase space as that of a principal fiber bundle. Then, the associated Riemannian structure follows from Cartan's structural equations as described in §2.

### The Hamiltonian structure

The conjugate momenta follow from the Lagrangian (3.1) as \[p_{j}=\frac{\partial\mathcal{L}}{\partial\dot{\theta}_{j}}=I_{j}\dot{\theta}_{j}\,,\quad j=1,2\,, \tag{3.7}\] and \(\dot{\theta}_{j}=p_{j}/I_{j}\). Then, the Legendre transform of \(\mathcal{L}\) gives the Hamiltonian \[\mathcal{H}=p_{1}\,\dot{\theta}_{1}+p_{2}\,\dot{\theta}_{2}-\mathcal{L}=\frac{1}{2}\frac{p_{1}^{2}}{I_{1}}+\frac{1}{2}\frac{p_{2}^{2}}{I_{2}}+\Pi\left(\theta_{2}-\theta_{1}\right)\,. \tag{3.8}\] The configuration space is a 2-torus \(Q=\mathbb{T}^{2}\), which has the local chart \(\{\theta_{1},\theta_{2}\}\), and the phase space is \(T^{*}Q\) with local coordinates \(\{\theta_{1},\theta_{2},p_{1},p_{2}\}\), where \(T^{*}Q\) is the cotangent bundle of \(Q\). Let us define the vector \[\mathbf{X}=\begin{bmatrix}\theta_{1}\\ \theta_{2}\\ p_{1}\\ p_{2}\end{bmatrix}\,. \tag{3.9}\] The dynamics is governed by \[\dot{\mathbf{X}}=\mathbf{J}\nabla_{\mathbf{X}}\mathcal{H}\,, \tag{3.10}\] where \[\nabla_{\mathbf{X}}=\begin{bmatrix}\partial_{\theta_{1}}\\ \partial_{\theta_{2}}\\ \partial_{p_{1}}\\ \partial_{p_{2}}\end{bmatrix}\,, \tag{3.11}\] and \(\mathbf{J}\) is the following \(4\times 4\) symplectic matrix \[\mathbf{J}=\left[\begin{array}{cc}\mathbf{O}_{2}&\mathbf{I}_{2}\\ -\mathbf{I}_{2}&\mathbf{O}_{2}\end{array}\right]=\left[\begin{array}{cccc}0&0&1&0\\ 0&0&0&1\\ -1&0&0&0\\ 0&-1&0&0\end{array}\right]\,. \tag{3.12}\] \(\mathbf{I}_{2}=[\delta_{ij}]\) is the \(2\times 2\) identity matrix, \(\mathbf{O}_{2}\) is the \(2\times 2\) null matrix, and \(\delta_{ij}\) is the Kronecker tensor.

Figure 1: _A planar double pendulum with a nonlinear spring. The mass moments of inertia are given as \(I_{j}=m_{j}L_{j}^{2}\), \(j=1,2\), where \(m_{j}\) is the mass of the \(j\)th bob and \(L_{j}\) is the length of the \(j\)th rigid bar._

From Eq. (3.10), \[\dot{\theta}_{j}=\frac{\partial\mathcal{H}}{\partial p_{j}}\,,\quad\dot{p}_{j}=-\frac{\partial\mathcal{H}}{\partial\theta_{j}}\,,\quad j=1,2\,, \tag{3.13}\] or \[\dot{\theta}_{j}=\frac{p_{j}}{I_{j}}\,,\quad\dot{p}_{j}=-\frac{\partial\Pi}{\partial\theta_{j}}\,,\quad j=1,2\,. \tag{3.14}\] The Hamiltonian \(\mathcal{H}\) and the total angular momentum \(\mathsf{A}=p_{1}+p_{2}\) given in (3.5) are invariants of motion.
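The conservation of \(\mathcal{H}\) and \(\mathsf{A}\) can be checked symbolically. The short SymPy sketch below is our illustration (the potential \(\Pi\) is kept generic): it differentiates \(\mathsf{A}=p_{1}+p_{2}\) and \(\mathcal{H}\) along the flow (3.13)-(3.14) and confirms that both derivatives vanish identically.

```python
# Sketch (assumes SymPy): verify that Hamilton's equations (3.13)-(3.14)
# conserve both H and the total angular momentum A = p1 + p2 for any Pi(theta2 - theta1).
import sympy as sp

t = sp.symbols('t')
I1, I2 = sp.symbols('I1 I2', positive=True)
th1, th2, p1, p2 = [sp.Function(f)(t) for f in ('theta1', 'theta2', 'p1', 'p2')]
Pi = sp.Function('Pi')                     # generic nonlinear-spring potential

H = p1**2/(2*I1) + p2**2/(2*I2) + Pi(th2 - th1)

# Hamilton's equations: theta_j' = dH/dp_j,  p_j' = -dH/dtheta_j
flow = {th1.diff(t): sp.diff(H, p1),  th2.diff(t): sp.diff(H, p2),
        p1.diff(t): -sp.diff(H, th1), p2.diff(t): -sp.diff(H, th2)}

A = p1 + p2
dA = sp.simplify(A.diff(t).subs(flow))     # = Pi' - Pi' = 0
dH = sp.simplify(H.diff(t).subs(flow))     # all terms cancel
print(dA, dH)                              # 0 0
```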
The Hamiltonian system inherits the continuous Lie-group symmetry in (3.6), that is \[G_{\alpha}(\mathbf{X})=(\theta_{1}+\alpha,\theta_{2}+\alpha,p_{1},p_{2})\,, \tag{3.15}\] for any angle \(\alpha\in\mathbb{R}\). The associated 1-form is \[\alpha=p_{1}d\theta_{1}+p_{2}d\theta_{2}\,, \tag{3.16}\] and the symplectic 2-form is defined as \[d\alpha=dp_{1}\wedge d\theta_{1}+dp_{2}\wedge d\theta_{2}\,. \tag{3.17}\]

Figure 2: _Fiber bundle structure of the state space \(\mathcal{P}=T^{*}Q/\mathsf{A}\). \(\mathcal{B}\) is the base (shape) manifold and \(\mathcal{F}\) is a generic fiber._

### Hamiltonian reduction and geometric phases

To reveal the geometric nature of the dynamics, we consider another configuration space \(Q_{s}\) with Lagrangian coordinates \(\{\theta_{1},\psi_{2}\}\), where the shape parameter \(\psi_{2}=\theta_{2}-\theta_{1}\) represents the relative angular displacement of the two bobs. Since the total angular momentum \(\mathsf{A}=p_{1}+p_{2}\) must be conserved, \(p_{1}=\mathsf{A}-p_{2}\) and the motion must occur on the subspace \(T^{*}Q_{s}/\mathsf{A}\) with the coordinate chart \(\{\theta_{1},\psi_{2},p_{2}\}\). The 1-form in Eq. (3.16) reduces to \[\alpha=p_{1}\,d\theta_{1}+p_{2}\,d\theta_{2}=(\mathsf{A}-p_{2})\,d\theta_{1}+p_{2}\,d\theta_{2}\,, \tag{3.18}\] and since \(\psi_{2}=\theta_{2}-\theta_{1}\), one has \[\alpha=\mathsf{A}\,d\theta_{1}+p_{2}\,d\psi_{2}\,. \tag{3.19}\] The associated symplectic 2-form reads \[d\alpha=dp_{2}\wedge d\psi_{2}\,. \tag{3.20}\] The reduced state space \(\mathcal{P}=T^{*}Q_{s}/\mathsf{A}\) has the geometric structure of a principal fiber bundle characterized by the quadruplet \((\mathcal{P},\mathcal{B},G_{\alpha},\pi)\). The flow \(\dot{\mathbf{X}}_{s}\in T\mathcal{P}\) in the state space \(\mathcal{P}\) can be decomposed as the sum \(\dot{\mathbf{X}}_{s}=\dot{\mathbf{X}}_{\mathcal{B}}+\dot{\mathbf{X}}_{\mathcal{F}}\) of a Hamiltonian flow \[\dot{\mathbf{X}}_{\mathcal{B}}=\begin{bmatrix}\dot{p}_{2}\\ \dot{\psi}_{2}\end{bmatrix}\in T\mathcal{B}\,, \tag{3.21}\] on a two-dimensional shape, or base manifold \(\mathcal{B}\) with the coordinate chart \(\{p_{2},\psi_{2}\}\), and a drift flow \(\dot{\mathbf{X}}_{\mathcal{F}}=\begin{bmatrix}\dot{\theta}_{1}\end{bmatrix}\in T\mathcal{F}\) along the one-dimensional fibers \(\mathcal{F}\) with the coordinate chart \(\{\theta_{1}\}\), as depicted in Fig. 2. The map \(\pi:\mathcal{P}\rightarrow\mathcal{B}\) projects an element \(\mathbf{X}_{s}\) of the state space \(\mathcal{P}\) and all the elements of the fiber, or group orbit \(G_{\alpha}(\mathbf{X}_{s})\), into the same point \(\pi(\mathbf{X}_{s})\) of the base manifold \(\mathcal{B}\), viz. \(\pi(\mathbf{X}_{s})=\pi(G_{\alpha}(\mathbf{X}_{s}))\), with \(\alpha\in\mathbb{R}\). In particular, \[\pi(\mathbf{X}_{s})=\mathbf{X}_{\mathcal{B}}=\begin{bmatrix}p_{2}\\ \psi_{2}\end{bmatrix}\,. \tag{3.22}\] The reduced Hamiltonian flow \(\dot{\mathbf{X}}_{\mathcal{B}}\) on \(\mathcal{B}\) is governed by the following equation \[\dot{\mathbf{X}}_{\mathcal{B}}=\mathbf{J}_{R}\nabla_{\mathbf{X}_{\mathcal{B}}}\mathcal{H}_{R}\,, \tag{3.23}\] where the reduced Hamiltonian is written as \[\mathcal{H}_{R}=\frac{1}{2}\frac{\left(p_{2}-\mathsf{A}\right)^{2}}{I_{1}}+\frac{1}{2}\frac{p_{2}^{2}}{I_{2}}+\Pi\left(\psi_{2}\right)\,. \tag{3.24}\] \(\mathbf{J}_{R}\) is the canonical symplectic matrix for the momentum-first ordering of \(\mathbf{X}_{\mathcal{B}}\) in (3.21), \[\mathbf{J}_{R}=\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\,,\qquad\nabla_{\mathbf{X}_{\mathcal{B}}}=\begin{bmatrix}\partial_{p_{2}}\\ \partial_{\psi_{2}}\end{bmatrix}\,.
\tag{3.25}\] The flow \(\dot{\mathbf{X}}_{\mathcal{B}}\) on the shape manifold \(\mathcal{B}\) is independent from the flow \(\dot{\mathbf{X}}_{\mathcal{F}}\) along the fiber. Indeed, from Eq. (3.23) \[\dot{\psi}_{2}=\left(\frac{1}{I_{1}}+\frac{1}{I_{2}}\right)p_{2}-\frac{\mathsf{A}}{I_{1}}\,,\qquad\dot{p}_{2}=-\frac{\partial\Pi}{\partial\psi_{2}}\,. \tag{3.26}\] On the contrary, \(\dot{\mathbf{X}}_{\mathcal{F}}\) depends on \(\dot{\mathbf{X}}_{\mathcal{B}}\) since \[\dot{\theta}_{1}=\frac{\mathsf{A}-p_{2}}{I_{1}}\,. \tag{3.27}\] In simple words, the flow \(\dot{\mathbf{X}}_{s}\) in the state space \(\mathcal{P}\) decouples into a symmetry-free Hamiltonian flow \(\dot{\mathbf{X}}_{\mathcal{B}}\) on the shape manifold \(\mathcal{B}\), which induces a rotation drift \(\dot{\mathbf{X}}_{\mathcal{F}}\) along the fibers. Physically, the reduced flow on \(\mathcal{B}\) is the shape-changing evolution of the connected bobs induced by the internal elastic moments. Such a shape dynamics makes the two bobs rigidly rotate together by the time-dependent angle \(\theta_{1}\). Thus, given a symmetry-free motion \(\mathbf{X}_{\mathcal{B}}\) on \(\mathcal{B}\), the full motion in \(\mathcal{P}\) follows by shifting \(\mathbf{X}_{\mathcal{B}}\) along the fibers by \(\theta_{1}\), that is \(\mathbf{X}_{s}=G_{\theta_{1}}(\mathbf{X}_{\mathcal{B}})\), as depicted in Fig. 2.

To evaluate such a rotation drift, from Eq. (3.19), we define the 1-form \[\widetilde{\alpha}=\frac{\alpha}{\mathsf{A}}=d\theta_{1}+\frac{p_{2}}{\mathsf{A}}\,d\psi_{2}\,. \tag{3.28}\] Then the total rotation drift \(\theta_{1}\) along the fiber follows by integrating the form \(d\theta_{1}=\widetilde{\alpha}-\frac{p_{2}}{\mathsf{A}}d\psi_{2}\), i.e., \[\theta_{1}=\int d\theta_{1}=\int_{0}^{t}\widetilde{\alpha}\,dt-\frac{1}{\mathsf{A}}\int_{\gamma}p_{2}\,d\psi_{2}\,, \tag{3.29}\] where \(\gamma\) is a closed trajectory of the motion up to time \(t\) in the shape manifold \(\mathcal{B}\). Thus, \[\theta_{1}=\theta_{\rm dyn}+\theta_{\rm geom}\,, \tag{3.30}\] where the dynamical and geometric rotation drifts are defined as \[\theta_{\rm dyn}(t)=\int_{0}^{t}\widetilde{\alpha}\,dt\,,\qquad\theta_{\rm geom}=-\frac{1}{\mathsf{A}}\int_{\gamma}p_{2}\,d\psi_{2}\,. \tag{3.31}\] Here, the dynamical rotation drift \(\theta_{\rm dyn}(t)\) depends on the inertia of the two bobs. Using Eqs. (3.26)-(3.27) it can be written as \[\theta_{\rm dyn}(t)=\frac{2}{\mathsf{A}}\int_{0}^{t}\mathsf{K}(t)\,dt\,,\qquad\mathsf{K}=\frac{1}{2}\left(\frac{p_{1}^{2}}{I_{1}}+\frac{p_{2}^{2}}{I_{2}}\right)\,, \tag{3.32}\] where \(\mathsf{K}\) is the total kinetic energy and \(\mathsf{A}\) is the non-zero total angular momentum, which is conserved. If the two bobs are rigidly connected and cannot change their 'shape', i.e., \(\dot{\psi}_{2}=0\) (no flow on the base manifold \(\mathcal{B}\)), then the rotation drift \(\theta_{\rm dyn}\) is simply the manifestation of the inertia of the entire system treated as a whole with angular momentum \(\mathsf{A}\) and total kinetic energy \(\mathsf{K}\). The two bobs can also undergo a change in shape due to the internal elastic moments. As a result, the angle \(\psi_{2}\) varies over time and the flow on \(\mathcal{B}\) also induces the geometric rotation drift, which from Eq. (3.31) can be written as \[\theta_{\rm geom}=\int_{S(\gamma)}d\,\widetilde{\alpha}=-\frac{1}{\mathsf{A}}\int_{S(\gamma)}dp_{2}\wedge d\psi_{2}\,. \tag{3.33}\] Thus, the geometric rotation drift is proportional to the area \(S(\gamma)\) enclosed by the trajectory of the motion \(\gamma\) in the shape manifold \(\mathcal{B}\).
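The decomposition (3.30)-(3.31) can be verified numerically along any trajectory, closed or not, since \(d\theta_{1}=\widetilde{\alpha}-(p_{2}/\mathsf{A})\,d\psi_{2}\) holds pathwise. The following sketch is our illustration (SciPy is assumed and the initial conditions are an arbitrary choice): it integrates the reduced flow (3.26)-(3.27) with the parameters of the example discussed below and checks that the dynamic and geometric drifts sum to the total rotation \(\theta_{1}\).

```python
# Sketch (assumes NumPy/SciPy): split theta1 into dynamic and geometric drifts,
# Eqs. (3.30)-(3.32), for I1 = I2 = 1, A = 1/2, Pi(psi) = psi**4.
import numpy as np
from scipy.integrate import solve_ivp

I1 = I2 = 1.0
A = 0.5
dPi = lambda psi: 4.0 * psi**3            # Pi'(psi) for Pi(psi) = psi**4

def rhs(t, y):
    th1, psi2, p2 = y
    return [(A - p2) / I1,                          # theta1' , Eq. (3.27)
            (1.0/I1 + 1.0/I2) * p2 - A / I1,        # psi2'   , Eq. (3.26)
            -dPi(psi2)]                             # p2'     , Eq. (3.26)

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.4, 0.25], max_step=1e-2)
th1, psi2, p2 = sol.y
p1 = A - p2

def line_int(y, x):                        # trapezoidal line integral of y dx
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

K = 0.5 * (p1**2 / I1 + p2**2 / I2)
theta_dyn = (2.0 / A) * line_int(K, sol.t)          # Eq. (3.32)
theta_geom = -(1.0 / A) * line_int(p2, psi2)        # Eq. (3.31)

print(theta_dyn + theta_geom, th1[-1] - th1[0])     # the two values agree
```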
Such a rotation drift is purely geometric since it does not depend on the time it takes for the two bobs to undergo a cyclic shape change. One can define an effective moment of inertia \(I_{\rm eff}\) for an equivalent system with the total angular momentum \(\mathsf{A}=p_{1}+p_{2}\) as \[\frac{1}{I_{\rm eff}}=\frac{\dot{\theta}_{1}}{p_{1}+p_{2}}=\frac{\dot{\theta}_{\rm dyn}+\dot{\theta}_{\rm geom}}{p_{1}+p_{2}}=\frac{2\mathsf{K}}{\mathsf{A}^{2}}-\frac{p_{2}\,\dot{\psi}_{2}}{\mathsf{A}^{2}}\,. \tag{3.34}\] Using Eq. (3.26), \[\frac{1}{I_{\rm eff}}=\frac{2\mathsf{K}}{\mathsf{A}^{2}}-\left(\frac{1}{I_{1}}+\frac{1}{I_{2}}\right)\frac{p_{2}^{2}}{\mathsf{A}^{2}}+\frac{p_{2}}{\mathsf{A}\,I_{1}}\,. \tag{3.35}\] Thus, the shape-changing motion of the two bobs can slow down or speed up the rotation drift. In particular, if the two bobs tend to expand in opposite directions (\(\dot{\psi}_{2}>0\)) the effective moment of inertia increases, slowing down the rotation as the angular speed \(\dot{\theta}_{1}\) reduces. On the contrary, if the two bobs tend to contract (\(\dot{\psi}_{2}<0\)) the angular speed increases as \(I_{\rm eff}\) reduces. This is the analogue of spinning dancers, who can increase their spinning rate by pulling their arms close to their bodies, and decrease it by letting their arms out. Thus, the geometric phase component in Eq. (3.35) can be interpreted as the moment of inertia of an _added mass_ in analogy with fish self-propulsion (Shapere and Wilczek, 1987, 1989).

As an example, consider the double pendulum with parameters \(I_{1}=I_{2}=1\,{\rm mass}\times{\rm length}^{2}\), \(\mathsf{A}=\frac{1}{2}\,{\rm mass}\times{\rm length}^{2}\times{\rm time}^{-1}\), and potential \(\Pi(\psi)=\psi^{4}\). Note that the total angular momentum is positive and the initial conditions are chosen so that both bobs have positive (counter-clockwise) rotational speed. The left panel of Fig. 3 depicts the fiber bundle structure of the state space \(\mathcal{P}=T^{*}Q/\mathsf{A}\) of the double pendulum; the full path \(\mathbf{X}_{s}(t)\) (black curve) and the reduced path \(\mathbf{X}_{\mathcal{B}}\) (green curve) on the base manifold \(\mathcal{B}\) are shown. The lifted path \(G_{\theta_{\text{dyn}}}(\mathbf{X}_{s})\) by the positive dynamical rotation drift \(\theta_{\text{dyn}}\) (red curve) does not coincide with the path \(\mathbf{X}_{s}\) (black curve). This is because the total rotational drift \(\theta_{1}=\theta_{\text{dyn}}+\theta_{\text{geom}}\) also includes a negative geometric component \(\theta_{\text{geom}}\), as is seen in the right panel of the same figure. The positive dynamical rotation drift is induced by the inertia of the system that has a positive angular momentum. However, the two bobs undergo changes in shape inducing a clockwise rotation that balances the counter-clockwise dynamical drift rotation.

**Remark 3.1**.: Given the total angular momentum \(\mathsf{A}\), the associated dynamical variables \(\{\theta_{1}(t),\psi_{2}(t),p_{2}(t)\}\) satisfy Eqs. (3.26) and (3.27). Let \(\{\bar{\theta}_{1}(t),\bar{\psi}_{2}(t),\bar{p}_{2}(t)\}\) be the dynamical variables that correspond to \(-\mathsf{A}\), where \(\bar{\psi}_{2}=\bar{\theta}_{2}-\bar{\theta}_{1}\). Then, these satisfy \[\dot{\bar{\theta}}_{1}=\frac{-\mathsf{A}-\bar{p}_{2}}{I_{1}}\,,\quad\dot{\bar{\psi}}_{2}=\left(\frac{1}{I_{1}}+\frac{1}{I_{2}}\right)\bar{p}_{2}+\frac{\mathsf{A}}{I_{1}}\,,\qquad\dot{\bar{p}}_{2}=-\frac{\partial\Pi}{\partial\bar{\psi}_{2}}\,.
\tag{3.36}\] Assume the initial conditions \(\bar{\theta}_{1}(0)=-\theta_{1}(0)\), \(\bar{\psi}_{2}(0)=-\psi_{2}(0)\), and \(\bar{p}_{2}(0)=-p_{2}(0)\). Then, from Eqs. (3.26) and (3.27) we have that \[\{\bar{\theta}_{1}(t),\bar{\psi}_{2}(t),\bar{p}_{2}(t)\}=\{-\theta_{1}(t),-\psi_{2}(t),-p_{2}(t)\}\,, \tag{3.37}\] is a solution of the above system of first-order ODEs. The associated symplectic form follows from Eq. (3.19) as \[\bar{\alpha}=-\mathsf{A}\,d\bar{\theta}_{1}+\bar{p}_{2}\,d\bar{\psi}_{2}\,,\qquad d\bar{\alpha}=d\bar{p}_{2}\wedge d\bar{\psi}_{2}\,. \tag{3.38}\] Moreover, \[\widetilde{\bar{\alpha}}=\frac{\bar{\alpha}}{-\mathsf{A}}=-\widetilde{\alpha}\,. \tag{3.39}\] Therefore \[\bar{\theta}_{\text{dyn}}(t)=\int_{0}^{t}\widetilde{\bar{\alpha}}\,dt=-\theta_{\text{dyn}}(t)\,,\qquad\bar{\theta}_{\text{geom}}=\frac{1}{\mathsf{A}}\int_{\gamma}\bar{p}_{2}\,d\bar{\psi}_{2}=-\theta_{\text{geom}}\,. \tag{3.40}\] Thus, when changing the sign of the angular momentum (and the initial conditions), both the dynamic and geometric phases change sign.

### Curvature and intrinsic metric of the shape manifold

One can interpret the geometric drift as the curvature of the shape manifold \(\mathcal{B}\) equipped with a specific metric. As a matter of fact, drawing on Cartan's structural equations the 2-form \(d\,\widetilde{\alpha}\) in (3.33) can be interpreted as the curvature form of a connection on \(\mathcal{B}\). We further require that the symplectic form be compatible with the volume 2-form \(\mathsf{vol}_{\mathsf{G}}\) of the metric \(\mathbf{G}\), as is shown next. The geometric drift \(\theta_{\text{geom}}\) given in Eq. (3.33) is associated to the symplectic 1-form \[\widetilde{\alpha}=-\frac{p_{2}}{\mathsf{A}}\,d\psi_{2}\,, \tag{3.41}\] since \(\theta_{\text{geom}}=\int d\,\widetilde{\alpha}\). The 1-form \(\widetilde{\alpha}\) can be interpreted as the connection 1-form \(\omega^{1}{}_{2}\) of a 2-manifold represented by the coordinate charts \(\{X^{1},X^{2}\}=\{p_{2},\psi_{2}\}\) and with the metric \[ds^{2}=\epsilon_{1}\,\mathsf{G}_{1}\,dp_{2}^{2}+\epsilon_{2}\,\mathsf{G}_{2}\,d\psi_{2}^{2}\,. \tag{3.42}\] We want to find the metric coefficients \(\mathsf{G}_{1}\) and \(\mathsf{G}_{2}\) so that1 \[\omega^{1}{}_{2}=\widetilde{\alpha}\,. \tag{3.43}\]

Footnote 1: We can add to \(\omega^{1}{}_{2}\) an arbitrary closed 1-form \(\xi\), i.e., \(d\xi=0\). This form can be neglected because it does not contribute to the geometric phase as its integral over any closed curve vanishes. Thus, the freedom to add an arbitrary closed form is physically inconsequential.

From Eq. (2.46) we then have \[\frac{\mathsf{G}_{1,2}}{2\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\,dp_{2}-\epsilon_{1}\epsilon_{2}\,\frac{\mathsf{G}_{2,1}}{2\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\,d\psi_{2}=-\frac{p_{2}}{\mathsf{A}}\,d\psi_{2}\,. \tag{3.44}\] This implies that \[\frac{\mathsf{G}_{1,\psi_{2}}}{2\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}=0\,,\qquad\epsilon_{1}\epsilon_{2}\,\frac{\mathsf{G}_{2,p_{2}}}{2\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}=\frac{p_{2}}{\mathsf{A}}\,. \tag{3.45}\] The first equation implies that \(\mathsf{G}_{1,\psi_{2}}=0\), and hence \(\mathsf{G}_{1}=\mathsf{G}_{1}(p_{2})\). The Gaussian curvature is calculated from Eq. (2.50) as \[K=-\frac{1}{\mathsf{A}\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}}\,. \tag{3.46}\] Eqs.
(3.45) also imply that the symplectic 2-form \(d\,\widetilde{\alpha}\) equals the curvature 2-form \(d\omega^{1}{}_{2}\), that is \(d\,\widetilde{\alpha}=d\omega^{1}{}_{2}\), or explicitly \[d\,\widetilde{\alpha}=-\frac{1}{\mathsf{A}}\,dp_{2}\wedge d\psi_{2}=K\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}\,dp_{2}\wedge d\psi_{2}\,. \tag{3.47}\] We now further require that the symplectic 2-form \(d\,\widetilde{\alpha}\) is compatible with the (pseudo) Riemannian volume (area) 2-form \(\mathsf{vol}_{\mathsf{G}}=\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}\,dp_{2}\wedge d\psi_{2}\) in the sense that the absolute value of the geometric rotational drift over a closed trajectory \(\gamma\) in the shape manifold is equal to the volume (area) of the region \(S(\gamma)\) it encloses using \(\mathsf{vol}_{\mathsf{G}}\), that is, \[\frac{1}{|\mathsf{A}|}\,dp_{2}\wedge d\psi_{2}=\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}\,dp_{2}\wedge d\psi_{2}\,, \tag{3.48}\] which is equivalent to \[\sqrt{\mathsf{G}_{1}\mathsf{G}_{2}}=\frac{1}{|\mathsf{A}|}\,, \tag{3.49}\] or \(\mathsf{A}^{2}\,\mathsf{G}_{1}\,\mathsf{G}_{2}=1\). This, in particular, implies that \(\mathsf{G}_{2}=\mathsf{G}_{2}(p_{2})\). We can now solve for \(\mathsf{G}_{2}\). Substituting (3.49) into (3.45)\({}_{2}\) yields \[\mathsf{G}_{2,p_{2}}=\epsilon_{1}\epsilon_{2}\,\frac{2\,p_{2}}{\mathsf{A}|\mathsf{A}|}\,, \tag{3.50}\] and hence \[\mathsf{G}_{2}=\epsilon_{1}\epsilon_{2}\,\frac{p_{2}^{2}+C_{2}}{\mathsf{A}|\mathsf{A}|}\,, \tag{3.51}\] where \(C_{2}\) is an arbitrary constant, and \(\mathsf{G}_{1}\) follows from Eq. (3.49). Since we must have \(\mathsf{G}_{2}\geq 0\), we set \(C_{2}=\mu^{2}\), \(\mu\in\mathbb{R}\), and \(\operatorname{sgn}(\epsilon_{1}\epsilon_{2})=\operatorname{sgn}(\mathsf{A})\). We choose \(\epsilon_{1}=\operatorname{sgn}(\mathsf{A})\) and \(\epsilon_{2}=1\) so that \[\mathsf{G}_{1}=\frac{1}{p_{2}^{2}+\mu^{2}}\,,\qquad\mathsf{G}_{2}=\frac{p_{2}^{2}+\mu^{2}}{\mathsf{A}^{2}}\,, \tag{3.52}\] and the family of metrics (3.42) specializes to \[\mathbf{G}=\frac{\operatorname{sgn}(\mathsf{A})}{p_{2}^{2}+\mu^{2}}\,dp_{2}\otimes dp_{2}+\frac{p_{2}^{2}+\mu^{2}}{\mathsf{A}^{2}}\,d\psi_{2}\otimes d\psi_{2}\,. \tag{3.53}\] From Eqs. (3.46) and (3.49), the corresponding Gaussian curvature \(K=-\operatorname{sgn}(\mathsf{A})\) and the Ricci scalar \(\mathrm{R}=2K=-2\operatorname{sgn}(\mathsf{A})\) depend on the sign of \(\mathsf{A}\). The metric structure and curvature of the shape manifold thus depend on the sign of \(\mathsf{A}\). In the following we will show that for \(\mathsf{A}<0\) the shape manifold is an expanding spacetime with positive curvature equipped with the Robertson-Walker metric, whereas for \(\mathsf{A}>0\) the shape manifold is the hyperbolic plane with negative curvature. Clearly, the Riemannian structure of the shape manifold depends on the sign of the total momentum. Nevertheless, the geometric phase is evaluated by the same 2-form given by the sectional curvature form of \(\mathcal{B}\) (see Eq. (3.33)). Hernandez-Garduno and Shashikanth (2018) studied the geometric phases of three inviscid point vortices and also found that the curvature of the associated shape manifold depends on the sign of a parameter related to the strengths of the three vortices. In particular, their shape manifold is a sphere or the hyperbolic plane. We note that the authors do not explain why the character of the manifold changes with the sign of the vortex strength parameter (see also Shashikanth and Marsden (2003)).

**Remark 3.2**.: The metric \(\mathbf{G}\) depends on the sign of \(\mathsf{A}\).
This reflects a symmetry of the Hamiltonian dynamics. As a matter of fact, in Remark 3.1 it was shown that the dynamics on the shape manifold \(\mathcal{B}\) for given \(\mathsf{A}\) is the same as the dynamics with opposite momentum \(-\mathsf{A}\) if one makes the coordinate change \(\{\theta_{1},\psi_{2},p_{2}\}\to\{-\theta_{1},-\psi_{2},-p_{2}\}\). However, the geometric rotation drift (3.33) depends on the sign of \(\mathsf{A}\) since \(\theta_{\operatorname{geom}}(\mathsf{A})=-\theta_{\operatorname{geom}}(-\mathsf{A})\). Thus, the geometric phase of the double pendulum with an initial imparted angular momentum \(\mathsf{A}>0\) on \(\mathcal{B}=(\psi_{2},p_{2})\) endowed with the Riemannian metric (3.53) is equal and opposite to the geometric phase of a double pendulum with \(\mathsf{A}<0\) on the transformed shape manifold \(\mathcal{B}^{\prime}=(-\psi_{2},-p_{2})\) with the corresponding pseudo-Riemannian metric. This suggests that we can always consider either of the two metrics.

**Remark 3.3** (Physical significance of the metric).: The metric (3.53) defined on the shape, or base manifold \(\mathcal{B}\) characterizes the kinematically admissible shape deformations of the double pendulum. An orbit on \(\mathcal{B}\) is a succession of infinitesimal changes in the shape of the pendulum from an initial configuration to another. If the pendulum returns to its initial shape, the orbit is closed and the area (or curvature) spanned by it measures the induced rotation drift. Any curve, or orbit on the base manifold is a kinematically admissible shape evolution, i.e., a sequence of changing shapes. The orbit is also dynamically admissible if it is consistent with the Hamiltonian flow (3.26). The metric allows quantifying the similarity of a shape \(S_{1}\) to another shape \(S_{2}\) by measuring the intrinsic distance between the corresponding points on the shape manifold \(\mathcal{B}\). Other, non-intrinsic distances are misleading as they do not account for the curvature, or induced geometric drift.

### Geodesics of the metric

#### 3.4.1 Negative angular momentum: Robertson-Walker spacetime

For \(\mathsf{A}<0\), the metric in Eq. (3.53) is pseudo-Riemannian with \(p_{2}\) as a time-like coordinate and \(\psi_{2}\) as space-like, \[ds^{2}=-\frac{1}{p_{2}^{2}+\mu^{2}}\,(dp_{2})^{2}+\frac{p_{2}^{2}+\mu^{2}}{\mathsf{A}^{2}}\,(d\psi_{2})^{2}\,, \tag{3.54}\] and positive Gaussian curvature \(K=1\). The geodesic equations follow by minimizing the action \(\int ds^{2}=\int L\,d\lambda\) with Lagrangian density \[L(p_{2}(\lambda),\psi_{2}(\lambda))=-\frac{(p_{2}^{\prime})^{2}}{p_{2}^{2}+\mu^{2}}+\frac{p_{2}^{2}+\mu^{2}}{\mathsf{A}^{2}}(\psi_{2}^{\prime})^{2}\,, \tag{3.55}\] where \(f^{\prime}\) denotes the derivative with respect to \(\lambda\), which parameterizes the geodesics. Trivial geodesics are the straight lines \(\psi_{2}=\text{const}\) and \(p_{2}=\text{const}\) for which \(L\) is stationary. A family of non-trivial geodesics can be easily found by choosing the parametrization \(\lambda=p_{2}\). Then, \[L(\psi_{2}(p_{2}))=-\frac{1}{p_{2}^{2}+\mu^{2}}+\frac{p_{2}^{2}+\mu^{2}}{\mathsf{A}^{2}}\left(\frac{d\psi_{2}}{dp_{2}}\right)^{2}\,. \tag{3.56}\] Variational differentiation yields \[\frac{d}{dp_{2}}\left(\frac{p_{2}^{2}+\mu^{2}}{\mathsf{A}^{2}}\,\frac{d\psi_{2}}{dp_{2}}\right)=0\,, \tag{3.57}\] from which \[\frac{d\psi_{2}}{dp_{2}}=\frac{\mathsf{A}^{2}C_{1}}{p_{2}^{2}+\mu^{2}}\,, \tag{3.58}\] where \(C_{1}\) is an arbitrary constant.
Integration yields \[\psi_{2}=\frac{\mathsf{A}^{2}C_{1}}{\mu}\,\tan^{-1}\left(\frac{p_{2}}{\mu}\right)+C_{2}\,, \tag{3.59}\] where \(C_{2}\) is another arbitrary constant, which together with \(C_{1}\) parameterizes the family of geodesics. The Lagrangian density in Eq. (3.56) simplifies to read \[L=\frac{-1+\mathsf{A}^{2}C_{1}^{2}}{p_{2}^{2}+\mu^{2}}\,, \tag{3.60}\] and the null-geodesics are given by Eq. (3.59) with \(C_{1}=\pm 1/\mathsf{A}\) since \(L=0\). Fig. 4 depicts the null-geodesics (thin curves) and a few geodesics (bold curves) for the metric with \(\mathsf{A}=-5,\mu=2\). **Remark 3.4**.: Drawing on General Relativity (Misner et al., 1973, Carroll, 2003), \(p_{2}\) is a time-like coordinate and \(\psi_{2}\) is space-like, and the metric \(\mathbf{G}\) represents the analogue of a spacetime where null-geodesics (thin curves of Fig. 4) are the trajectories of massless photons. The associated light cones tend to close up as \(p_{2}\to\pm\infty\) and light slows down. In the same figure, the depicted geodesics (bold curves) are always inside the light cones they intersect along their path. Thus, they are the 'time-like' trajectories of a massive particle traveling at a speed less than the speed of light. Moreover, geodesics tend to converge, an indication of the positive Gaussian curvature. **Remark 3.5**.: The metric (3.54) is a disguised analogue of the spacetime of an expanding universe (Misner et al., 1973, Carroll, 2003). As a matter of fact, for \(\mu\neq 0\), one can define the new coordinate chart \[t=\tanh^{-1}\left[\frac{p_{2}}{\sqrt{p_{2}^{2}+\mu^{2}}}\right],\qquad x=\psi_{2}\,. \tag{3.61}\] Then one has \(dt=dp_{2}/\sqrt{p_{2}^{2}+\mu^{2}}\), \(dx=d\psi_{2}\), and the metric (3.54) transforms to \[ds^{2}=-dt^{2}+\frac{\mu^{2}(\cosh t)^{2}}{\mathsf{A}^{2}}\,dx^{2}\,, \tag{3.62}\] which is the 2-dimensional analogue of the Robertson-Walker (RW) metric in General Relativity (Misner et al., 1973, Carroll, 2003). The metric describes an expanding universe with the scale factor \(a(t)=\cosh t\) and Hubble constant \(H=\dot{a}/a=\tanh(t)\) (Carroll, 2003). For \(\mu=0\), the new coordinates are \[t=\ln p_{2},\qquad x=\psi_{2}\,, \tag{3.63}\] where \(dt=dp_{2}/p_{2}\) and \(dx=d\psi_{2}\) (we take \(p_{2}>0\)), and the metric transforms to \[ds^{2}=-dt^{2}+\frac{\mathrm{e}^{2t}}{\mathsf{A}^{2}}\,dx^{2}\,, \tag{3.64}\] which is still a Robertson-Walker metric with the scale factor \(a(t)=\mathrm{e}^{t}\) and Hubble constant \(H=\dot{a}/a=1\) (Carroll, 2003). **Remark 3.6**.: The analogy of the shape manifold \(\mathcal{B}\) with an expanding universe implies that a point on \(\mathcal{B}\), or shape \(S_{1}\), appears 'red-shifted' to another point, or shape \(S_{2}\), as the momentum \(p_{2}\) (time-like coordinate) increases. Thus, the low-momentum shapes with small geometric rotation drift are far apart from the high-momentum shapes with large geometric drift. So the analogy with the expanding universe implies that different shapes, or points, can be very far away from each other on the shape manifold and correspond to very different geometric rotation drifts. The extrinsic Euclidean metric would give a smaller distance between the two points, misleadingly presenting them as similar shapes. #### 3.4.2 Positive angular momentum: The hyperbolic plane \(\mathbb{H}^{2}\) For \(\mathsf{A}>0\), the shape manifold has negative Gaussian curvature \(K=-1\) and the metric in Eq.
(3.53) is Riemannian \[ds^{2}=\frac{1}{p_{2}^{2}+\mu^{2}}\,(dp_{2})^{2}+\frac{p_{2}^{2}+\mu^{2}}{\mathsf{A}^{2}}\,(d\psi_{2})^{2}\,. \tag{3.65}\] To reveal the nature of the geodesics, we can still use the coordinate transformations (3.61), (3.63). As an example, for \(\mu=0\) the metric (3.65) transforms to \[ds^{2}=dt^{2}+R^{2}(t)\,dx^{2}\,, \tag{3.66}\] where \(R(t)=\mathrm{e}^{t}/\mathsf{A}\). This is a disguised metric of the hyperbolic plane as the change of coordinates \(\tilde{x}=x\) and \(\tilde{y}=\mathsf{A}\exp(-t)\) transforms it to \[ds^{2}=\frac{d\tilde{x}^{2}+d\tilde{y}^{2}}{\tilde{y}^{2}}\,. \tag{3.67}\] **Remark 3.7**.: A point on \(\mathcal{B}\), or shape \(S_{1}\), appears far away from another point, or shape \(S_{2}\), as the momentum \(p_{2}\) decreases, because of the hyperbolic character of the metric. Thus, the low-momentum shapes with small geometric rotation drift are far apart from the high-momentum shapes with large geometric drift. If one uses the Euclidean metric instead, the two points would appear closer than they are. The Euclidean metric is misleading in the sense that far-away shapes appear as similar shapes when looking at them through a Euclidean lens. #### 3.4.3 The set of all geodesics More generally, let us assume that the geodesics of the metric in Eq. (3.53) are parameterized by \(\lambda\). Minimizing the action \(\int ds^{2}=\int L\,d\lambda\) with Lagrangian density \[L(p_{2}(\lambda),\psi_{2}(\lambda))=\operatorname{sgn}(\mathsf{A})\frac{(p_{2}^{\prime})^{2}}{p_{2}^{2}+\mu^{2}}+\frac{p_{2}^{2}+\mu^{2}}{\mathsf{A}^{2}}(\psi_{2}^{\prime})^{2}\,, \tag{3.68}\] gives \[\left(\frac{p_{2}^{\prime}}{p_{2}^{2}+\mu^{2}}\right)^{\prime}-\operatorname{sgn}(\mathsf{A})\frac{p_{2}}{\mathsf{A}^{2}}\,\psi_{2}^{\prime\,2}+\frac{p_{2}\,p_{2}^{\prime\,2}}{\left(p_{2}^{2}+\mu^{2}\right)^{2}}=0\,,\qquad\psi_{2}^{\prime}=\frac{\mathsf{A}^{2}c_{1}}{p_{2}^{2}+\mu^{2}}\,, \tag{3.69}\] where \(f^{\prime}\) denotes the derivative with respect to \(\lambda\) and \(c_{1}\) is a constant. Then, the first equation for \(p_{2}\) can be written as \[p_{2}^{\prime\prime}=p_{2}\,\frac{p_{2}^{\prime\,2}+\operatorname{sgn}(\mathsf{A})\,\mathsf{A}^{2}c_{1}^{2}}{p_{2}^{2}+\mu^{2}}\,. \tag{3.70}\] This ODE can be solved by using the substitution \(p_{2}^{\prime}=F(p_{2})\). Notice that \(p_{2}^{\prime\prime}=\frac{dF}{dp_{2}}p_{2}^{\prime}=F\,\frac{dF}{dp_{2}}\) and \[F\,\frac{dF}{dp_{2}}=p_{2}\,\frac{F^{2}+\operatorname{sgn}(\mathsf{A})\,\mathsf{A}^{2}c_{1}^{2}}{p_{2}^{2}+\mu^{2}}\,, \tag{3.71}\] which can be easily integrated to solve for \(F\): \[p_{2}^{\prime}=F(p_{2})=\pm\sqrt{c_{2}\left(p_{2}^{2}+\mu^{2}\right)-\operatorname{sgn}(\mathsf{A})\,\mathsf{A}^{2}c_{1}^{2}}\,, \tag{3.72}\] where \(c_{2}\) is another constant. Thus, the geodesic equations (3.69) are reduced to the following first-order system \[p_{2}^{\prime}=\pm\sqrt{c_{2}\left(p_{2}^{2}+\mu^{2}\right)-\operatorname{sgn}(\mathsf{A})\,\mathsf{A}^{2}c_{1}^{2}}\,,\qquad\quad\psi_{2}^{\prime}=\frac{\mathsf{A}^{2}c_{1}}{p_{2}^{2}+\mu^{2}}\,. \tag{3.73}\] Integrating the first equation yields \[-\frac{1}{\sqrt{c_{2}}}\,\log\left(-p_{2}\sqrt{c_{2}}+\sqrt{c_{2}\left(p_{2}^{2}+\mu^{2}\right)-\operatorname{sgn}(\mathsf{A})\,\mathsf{A}^{2}c_{1}^{2}}\right)=\pm\lambda+\lambda_{0}\,, \tag{3.74}\] which is valid in the range of values of \(\lambda\) for which the argument under the square root is non-negative, \(\lambda_{0}\) is a constant, and \(c_{2}\geq 0\).
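The first-order system (3.73) is also convenient for computing geodesics numerically, e.g., for plots such as Fig. 4. The following minimal sketch (Python with NumPy/SciPy; the values of \(c_{1}\), \(c_{2}\) and the initial point are illustrative assumptions, not data from the text) integrates (3.73) for the \(\mathsf{A}<0\) case of Fig. 4 and checks that the Lagrangian density \(L\) of Eq. (3.68) remains constant along the affinely parameterized geodesic, as it must.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters of Fig. 4 (A = -5, mu = 2); c1, c2 select one geodesic (assumed values).
A, mu = -5.0, 2.0
c1, c2 = 0.3, 1.0
s = np.sign(A)

def rhs(lam, y):
    """First-order geodesic system, Eq. (3.73), taking the '+' branch."""
    p2, psi2 = y
    g = p2**2 + mu**2
    dp2 = np.sqrt(c2*g - s*A**2*c1**2)   # for A < 0 the argument is always positive
    dpsi2 = A**2*c1/g
    return [dp2, dpsi2]

sol = solve_ivp(rhs, [0.0, 2.0], [0.0, 0.0], dense_output=True, rtol=1e-10, atol=1e-12)
lam = np.linspace(0.0, 2.0, 201)
p2, psi2 = sol.sol(lam)

# Constancy check: L = sgn(A) p2'^2/g + g psi2'^2/A^2 must equal sgn(A)*c2.
g = p2**2 + mu**2
L = s*(c2*g - s*A**2*c1**2)/g + g*(A**2*c1/g)**2/A**2
print(L.min(), L.max())                  # both ~ sgn(A)*c2 = -1.0
```

For \(\mathsf{A}>0\) the square-root argument in (3.73) bounds the admissible range of \(p_{2}\), consistent with the validity remark below Eq. (3.74).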
Thus, the geodesics are parameterized by \[p_{2}(\lambda)=\frac{\text{e}^{-\sqrt{c_{2}}(\pm\lambda+\lambda_{0})}\left[-1+\left(c_{2}\mu^{2}-\operatorname{sgn}(\mathsf{A})\,\mathsf{A}^{2}c_{1}^{2}\right)\text{e}^{2\sqrt{c_{2}}(\pm\lambda+\lambda_{0})}\right]}{2\sqrt{c_{2}}}\,, \tag{3.75}\] and \[\psi_{2}(\lambda)=\left\{\begin{array}{ll}\frac{\mathsf{A}c_{1}}{\mu}\tanh^{-1}\left[\frac{2\mathsf{A}\,\mu\,c_{1}\sqrt{c_{2}}\,\text{e}^{2\sqrt{c_{2}}(\pm\lambda+\lambda_{0})}}{1+\left(\mathsf{A}^{2}\,c_{1}^{2}+c_{2}\,\mu^{2}\right)\text{e}^{2\sqrt{c_{2}}(\pm\lambda+\lambda_{0})}}\right],&\mathsf{A}<0\,,\\ \\ \frac{\mathsf{A}c_{1}}{\mu}\Bigg\{\tanh^{-1}\left[\frac{\mathsf{A}\,c_{1}+\left(\mathsf{A}^{2}c_{1}^{2}+c_{2}\,\mu^{2}\right)\text{e}^{\sqrt{c_{2}}(\pm\lambda+\lambda_{0})}}{\sqrt{c_{2}}\,\mu}\right]-\tanh^{-1}\left[\frac{\mathsf{A}c_{1}-\left(\mathsf{A}^{2}c_{1}^{2}+c_{2}\,\mu^{2}\right)\text{e}^{\sqrt{c_{2}}(\pm\lambda+\lambda_{0})}}{\sqrt{c_{2}}\mu}\right]\Bigg\}\,,&\mathsf{A}>0\,.\end{array}\right. \tag{3.76}\] ## 4 Dynamics of (free) nonlinear planar \(N\)-pendula We next generalize the double pendulum system described above to an \(N\)-pendulum with \(N\) bobs with mass moments of inertia \((I_{1},I_{2},...,I_{N})\). The Lagrangian coordinates \(\theta_{j}\) are the angular positions of the rigid bars. For the specific problem we consider the action of \(N-1\) nonlinear springs on the bars, as depicted in Fig. 5. The associated potential depends on the angle differences of adjacent bars \[\Pi\left(\theta_{2}-\theta_{1},\theta_{3}-\theta_{2},...,\theta_{N}-\theta_{N-1}\right)=\Pi_{2}(\theta_{2}-\theta_{1})+\ldots+\Pi_{N}(\theta_{N}-\theta_{N-1})\,, \tag{4.1}\] and describes internal conservative moments \(M_{j}=-\partial_{\theta_{j}}\Pi\) acting on the bars, which are in equilibrium, that is \[M_{1}+M_{2}+\cdots+M_{N}=-\sum_{j=1}^{N}\frac{\partial\Pi}{\partial\theta_{j}}=0\,. \tag{4.2}\] The associated Lagrangian is written as \[\mathcal{L}=\sum_{n=1}^{N}\frac{1}{2}I_{n}\,\dot{\theta}_{n}^{2}-\Pi\left(\theta_{2}-\theta_{1},\theta_{3}-\theta_{2},...,\theta_{N}-\theta_{N-1}\right)\,. \tag{4.3}\] Minimizing the action \(\int\mathcal{L}dt\) yields the following dynamical equations \[\frac{d}{dt}\left(\frac{\partial\mathcal{L}}{\partial\dot{\theta}_{j}}\right)-\frac{\partial\mathcal{L}}{\partial\theta_{j}}=I_{j}\ddot{\theta}_{j}+\frac{\partial\Pi}{\partial\theta_{j}}=0\,,\qquad j=1,\cdots N\,. \tag{4.4}\] Figure 4: _Geodesics of the pseudo-Riemannian metric \(-\frac{1}{p_{2}^{2}+\mu^{2}}\,dp_{2}\otimes dp_{2}+\frac{p_{2}^{2}+\mu^{2}}{\mathsf{A}^{2}}\,d\psi_{2}\otimes d\psi_{2}\). Null-geodesics are the thin black curves and geodesics are the bold blue curves._ Figure 5: _A planar \(N\)-pendulum with \(N-1\) nonlinear springs. Note that the moments of inertia \(I_{j}=m_{j}L_{j}^{2}\), \(j=1,\cdots,N\), where \(m_{j}\) is the mass of the \(j\)th bob and \(L_{j}\) is the length of the \(j\)th rigid bar. The applied time-dependent moments are self-equilibrated, i.e., \(\sum_{j=1}^{N}\mathsf{M}_{j}^{e}(t)=0\)._ From (4.2) the potential moments are in equilibrium, and summing up equations (4.4) yields \[\frac{d}{dt}\left(\sum_{j=1}^{N}I_{j}\dot{\theta}_{j}(t)\right)=0\,.
\tag{4.5}\] Thus, the total angular momentum of the \(N\)-pendulum \[I_{1}\dot{\theta}_{1}(t)+I_{2}\dot{\theta}_{2}(t)+\cdots+I_{N}\dot{\theta}_{N}(t)=\mathsf{A}\,, \tag{4.6}\] is conserved over time and \(\mathsf{A}=I_{1}\dot{\theta}_{1}(0)+\cdots+I_{N}\dot{\theta}_{N}(0)\) is the initial momentum imparted by the angular velocities \(\dot{\theta}_{j}\) at time \(t=0\). We can associate a Hamiltonian system on the cotangent space \(T^{*}Q\) of the configuration space \(Q=\mathbb{T}^{N}\), that is the \(N\)-torus with a coordinate chart \(\{\theta_{1},\theta_{2},\cdots,\theta_{N}\}\). The conjugate momenta of the angles \(\theta_{j}\) are \(p_{j}=\partial_{\dot{\theta}_{j}}\mathcal{L}=I_{j}\dot{\theta}_{j}\). Thus, the phase space \(T^{*}Q\) has the coordinate chart \(\{\theta_{1},\theta_{2},...,\theta_{N},p_{1},p_{2},...,p_{N}\}\) and the Hamiltonian is given by \[\mathcal{H}=\frac{1}{2}\sum_{j=1}^{N}\frac{p_{j}^{2}}{I_{j}}+\Pi\left(\theta_{2}-\theta_{1},\theta_{3}-\theta_{2},...,\theta_{N}-\theta_{N-1}\right)\,. \tag{4.7}\] The dynamical equations follow from the Hamiltonian and read \(\dot{\mathbf{X}}=\mathbf{J}\nabla_{\mathbf{X}}\mathcal{H}\), where \[\mathbf{X}=\begin{bmatrix}\theta_{1}\\ \theta_{2}\\ \vdots\\ p_{1}\\ p_{2}\\ \vdots\end{bmatrix}\,, \tag{4.8}\] and \(\mathbf{J}\) is the following symplectic matrix \[\mathbf{J}=\begin{bmatrix}\mathbf{O}_{N}&\mathbf{I}_{N}\\ -\mathbf{I}_{N}&\mathbf{O}_{N}\end{bmatrix}\,. \tag{4.9}\] \(\mathbf{I}_{N}=[\delta_{ij}]\) is the \(N\times N\) identity matrix, \(\mathbf{O}_{N}\) is the \(N\times N\) null matrix, and \(\delta_{ij}\) is the Kronecker tensor. In particular, \[\dot{\theta}_{j}=\frac{p_{j}}{I_{j}}\,,\qquad\dot{p}_{j}=-\frac{\partial\Pi}{\partial\theta_{j}}\,,\quad j=1,\ldots N\,, \tag{4.10}\] and from Eq. (4.6) the conserved angular momentum is written as \[\mathsf{A}=\sum_{j=1}^{N}p_{j}(t)\,. \tag{4.11}\] The associated symplectic 1 and 2-forms are written as \[\alpha=\sum_{j=1}^{N}p_{j}\,d\theta_{j},\qquad d\alpha=\sum_{j=1}^{N}dp_{j}\wedge d\theta_{j}\,. \tag{4.12}\] Integrating the 1-form \(\alpha\) along the motion up to time \(t\) yields twice the time-integrated kinetic energy of the \(N\)-pendulum: \[\mathsf{E}(t)=\int_{0}^{t}\,\alpha\,d\tau=\sum_{j=1}^{N}\int_{0}^{t}\,p_{j}(\tau)\,\dot{\theta}_{j}(\tau)\,d\tau=2\int_{0}^{t}\mathsf{K}(\tau)\,d\tau\,,\qquad\mathsf{K}=\frac{1}{2}\sum_{j=1}^{N}I_{j}\dot{\theta}_{j}^{2}\,. \tag{4.13}\] To reveal the geometric nature of the dynamics, we consider the shape configuration space \(Q_{s}\), which has the coordinate chart \(\{\theta_{1},\psi_{2},\psi_{3},...,\psi_{N}\}\), where the shape parameters \(\psi_{j}=\theta_{j}-\theta_{1}\) represent the relative angular displacement of the \(N-1\) bars with respect to the first bar. Since the total angular momentum \(p_{1}+p_{2}+...+p_{N}=\mathsf{A}\) is known a priori, then \(p_{1}=\mathsf{A}-p_{2}-p_{3}-\ldots-p_{N}\) and the motion must occur on the subspace \(T^{*}Q/\mathsf{A}\), which has the coordinate chart \(\{\theta_{1},\psi_{2},p_{2},\psi_{3},p_{3},\ldots,\psi_{N},p_{N}\}\), where \((p_{j},\psi_{j})\) are a pair of conjugate variables. The 1-form in Eq. (4.12) reduces to read \[\alpha=\mathsf{A}\,d\theta_{1}+\sum_{j=2}^{N}p_{j}\,d\psi_{j}\,, \tag{4.14}\] and the associated symplectic 2-form is written as \[d\alpha=\sum_{j=2}^{N}dp_{j}\wedge d\psi_{j}\,.
\tag{4.15}\] The reduced phase space \(\mathcal{P}=T^{*}Q/\mathsf{A}\) has the geometric structure of a principal fiber bundle: the \(2(N-1)\)-dimensional shape manifold \(\mathcal{B}\) with a coordinate chart \(\{\psi_{2},p_{2},\cdots,\psi_{N},p_{N}\}\) and transversal one-dimensional fibers \(\mathcal{F}\) with coordinate chart \(\{\theta_{1}\}\). The vector field \(\dot{\mathbf{X}}_{s}=\dot{\mathbf{X}}_{\mathcal{B}}+\dot{\mathbf{X}}_{\mathcal{F}}\) can be decomposed as the sum of the flow \[\dot{\mathbf{X}}_{\mathcal{B}}=\begin{bmatrix}\dot{\psi}_{2}\\ \dot{p}_{2}\\ \vdots\\ \dot{\psi}_{N}\\ \dot{p}_{N}\end{bmatrix}\,, \tag{4.16}\] on the shape manifold \(\mathcal{B}\) and the flow \(\dot{\mathbf{X}}_{\mathcal{F}}=\dot{\theta}_{1}\) along the fiber \(\mathcal{F}\). Note that the motion on the shape manifold \(\mathcal{B}\) is independent from that along the fiber. As a matter of fact, \(\dot{\mathbf{X}}_{\mathcal{B}}\) does not depend on \(\dot{\theta}_{1}\): \[\dot{\psi}_{j}=p_{j}\left(\frac{1}{I_{1}}+\frac{1}{I_{j}}\right)-\frac{1}{I_{1}}\left(\mathsf{A}-\sum_{k=2,\,k\neq j}^{N}p_{k}\right)\,,\qquad\dot{p}_{j}=-\frac{\partial\hat{\Pi}}{\partial\psi_{j}}\,, \tag{4.17}\] and the associated reduced Hamiltonian is given by \[\mathcal{H}_{R}(\psi_{2},\cdots,\psi_{N},p_{2},\cdots,p_{N})=\frac{1}{2I_{1}}\left(\mathsf{A}-\sum_{k=2}^{N}p_{k}\right)^{2}+\sum_{j=2}^{N}\frac{p_{j}^{2}}{2I_{j}}+\hat{\Pi}\left(\psi_{2},\psi_{3},...,\psi_{N}\right)\,, \tag{4.18}\] where the potential is now given by \[\hat{\Pi}\left(\psi_{2},\psi_{3},\cdots,\psi_{N}\right)=\Pi_{2}(\psi_{2})+\Pi_{3}(\psi_{3}-\psi_{2})+\cdots+\Pi_{N}(\psi_{N}-\psi_{N-1})\,. \tag{4.19}\] On the contrary, the motion along the fiber depends on \(\dot{\mathbf{X}}_{\mathcal{B}}\) since \[\dot{\theta}_{1}=\frac{1}{I_{1}}\left(\mathsf{A}-\sum_{k=2}^{N}p_{k}\right)\,. \tag{4.20}\] The motion in the reduced state space \(\mathcal{P}=T^{*}Q/\mathsf{A}\) decouples into a reduced motion \(\dot{\mathbf{X}}_{\mathcal{B}}\) on the shape manifold \(\mathcal{B}\) and a drift \(\dot{\mathbf{X}}_{\mathcal{F}}\) along the fibers. The reduced motion on \(\mathcal{B}\) is the shape-changing evolution of the connected bobs. Such a shape dynamics induces the bobs to rigidly rotate together by the varying angle \(\theta_{1}\). From Eq. (4.14), we define the 1-form \(\widetilde{\alpha}=\alpha/\mathsf{A}\) and the total drift \(\theta_{1}\) along the fiber follows by integrating the 1-form \[d\theta_{1}=\widetilde{\alpha}-\sum_{k=2}^{N}\frac{p_{k}}{\mathsf{A}}\,d\psi_{k}\,, \tag{4.21}\] that is \[\theta_{1}=\int d\theta_{1}=\int_{0}^{t}\widetilde{\alpha}\,d\tau-\sum_{k=2}^{N}\int_{\gamma}\frac{p_{k}}{\mathsf{A}}\,d\psi_{k}\,, \tag{4.22}\] where \(\gamma\) is a closed trajectory of the motion up to time \(t\) in the shape manifold \(\mathcal{B}\). Thus, \[\theta_{1}(t)=\theta_{\text{dyn}}(t)+\theta_{\text{geom}}(t)\,, \tag{4.23}\] where the dynamical and geometric rotation drifts are defined as \[\theta_{\text{dyn}}(t)=\int_{0}^{t}\widetilde{\alpha}\,d\tau,\qquad\theta_{\text{geom}}(t)=-\sum_{k=2}^{N}\int_{\gamma}\frac{p_{k}}{\mathsf{A}}\,d\psi_{k}\,. \tag{4.24}\] Here, the dynamical rotation drift \(\theta_{\text{dyn}}\) depends on the inertia of the bobs and can be written as \[\theta_{\text{dyn}}(t)=2\int_{0}^{t}\frac{\mathsf{K}(\tau)}{\mathsf{A}}\,d\tau\,,\qquad\mathsf{K}(t)=\frac{1}{2}\sum_{j=1}^{N}\frac{p_{j}^{2}}{I_{j}}\,, \tag{4.25}\] where \(\mathsf{K}(t)\) is the total kinetic energy and \(\mathsf{A}\) is the total angular momentum.
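The splitting (4.23)–(4.25) can be verified directly along any trajectory, not only along closed loops. The sketch below (Python/SciPy; the inertias, quadratic spring potentials and initial data are illustrative assumptions, not values from the text) integrates the reduced equations (4.17) together with the fiber equation (4.20) for \(N=3\), accumulating \(\theta_{\text{dyn}}\) and the running integral \(-\sum_{k}\int p_{k}\,d\psi_{k}/\mathsf{A}\), and confirms that \(\theta_{1}=\theta_{\text{dyn}}+\theta_{\text{geom}}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative N = 3 pendulum (assumed parameters, not values from the text).
I1, I2, I3 = 1.0, 2.0, 0.5
k2, k3 = 4.0, 9.0                 # Pi_hat = k2 psi2^2/2 + k3 (psi3 - psi2)^2/2
A = 1.5                           # conserved total angular momentum

def rhs(t, y):
    psi2, psi3, p2, p3, th1, th_dyn, th_geom = y
    p1 = A - p2 - p3
    dpsi2 = p2/I2 - p1/I1                 # Eq. (4.17)
    dpsi3 = p3/I3 - p1/I1
    dp2 = -(k2*psi2 - k3*(psi3 - psi2))   # -dPi_hat/dpsi2
    dp3 = -k3*(psi3 - psi2)               # -dPi_hat/dpsi3
    K = 0.5*(p1**2/I1 + p2**2/I2 + p3**2/I3)
    dth1 = p1/I1                          # Eq. (4.20)
    dth_dyn = 2.0*K/A                     # Eq. (4.25)
    dth_geom = -(p2*dpsi2 + p3*dpsi3)/A   # open-path version of Eq. (4.24)
    return [dpsi2, dpsi3, dp2, dp3, dth1, dth_dyn, dth_geom]

y0 = [0.4, -0.2, 0.1, 0.3, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, [0.0, 20.0], y0, rtol=1e-10, atol=1e-12)
th1, th_dyn, th_geom = sol.y[4, -1], sol.y[5, -1], sol.y[6, -1]
print(th1, th_dyn + th_geom)      # the two numbers coincide to integration accuracy
```

When the shape variables trace a closed loop \(\gamma\), the accumulated \(\theta_{\text{geom}}\) reduces to the area integral of Eq. (4.26) below.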
If all the bobs are rigidly connected and cannot change their shape, i.e., \(\dot{\psi}_{j}=0\) and so no motion on the base manifold \(\mathcal{B}\), then the rotation drift is solely due to the inertia of the system measured by the total angular momentum. If the bobs undergo changes in shape, i.e., the angles \(\psi_{j}\) vary over time, then the motion on \(\mathcal{B}\) induces the geometric rotation drift, which from Eq. (4.24) can be written as \[\theta_{\text{geom}}=-\sum_{k=2}^{N}\int_{\gamma}\frac{p_{k}}{\mathsf{A}}\,d\psi_{k}=-\sum_{k=2}^{N}\int_{S(\gamma)}\frac{1}{\mathsf{A}}\,dp_{k}\wedge\,d\psi_{k}\,. \tag{4.26}\] Such a rotation drift is proportional to the area \(S(\gamma)\) enclosed by the path \(\gamma\) spanned by the motion on the shape manifold \(\mathcal{B}\). Thus, it is purely geometric since it does not depend on the time it takes for the bobs to undergo a cyclic shape change, or to span the closed path \(\gamma\) on \(\mathcal{B}\). The part of \(\mathsf{E}(t)\) that arises from the same cyclic shape change, \[\mathsf{E}_{\text{geom}}(t)=-\int_{0}^{t}\mathsf{A}\,\dot{\theta}_{\text{geom}}(\tau)\,d\tau=\sum_{k=2}^{N}\int_{\gamma}p_{k}\,d\psi_{k}=\sum_{k=2}^{N}\int_{S(\gamma)}dp_{k}\wedge\,d\psi_{k}\,, \tag{4.27}\] does not depend on the duration of the cyclic change either. Note that the difference \[\mathsf{E}(t)-\mathsf{E}_{\text{geom}}(t)=\int_{0}^{t}\mathsf{A}\,\dot{\theta}_{1}\,d\tau\,, \tag{4.28}\] is the part associated with the total rotation drift \(\theta_{1}\). In the following, we will show that the base manifold can be endowed with a Riemannian structure. ### Curvature and intrinsic metric of the shape manifold The geometric rotation drift can be interpreted as the curvature of the shape manifold \(\mathcal{B}\) equipped with a pseudo-Riemannian diagonal metric of the form \[ds^{2}=\sum_{j=2}^{N}\left[\epsilon_{p_{j}}G_{p_{j}}(dp_{j})^{2}+\epsilon_{\psi_{j}}G_{\psi_{j}}(d\psi_{j})^{2}\right]\,, \tag{4.29}\] where the \(2(N-1)\) positive metric coefficients depend on the coordinates \(\{p_{2},\psi_{2},\cdots,p_{N},\psi_{N}\}\), in general, and \((\epsilon_{p_{2}},\epsilon_{\psi_{2}},\cdots)\) is the signature of the manifold. The metric coefficients will be calculated using Cartan's structural equations as follows. From Eq. (4.26) the geometric drift follows by integrating the 2-form \[d\,\widetilde{\alpha}=\sum_{j=2}^{N}d\,\widetilde{\alpha}_{j}\,, \tag{4.30}\] where \[\widetilde{\alpha}_{j}=-\frac{p_{j}}{\mathsf{A}}\,d\psi_{j}\,,\qquad d\,\widetilde{\alpha}_{j}=-\frac{1}{\mathsf{A}}\,dp_{j}\wedge\,d\psi_{j},\quad j=2,\cdots,N\,. \tag{4.31}\] Drawing on Cartan's second structural equations (2.43), the collection of the \((N-1)\) 2-forms \(d\,\widetilde{\alpha}_{j}\) are interpreted as the non-zero curvature 2-forms of a \(2(N-1)\)-dimensional manifold, and the associated connection 1-forms are \(\widetilde{\alpha}_{j}\). For a metric-compatible connection on the \(M=2(N-1)\)-dimensional shape manifold there are \(M(M-1)/2=(N-1)(2N-3)\) connection 1-forms and as many curvature 2-forms. In particular, \[\omega^{p_{j}}{}_{p_{k}}\,,\ \omega^{\psi_{j}}{}_{\psi_{k}}\,,\quad j<k=2,\ldots,N\,, \tag{4.32}\] are \(2\times\frac{(N-1)(N-2)}{2}=(N-1)(N-2)\) connection 1-forms. The remaining \((N-1)^{2}\) connection 1-forms are \[\omega^{\psi_{k}}{}_{p_{j}}\,,\quad j,\,k=2,\ldots,N\,.
\tag{4.33}\] Therefore, in total we have \((N-1)(N-2)+(N-1)^{2}=(N-1)(2N-3)\) connection 1-forms and as many curvature 2-forms given by \[\begin{split}\mathcal{R}^{p_{j}}{}_{p_{k}}&=\mathcal{R}^{\psi_{j}}{}_{\psi_{k}}=0\,,\qquad\qquad\qquad j<k=2,\ldots,N\,,\\ \mathcal{R}^{\psi_{k}}{}_{p_{j}}&=0\,,\qquad\qquad\qquad\qquad j\neq k\,,\\ \mathcal{R}^{\psi_{j}}{}_{p_{j}}&=d\,\widetilde{\alpha}_{j}=-\frac{1}{\mathsf{A}}\,dp_{j}\wedge\,d\psi_{j}\,,\quad j=2,\ldots,N\,.\end{split} \tag{4.34}\] The unknown connection 1-forms satisfy Cartan's second structural equations (2.43) \[\begin{split}\mathcal{R}^{p_{j}}{}_{p_{k}}&=0=d\omega^{p_{j}}{}_{p_{k}}+\omega^{p_{j}}{}_{\gamma}\wedge\omega^{\gamma}{}_{p_{k}}\,,\qquad\qquad\qquad j<k=2,\ldots,N\,,\\ \mathcal{R}^{\psi_{j}}{}_{\psi_{k}}&=0=d\omega^{\psi_{j}}{}_{\psi_{k}}+\omega^{\psi_{j}}{}_{\gamma}\wedge\omega^{\gamma}{}_{\psi_{k}}\,,\qquad\qquad\qquad j<k=2,\ldots,N\,,\\ \mathcal{R}^{\psi_{k}}{}_{p_{j}}&=0=d\omega^{\psi_{k}}{}_{p_{j}}+\omega^{\psi_{k}}{}_{\gamma}\wedge\omega^{\gamma}{}_{p_{j}}\,,\qquad\qquad j\neq k\,,\\ \mathcal{R}^{\psi_{j}}{}_{p_{j}}&=-\frac{1}{\mathsf{A}}\,dp_{j}\wedge\,d\psi_{j}=d\omega^{\psi_{j}}{}_{p_{j}}+\omega^{\psi_{j}}{}_{\gamma}\wedge\omega^{\gamma}{}_{p_{j}}\,,\quad j=2,\ldots,N\,.\end{split} \tag{4.35}\] These are more explicitly written as \[\begin{split}& d\omega^{p_{j}}{}_{p_{k}}+\sum_{i=2}^{N}\omega^{p_{j}}{}_{\psi_{i}}\wedge\omega^{\psi_{i}}{}_{p_{k}}+\sum_{i=2}^{N}\omega^{p_{j}}{}_{p_{i}}\wedge\omega^{p_{i}}{}_{p_{k}}=0\,,\qquad\qquad\qquad j<k=2,\ldots,N\,,\\ & d\omega^{\psi_{j}}{}_{\psi_{k}}+\sum_{i=2}^{N}\omega^{\psi_{j}}{}_{\psi_{i}}\wedge\omega^{\psi_{i}}{}_{\psi_{k}}+\sum_{i=2}^{N}\omega^{\psi_{j}}{}_{p_{i}}\wedge\omega^{p_{i}}{}_{\psi_{k}}=0\,,\qquad\qquad\qquad j<k=2,\ldots,N\,,\\ & d\omega^{\psi_{k}}{}_{p_{j}}+\sum_{i=2}^{N}\omega^{\psi_{k}}{}_{\psi_{i}}\wedge\omega^{\psi_{i}}{}_{p_{j}}+\sum_{i=2}^{N}\omega^{\psi_{k}}{}_{p_{i}}\wedge\omega^{p_{i}}{}_{p_{j}}=0\,,\qquad\qquad\qquad j\neq k\,,\\ & d\omega^{\psi_{j}}{}_{p_{j}}+\sum_{i=2}^{N}\omega^{\psi_{j}}{}_{\psi_{i}}\wedge\omega^{\psi_{i}}{}_{p_{j}}+\sum_{i=2}^{N}\omega^{\psi_{j}}{}_{p_{i}}\wedge\omega^{p_{i}}{}_{p_{j}}=-\frac{1}{\mathsf{A}}\,dp_{j}\wedge\,d\psi_{j}\,,\quad j=2,\ldots,N\,.\end{split} \tag{4.36}\] The case \(N=2\) is trivial as there is a unique solution \(\omega^{\psi_{2}}{}_{p_{2}}=\frac{\psi_{2}}{\mathsf{A}}\,dp_{2}+\xi\) given in Eq. (3.43), where \(\xi\) is any closed 1-form, which can be neglected as it does not contribute to the geometric phase. For \(N>2\) we have a system of nonlinear equations to solve for the connection 1-forms, and there may be more than one solution. If we require the only non-zero connection forms to be \(\widetilde{\alpha}_{j},\ j=2,\ldots,N\) in Eq. (4.31), then we have a solution that follows from Eqs. (4.32) and (4.33) as \(\omega^{\psi_{j}}{}_{p_{j}}=-\frac{p_{j}}{\mathsf{A}}\,d\psi_{j}\), \(j=2,\ldots,N\), and \(\omega^{p_{k}}{}_{\psi_{j}}=\omega^{p_{k}}{}_{p_{j}}=\omega^{\psi_{k}}{}_{\psi_{j}}=0\). From Eq. (4.30), it follows that the non-zero curvature 2-forms are the exterior derivatives of the 1-forms \(\widetilde{\alpha}_{j}\) \[\mathcal{R}^{\psi_{j}}{}_{p_{j}}=d\omega^{\psi_{j}}{}_{p_{j}}=d\,\widetilde{\alpha}_{j}\,,\quad j=2,\ldots,N\,. \tag{4.37}\] From Eq.
(2.40) we then have \[\begin{split}\omega^{p_{k}}{}_{\psi_{j}}&=\frac{\partial_{\psi_{j}}\mathsf{G}_{p_{k}}}{2\sqrt{\mathsf{G}_{p_{k}}\mathsf{G}_{\psi_{j}}}}\,dp_{k}-\epsilon_{p_{k}}\epsilon_{\psi_{j}}\,\frac{\partial_{p_{k}}\mathsf{G}_{\psi_{j}}}{2\sqrt{\mathsf{G}_{p_{k}}\mathsf{G}_{\psi_{j}}}}\,d\psi_{j}=0\,,\qquad j\neq k\,,\\ \omega^{\psi_{k}}{}_{\psi_{j}}&=\frac{\partial_{\psi_{j}}\mathsf{G}_{\psi_{k}}}{2\sqrt{\mathsf{G}_{\psi_{k}}\mathsf{G}_{\psi_{j}}}}\,d\psi_{k}-\epsilon_{\psi_{k}}\epsilon_{\psi_{j}}\,\frac{\partial_{\psi_{k}}\mathsf{G}_{\psi_{j}}}{2\sqrt{\mathsf{G}_{\psi_{k}}\mathsf{G}_{\psi_{j}}}}\,d\psi_{j}=0\,,\qquad j<k=2,\ldots,N\,,\\ \omega^{p_{k}}{}_{p_{j}}&=\frac{\partial_{p_{j}}\mathsf{G}_{p_{k}}}{2\sqrt{\mathsf{G}_{p_{k}}\mathsf{G}_{p_{j}}}}\,dp_{k}-\epsilon_{p_{k}}\epsilon_{p_{j}}\,\frac{\partial_{p_{k}}\mathsf{G}_{p_{j}}}{2\sqrt{\mathsf{G}_{p_{k}}\mathsf{G}_{p_{j}}}}\,dp_{j}=0\,,\qquad j<k=2,\ldots,N\,,\\ \omega^{p_{j}}{}_{\psi_{j}}&=\frac{\partial_{\psi_{j}}\mathsf{G}_{p_{j}}}{2\sqrt{\mathsf{G}_{p_{j}}\mathsf{G}_{\psi_{j}}}}\,dp_{j}-\epsilon_{p_{j}}\epsilon_{\psi_{j}}\,\frac{\partial_{p_{j}}\mathsf{G}_{\psi_{j}}}{2\sqrt{\mathsf{G}_{p_{j}}\mathsf{G}_{\psi_{j}}}}\,d\psi_{j}=-\frac{p_{j}}{\mathsf{A}}\,d\psi_{j}\,,\qquad j=2,\ldots,N\,.\end{split} \tag{4.38}\] Thus, we must have \[\partial_{\psi_{k}}\mathsf{G}_{p_{j}}=0,\qquad\partial_{p_{k}}\mathsf{G}_{p_{j}}=0,\qquad\partial_{\psi_{k}}\mathsf{G}_{\psi_{j}}=0,\qquad\partial_{p_{k}}\mathsf{G}_{\psi_{j}}=0,\quad k\neq j\,, \tag{4.39}\] and \[\partial_{\psi_{j}}\mathsf{G}_{p_{j}}=0,\qquad-\epsilon_{p_{j}}\epsilon_{\psi_{j}}\,\frac{\partial_{p_{j}}\mathsf{G}_{\psi_{j}}}{2\sqrt{\mathsf{G}_{p_{j}}\mathsf{G}_{\psi_{j}}}}=-\frac{p_{j}}{\mathsf{A}}\,. \tag{4.40}\] The above relations imply that \(\mathsf{G}_{p_{j}}=\mathsf{G}_{p_{j}}(p_{j})\) and \(\mathsf{G}_{\psi_{j}}=\mathsf{G}_{\psi_{j}}(\psi_{j},p_{j})\). Thus, the metric coefficients \(\mathsf{G}_{p_{j}}\) and \(\mathsf{G}_{\psi_{j}}\) depend only on the coordinates \(\{\psi_{j},p_{j}\}\) of the submanifold (hyper-plane) \(\mathcal{B}_{j}\). The curvature 2-forms \(\mathcal{R}^{\psi_{j}}{}_{p_{j}}\) in Eq. (4.37) are now expressed in terms of the metric coefficients using Eq. (2.48) as \[\mathcal{R}^{\psi_{j}}{}_{p_{j}}=d\omega^{\psi_{j}}{}_{p_{j}}=K_{(p_{j},\psi_{j})}\sqrt{\mathsf{G}_{p_{j}}\mathsf{G}_{\psi_{j}}}\,dp_{j}\wedge d\psi_{j}\,, \tag{4.41}\] where \(K_{(p_{j},\psi_{j})}\) is the Gaussian curvature of the hyper-plane \(\mathcal{B}_{j}\) with coordinates \(\{\psi_{j},p_{j}\}\) (see Eq. (2.50)). Then, Eq. (4.37) imposes the equality of the 2-forms \(d\,\widetilde{\alpha}_{j}=d\omega^{\psi_{j}}{}_{p_{j}}\), that is \[-\frac{1}{\mathsf{A}}\,dp_{j}\wedge d\psi_{j}=K_{(p_{j},\psi_{j})}\sqrt{\mathsf{G}_{p_{j}}\mathsf{G}_{\psi_{j}}}\,dp_{j}\wedge d\psi_{j}\,, \tag{4.42}\] and it follows that \[K_{(p_{j},\psi_{j})}=-\frac{1}{\mathsf{A}\sqrt{\mathsf{G}_{p_{j}}\mathsf{G}_{\psi_{j}}}}\,. \tag{4.43}\] Similar to the 2-pendulum, we further require that the symplectic 2-form \(d\,\widetilde{\alpha}_{j}\) is compatible with the (pseudo) Riemannian volume (area) 2-form \(\mathsf{vol}_{\mathcal{B}_{j}}=\sqrt{\mathsf{G}_{p_{j}}\mathsf{G}_{\psi_{j}}}\,dp_{j}\wedge d\psi_{j}\) of the submanifold \(\mathcal{B}_{j}\), that is \[\frac{1}{|\mathsf{A}|}\,dp_{j}\wedge d\psi_{j}=\sqrt{\mathsf{G}_{p_{j}}\mathsf{G}_{\psi_{j}}}\,dp_{j}\wedge d\psi_{j}\,. \tag{4.44}\] This implies that \[\frac{1}{|\mathsf{A}|}=\sqrt{\mathsf{G}_{p_{j}}\mathsf{G}_{\psi_{j}}}\,, \tag{4.45}\] which together with the second equation of Eq.
(4.40) gives us \[\epsilon_{p_{j}}\epsilon_{\psi_{j}}\,\partial_{p_{j}}\mathsf{G}_{\psi_{j}}=2\frac{p_{j}}{\mathsf{A}|\mathsf{A}|}\,. \tag{4.46}\] Solving for \(\mathsf{G}_{\psi_{j}}\), and using Eq. (4.45) to solve for \(\mathsf{G}_{p_{j}}\), we get \[\mathsf{G}_{p_{j}}=\frac{1}{p_{j}^{2}+\mu_{j}^{2}},\qquad\mathsf{G}_{\psi_{j}}=\frac{p_{j}^{2}+\mu_{j}^{2}}{\mathsf{A}^{2}},\qquad\epsilon_{p_{j}}=\operatorname{sgn}(\mathsf{A}),\qquad\epsilon_{\psi_{j}}=1\,, \tag{4.47}\] where we have imposed that both metric coefficients are positive and \(\mu_{j}\) are arbitrary constants. The shape manifold \(\mathcal{B}\) is thus the product manifold of \((N-1)\) shape submanifolds \(\mathcal{B}_{j}\) with local coordinate charts \(\{p_{j},\psi_{j}\}\), or \(\mathcal{B}=\mathcal{B}_{2}\times\cdots\times\mathcal{B}_{N}\) (see §2.5). Each submanifold is the shape space of two adjacent bobs, or a two-pendulum. Thus, the intrinsic metric of each submanifold follows from Eq. (4.47) (or from Eq. (3.53)) as \[\mathbf{G}_{j}=\frac{\operatorname{sgn}(\mathsf{A})}{p_{j}^{2}+\mu_{j}^{2}}\,dp_{j}\otimes dp_{j}+\frac{p_{j}^{2}+\mu_{j}^{2}}{\mathsf{A}^{2}}\,d\psi_{j}\otimes d\psi_{j}\,. \tag{4.48}\] Then the metric of \(\mathcal{B}\) is the product of these metrics, i.e., \[\mathbf{G}=\mathbf{G}_{2}\times\ldots\times\mathbf{G}_{N}=\sum_{j=2}^{N}\left[\frac{\operatorname{sgn}(\mathsf{A})}{p_{j}^{2}+\mu_{j}^{2}}\,dp_{j}\otimes dp_{j}+\frac{p_{j}^{2}+\mu_{j}^{2}}{\mathsf{A}^{2}}\,d\psi_{j}\otimes d\psi_{j}\right]\,. \tag{4.49}\] From Eq. (4.30) the geometric drift follows by integrating the 2-form \[d\,\widetilde{\alpha}=\sum_{j=2}^{N}\,\mathcal{R}^{p_{j}}{}_{\psi_{j}}(\mathbf{e}_{p_{j}},\mathbf{e}_{\psi_{j}})\,. \tag{4.50}\] This is the sum of the curvature 2-forms of each submanifold \(\mathcal{B}_{j}\), that is \[\theta_{\text{geom}}=-\sum_{j=2}^{N}\int_{S(\gamma)}\mathcal{R}^{p_{j}}{}_{\psi_{j}}(\mathbf{e}_{p_{j}},\mathbf{e}_{\psi_{j}})=-\sum_{j=2}^{N}\int_{S(\gamma)}\frac{1}{\mathsf{A}}\,dp_{j}\wedge\,d\psi_{j}\,, \tag{4.51}\] where each term is both the oriented area and curvature of the projected path \(\gamma\) on the hyper-plane \(\mathcal{B}_{j}\) with coordinates \(\{\psi_{j},p_{j}\}\). The geodesics of the product manifold \(\mathcal{B}\) are the Cartesian product of the geodesics of each submanifold \(\mathcal{B}_{j}\), which follow from Eq. (3.59). ### Negative angular momentum: Multi-universe For \(\mathsf{A}<0\), the metric (4.49) describes a multi-universe of expanding 2D Robertson-Walker spacetimes (Misner et al., 1973; Carroll, 2003). Indeed, for \(\mu_{j}\neq 0\) we use the chart coordinate transformation (3.61) \[t_{j}=\tanh^{-1}\left[\frac{p_{j}}{\sqrt{p_{j}^{2}+\mu_{j}^{2}}}\right],\qquad x_{j}=\psi_{j}\,,\quad j=2,\cdots,N\,. \tag{4.52}\] Then \(dt_{j}=dp_{j}/\sqrt{p_{j}^{2}+\mu_{j}^{2}},\ dx_{j}=d\psi_{j}\), and the metric (4.49) transforms into the sum of \((N-1)\) 2-dimensional Robertson-Walker metrics of the submanifolds \(\mathcal{B}_{j}\) \[ds^{2}=\sum_{j=2}^{N}ds_{j}^{2},\qquad ds_{j}^{2}=-dt_{j}^{2}+\frac{\mu_{j}^{2}(\cosh t_{j})^{2}}{\mathsf{A}^{2}}\,dx_{j}^{2}\,,\quad j=2,\cdots,N\,, \tag{4.53}\] with the scale factor \(a(t_{j})\sim\cosh t_{j}\). The associated Hubble constant of each spacetime is \(H_{j}=\dot{a}_{j}/a_{j}=\tanh(t_{j})\), indicating a matter-dominated universe for small \(t_{j}\) and a vacuum-dominated one for large \(t_{j}\) (Carroll, 2003).
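The transformation (4.52) can be checked symbolically. The short sketch below (Python/SymPy) uses the inverse substitution \(p_{j}=\mu_{j}\sinh t_{j}\), which is equivalent to Eq. (4.52), and confirms that each \(\mathsf{A}<0\) block of the metric (4.49) takes the Robertson-Walker form (4.53).

```python
import sympy as sp

t, mu, A = sp.symbols('t mu A', positive=True)

# Eq. (4.52) inverts to p_j = mu_j sinh(t_j); pull one metric block back to t_j.
p = mu*sp.sinh(t)
dp_dt = sp.diff(p, t)

g_pp = -1/(p**2 + mu**2)          # sgn(A) = -1 block of the metric (4.49)
g_psipsi = (p**2 + mu**2)/A**2

g_tt = sp.simplify(g_pp*dp_dt**2)  # coefficient of dt_j^2 after the pullback
print(g_tt)                                               # -> -1
print(sp.simplify(g_psipsi - mu**2*sp.cosh(t)**2/A**2))   # -> 0
```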
Similarly, if \(\mu_{j}=0\) we use the coordinate transformation (3.63) \[t_{j}=\ln p_{j},\qquad x_{j}=\psi_{j}\,,\quad j=2,\cdots,N\,, \tag{4.54}\] where \(dt_{j}=dp_{j}/p_{j}\) and \(dx_{j}=d\psi_{j}\) (for \(p_{j}>0\)), and the metric transforms to \[ds_{j}^{2}=-dt_{j}^{2}+\frac{\mathrm{e}^{2t_{j}}}{\mathsf{A}^{2}}dx_{j}^{2}\,,\quad j=2,\cdots,N\,, \tag{4.55}\] which is still a Robertson-Walker metric with the scale factor \(a(t_{j})\sim\mathrm{e}^{t_{j}}\) and Hubble constant \(H=\dot{a}_{j}/a_{j}=1\) (Carroll, 2003). ### Positive angular momentum: The hyperbolic product space \(\mathbb{H}^{2(N-1)}\) For \(\mathsf{A}>0\), each submanifold \(\mathcal{B}_{j}\) is a hyperbolic plane \(\mathbb{H}^{2}\) and the shape manifold is the Cartesian product of \(N-1\) hyperbolic planes \(\mathbb{H}^{2}\), each with negative Gaussian curvature \(\mathsf{K}_{j}=-1\). So, \(\mathcal{B}\) is the hyperbolic product space \(\mathbb{H}^{2(N-1)}\). As an example, for \(\mu_{j}=0\) we use the coordinate transformations (3.61), (3.63) and the metric (4.49) transforms into the sum of \(N-1\) metrics \[ds^{2}=\sum_{j=2}^{N}ds_{j}^{2},\qquad ds_{j}^{2}=dt_{j}^{2}+R_{j}^{2}(t_{j})\,dx_{j}^{2}\,,\quad j=2,\cdots,N\,, \tag{4.56}\] where \(R_{j}(t_{j})=\mathrm{e}^{t_{j}}/\mathsf{A}\). The metrics \(ds_{j}^{2}\) of the submanifolds \(\mathcal{B}_{j}\) are disguised metrics of the hyperbolic plane as the change of coordinates \(\tilde{x}_{j}=x_{j}\) and \(\tilde{y}_{j}=\mathsf{A}\exp(-t_{j})\) transforms each of them into \[ds_{j}^{2}=\frac{d\tilde{x}_{j}^{2}+d\tilde{y}_{j}^{2}}{\tilde{y}_{j}^{2}}\,,\quad j=2,\cdots,N\,. \tag{4.57}\] ## 5 Dynamics of nonlinear planar \(N\)-pendula under self-equilibrated external moments We next generalize the \(N\)-pendulum system described above by assuming that time-dependent external moments \(\mathsf{M}_{j}^{e}(t)\), \(j=1,\cdots,N\) act on the rigid bars (see Fig. 5). In order to preserve the invariance of the total angular momentum we assume that \[\sum_{j=1}^{N}\mathsf{M}_{j}^{e}(t)=0\,. \tag{5.1}\] The associated Lagrangian is \[\mathcal{L}=\sum_{j=1}^{N}\frac{1}{2}I_{j}\,\dot{\theta}_{j}^{2}-\Pi\left(\theta_{2}-\theta_{1},\theta_{3}-\theta_{2},...,\theta_{N}-\theta_{N-1}\right)+\sum_{j=1}^{N}\mathsf{M}_{j}^{e}(t)\,\theta_{j}\,, \tag{5.2}\] where the external moments enter via the Lagrange-d'Alembert principle; equivalently, one can use Hamilton's principle with the modified Lagrangian given in (5.2). The associated dynamical equations follow by minimizing the action \(\int\mathcal{L}dt\) as \[\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{\theta}_{j}}-\frac{\partial\mathcal{L}}{\partial\theta_{j}}=I_{j}\ddot{\theta}_{j}+\frac{\partial\Pi}{\partial\theta_{j}}-\mathsf{M}_{j}^{e}(t)=0\,,\quad j=1,\cdots N\,. \tag{5.3}\] From (4.2) the potential moments are in equilibrium, and summing up equations (5.3) yields \[\frac{d}{dt}\sum_{j=1}^{N}I_{j}\dot{\theta}_{j}(t)=\sum_{j=1}^{N}\mathsf{M}_{j}^{e}(t)\,. \tag{5.4}\] From Eq. (5.1) the sum of the external moments on the right-hand side is null and the total angular momentum is conserved, i.e., \[I_{1}\dot{\theta}_{1}(t)+I_{2}\dot{\theta}_{2}(t)+\cdots+I_{N}\dot{\theta}_{N}(t)=\mathsf{A}\,. \tag{5.5}\] ### Extended autonomous Hamiltonian system The \(N\)-pendulum is a non-autonomous system since the Lagrangian is explicitly time-dependent.
We can associate an extended autonomous Hamiltonian system on the cotangent bundle \(T^{*}Q_{t}\) of the extended configuration space \(Q_{t}=\mathbb{R}\times\mathbb{T}^{N}\), i.e., the Cartesian product of the real line \(\mathbb{R}\) and the \(N\)-torus. \(Q_{t}\) has the coordinate chart \(\{t,\theta_{1},\theta_{2},\cdots,\theta_{N}\}\). The conjugate momentum of time \(t\) is the energy \(E\) and \(p_{j}=I_{j}\dot{\theta}_{j}\) are the conjugate momenta of the angles \(\theta_{j}\). Thus, the phase space \(T^{*}Q_{t}\) is the cotangent space of \(Q_{t}\), and has the coordinate chart \(\{t,\theta_{1},\theta_{2},...,\theta_{N},E,p_{1},p_{2},...,p_{N}\}\). A generic trajectory in the extended phase space is parameterized by the parameter \(\lambda\). The Hamiltonian is given by \[\mathcal{H}=E+\frac{1}{2}\sum_{j=1}^{N}\frac{p_{j}^{2}}{I_{j}}+\Pi\left(\theta_{2}-\theta_{1},\theta_{3}-\theta_{2},...,\theta_{N}-\theta_{N-1}\right)-\sum_{j=1}^{N}\mathsf{M}_{j}^{e}\,\theta_{j}\,. \tag{5.6}\] The dynamical equations follow from the Hamiltonian by \(\mathbf{X}^{\prime}=\mathbf{J}\nabla_{\mathbf{X}}\mathcal{H}\), where \(\mathbf{X}^{\prime}=\frac{d\mathbf{X}}{d\lambda}\) denotes differentiation with respect to \(\lambda\), and \[\mathbf{X}=\begin{bmatrix}t\\ \theta_{1}\\ \theta_{2}\\ \vdots\\ E\\ p_{1}\\ p_{2}\\ \vdots\end{bmatrix}\,, \tag{5.7}\] and \(\mathbf{J}\) is the symplectic matrix \[\mathbf{J}=\begin{bmatrix}\mathbf{O}_{N+1}&\mathbf{I}_{N+1}\\ -\mathbf{I}_{N+1}&\mathbf{O}_{N+1}\end{bmatrix}\,. \tag{5.8}\] \(\mathbf{I}_{N+1}=[\delta_{ij}]\) is the \((N+1)\times(N+1)\) identity matrix, \(\mathbf{O}_{N+1}\) is the \((N+1)\times(N+1)\) null matrix, and \(\delta_{ij}\) is the Kronecker tensor. In particular, \[t^{\prime}=\frac{\partial\mathcal{H}}{\partial E}=1\,,\quad E^{\prime}=-\frac{\partial\mathcal{H}}{\partial t}=\sum_{j=1}^{N}\frac{d\mathsf{M}_{j}^{e}}{dt}\theta_{j}\,,\quad\theta_{j}^{\prime}=\frac{p_{j}}{I_{j}}\,,\quad p_{j}^{\prime}=-\frac{\partial\Pi}{\partial\theta_{j}}+\mathsf{M}_{j}^{e}(t)\,,\quad j=1,\dots N\,. \tag{5.9}\] From Eq. (5.5) the conserved total angular momentum is \(\mathsf{A}=\sum_{j=1}^{N}p_{j}\). The associated symplectic 1 and 2-forms are \[\alpha=Edt+\sum_{j=1}^{N}p_{j}\,d\theta_{j},\qquad d\alpha=dE\wedge dt+\sum_{j=1}^{N}dp_{j}\wedge d\theta_{j}\,. \tag{5.10}\] We now consider the shape configuration space \(Q_{s}\), which has the coordinate chart \(\{t,\theta_{1},\psi_{2},\psi_{3},...,\psi_{N}\}\), where the shape parameters \(\psi_{j}=\theta_{j}-\theta_{1}\) represent the relative angular displacement of the \(N-1\) rigid bars with respect to the first bar. Since the total angular momentum \(p_{1}+p_{2}+...+p_{N}=\mathsf{A}\) is known a priori, then \(p_{1}=\mathsf{A}-p_{2}-p_{3}-\dots-p_{N}\) and the motion must occur on the subspace \(T^{*}Q_{t}/\mathsf{A}\), which has the coordinate chart \(\{t,E,\theta_{1},\psi_{2},p_{2},\psi_{3},p_{3},\dots,\psi_{N},p_{N}\}\), where \((t,E)\) and \((p_{j},\psi_{j})\) are pairs of conjugate variables. The 1-form in Eq. (5.10) reduces to \[\alpha=\mathsf{A}\,d\theta_{1}+Edt+\sum_{j=2}^{N}p_{j}\,d\psi_{j}\,, \tag{5.11}\] and the associated symplectic 2-form is written as \[d\alpha=dE\wedge dt+\sum_{j=2}^{N}dp_{j}\wedge d\psi_{j}\,.
\tag{5.12}\] The reduced phase space \(\mathcal{P}=T^{*}Q_{t}/\mathsf{A}\) has the geometric structure of a principal fiber bundle: the \(2N\)-dimensional shape manifold \(\mathcal{B}\) with coordinate chart \(\{t,E,\psi_{2},p_{2},\cdots,\psi_{N},p_{N}\}\) and transversal one-dimensional fibers \(\mathcal{F}\) with coordinate \(\{\theta_{1}\}\). The Hamiltonian vector field \(\mathbf{X}^{\prime}_{s}\) in \(\mathcal{P}=T^{*}Q_{t}/\mathsf{A}\) can be decomposed as the sum of the flow \[\mathbf{X}^{\prime}_{\mathcal{B}}=\begin{bmatrix}t^{\prime}\\ E^{\prime}\\ \psi^{\prime}_{2}\\ p^{\prime}_{2}\\ \vdots\\ \psi^{\prime}_{N}\\ p^{\prime}_{N}\end{bmatrix}\,, \tag{5.13}\] on the shape manifold \(\mathcal{B}\) and the flow \(\mathbf{X}^{\prime}_{\mathcal{F}}=\theta^{\prime}_{1}\) along the fiber \(\mathcal{F}\). The dynamics on the shape manifold \(\mathcal{B}\) is governed by the reduced (time-varying) Hamiltonian \[\mathcal{H}_{R}(t,E,\psi_{2},p_{2},\cdots,\psi_{N},p_{N})=E+\frac{1}{2I_{1}}\left[\mathsf{A}-\sum_{k=2}^{N}p_{k}\right]^{2}+\sum_{j=2}^{N}\left[\frac{p_{j}^{2}}{2I_{j}}-\mathsf{M}^{e}_{j}(t)\psi_{j}\right]+\hat{\Pi}\left(\psi_{2},\psi_{3},...,\psi_{N}\right)\,, \tag{5.14}\] and the components of the Hamiltonian vector field \(\mathbf{X}^{\prime}_{\mathcal{B}}\) are \[t^{\prime}=1\,,\quad E^{\prime}=-\frac{\partial\mathcal{H}_{R}}{\partial t}=\sum_{j=2}^{N}\frac{d\mathsf{M}^{e}_{j}}{dt}\psi_{j}\,,\quad\psi^{\prime}_{j}=p_{j}\left(\frac{1}{I_{1}}+\frac{1}{I_{j}}\right)-\frac{1}{I_{1}}\left(\mathsf{A}-\sum_{k=2,\,k\neq j}^{N}p_{k}\right)\,,\quad p^{\prime}_{j}=-\frac{\partial\hat{\Pi}}{\partial\psi_{j}}+\mathsf{M}^{e}_{j}(t)\,. \tag{5.15}\] Notice that the motion along the fiber depends on \(\mathbf{X}^{\prime}_{\mathcal{B}}\) since \[\theta^{\prime}_{1}=\frac{1}{I_{1}}\left(\mathsf{A}-\sum_{k=2}^{N}p_{k}\right)\,. \tag{5.16}\] From Eq. (5.11), we define the 1-form \(\widetilde{\alpha}=\alpha/\mathsf{A}\) and the total drift \(\theta_{1}\) along the fiber follows by integrating the form \[d\theta_{1}=\widetilde{\alpha}-\frac{E}{\mathsf{A}}dt-\sum_{k=2}^{N}\frac{p_{k}}{\mathsf{A}}\,d\psi_{k}\,, \tag{5.17}\] that is \[\theta_{1}=\int d\theta_{1}=\int_{0}^{\lambda}\widetilde{\alpha}\,d\widetilde{\lambda}-\int_{\gamma}\left(\frac{E}{\mathsf{A}}dt+\sum_{k=2}^{N}\frac{p_{k}}{\mathsf{A}}\,d\psi_{k}\right)\,, \tag{5.18}\] where \(\gamma\) is a closed trajectory of the motion on the shape manifold \(\mathcal{B}\) parameterized by \(\lambda\). Thus, \[\theta_{1}=\theta_{\text{dyn}}+\theta_{\text{geom}}\,, \tag{5.19}\] where the dynamical and geometric rotation drifts are defined as \[\theta_{\text{dyn}}(\lambda)=\int_{0}^{\lambda}\widetilde{\alpha}\,d\widetilde{\lambda},\qquad\theta_{\text{geom}}(\lambda)=-\int_{\gamma}\left(\frac{E}{\mathsf{A}}dt+\sum_{k=2}^{N}\frac{p_{k}}{\mathsf{A}}\,d\psi_{k}\right)\,. \tag{5.20}\] Here, the dynamical rotation drift \(\theta_{\text{dyn}}\) depends on the inertia of the \(N\)-pendulum and can be written as \[\theta_{\text{dyn}}(\lambda)=\int_{0}^{\lambda}\frac{2\,\mathsf{K}(\widetilde{\lambda})+E(\widetilde{\lambda})}{\mathsf{A}}\,d\widetilde{\lambda}\,,\qquad\mathsf{K}=\frac{1}{2}\sum_{j=1}^{N}\frac{p_{j}^{2}}{I_{j}}\,, \tag{5.21}\] where \(\mathsf{K}\) and \(\mathsf{A}\) are the total kinetic energy and the total angular momentum, respectively.
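As in the free case, the splitting (5.19)–(5.21) can be verified numerically along the physical flow, where \(\lambda=t\) because \(t^{\prime}=1\). A minimal sketch for \(N=2\) follows (Python/SciPy; the inertias, the quadratic potential \(\Pi_{2}=k\psi_{2}^{2}/2\), and the harmonic self-equilibrated moments are illustrative assumptions, not values from the text).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not values from the text).
I1, I2, k, A = 1.0, 0.5, 3.0, 1.0
M1e = lambda t: 0.2*np.sin(2.0*t)        # self-equilibrated: M2^e = -M1^e
dM2e = lambda t: -0.4*np.cos(2.0*t)      # d(M2^e)/dt

def rhs(t, y):
    psi2, p2, E, th1, th_dyn, th_geom = y
    p1 = A - p2
    dpsi2 = p2/I2 - p1/I1                 # Eq. (5.15)
    dp2 = -k*psi2 - M1e(t)                # -dPi_hat/dpsi2 + M2^e, with M2^e = -M1^e
    dE = dM2e(t)*psi2                     # Eq. (5.15), j = 2 term
    K = 0.5*(p1**2/I1 + p2**2/I2)
    dth1 = p1/I1                          # Eq. (5.16)
    dth_dyn = (2.0*K + E)/A               # integrand of Eq. (5.21)
    dth_geom = -(E + p2*dpsi2)/A          # integrand of Eq. (5.20), using t' = 1
    return [dpsi2, dp2, dE, dth1, dth_dyn, dth_geom]

y0 = [0.3, 0.1, 0.0, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, [0.0, 20.0], y0, rtol=1e-10, atol=1e-12)
th1, th_dyn, th_geom = sol.y[3, -1], sol.y[4, -1], sol.y[5, -1]
print(th1, th_dyn + th_geom)   # theta_1 = theta_dyn + theta_geom
```

The \(E\)-dependent terms cancel identically in the sum \(\theta_{\text{dyn}}+\theta_{\text{geom}}\), so the check is insensitive to the gauge choice of \(E(0)\); the non-trivial content is the bookkeeping of the two integrands in Eqs. (5.20) and (5.21).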
If the bars of the \(N\)-pendulum are rigidly connected and cannot change their shape, i.e., no motion on the shape manifold as \(\psi^{\prime}_{j}=0\), then the rotation drift is solely due to the inertia of the system measured by the total angular momentum and it is measured by \(\theta_{\rm dyn}\). If the \(N\)-pendulum changes its shape, i.e., the angles \(\psi_{j}\) vary over time, then the motion on \(\mathcal{B}\) also induces the geometric rotation drift \(\theta_{\rm geom}\). From Eq. (5.20), and using Stokes' theorem, \[\theta_{\rm geom}=-\int_{S(\gamma)}\left(\frac{1}{\mathsf{A}}dE\wedge\,dt+\sum_{k=2}^{N}\frac{1}{\mathsf{A}}dp_{k}\wedge\,d\psi_{k}\right)\,. \tag{5.22}\] The geometric drift is thus proportional to the area \(S(\gamma)\) enclosed by the path \(\gamma\) spanned by the motion on the shape manifold \(\mathcal{B}\). The 2-form \(dE\wedge dt\) encodes the effects of the time-dependent external moments on the geometric drift. The remaining 2-forms are the same as those of a free \(N\)-pendulum given in Eq. (4.51) and measure the effects of the pendulum shape changes. In the following, we will show that the base manifold \(\mathcal{B}\) can be endowed with a Riemannian structure. ### Curvature and intrinsic metric of the shape manifold One can interpret the geometric rotation drift in Eq. (5.22) as the curvature of the \(2N\)-dimensional shape manifold \(\mathcal{B}\) equipped with a pseudo-Riemannian metric of the following form \[ds^{2}=\epsilon_{t}\,G_{t}\,dt^{2}+\epsilon_{E}\,G_{E}\,dE^{2}+\sum_{j=2}^{N}\left[\epsilon_{p_{j}}G_{p_{j}}(dp_{j})^{2}+\epsilon_{\psi_{j}}G_{\psi_{j}}(d\psi_{j})^{2}\right]\,, \tag{5.23}\] where the \(2N\) positive metric coefficients depend on the coordinates \(\{t,E,p_{2},\psi_{2},\cdots,p_{N},\psi_{N}\}\). The signature of the manifold is \((\epsilon_{t},\epsilon_{E},\epsilon_{p_{2}},\epsilon_{\psi_{2}},\cdots)\). The metric coefficients will be calculated using Cartan's structural equations as follows. From Eq. (5.22) the geometric drift follows by integrating the 2-form \[d\,\widetilde{\alpha}=d\,\widetilde{\alpha}_{t}+\sum_{j=2}^{N}d\,\widetilde{\alpha}_{j}\,, \tag{5.24}\] where \(d\,\widetilde{\alpha}_{t}=-\frac{1}{\mathsf{A}}dE\wedge\,dt\), and \(d\,\widetilde{\alpha}_{j}=-\frac{1}{\mathsf{A}}dp_{j}\wedge\,d\psi_{j}\). The associated 1-forms are \[\widetilde{\alpha}_{t}=-\frac{E}{\mathsf{A}}dt\,,\quad\widetilde{\alpha}_{j}=-\frac{p_{j}}{\mathsf{A}}d\psi_{j}\,,\quad j=2,\cdots,N\,. \tag{5.25}\] Drawing on Cartan's structural equations, the 2-forms \(d\,\widetilde{\alpha}_{t}\) and \(d\,\widetilde{\alpha}_{j}\) are interpreted as the only non-zero curvature 2-forms of the \(2N\)-dimensional shape manifold \(\mathcal{B}\). We relabel the pair \((E,t)\) as \((p_{1},\psi_{1})\) so that \(d\,\widetilde{\alpha}\) is written as \[d\,\widetilde{\alpha}=\sum_{j=1}^{N}d\,\widetilde{\alpha}_{j}\,, \tag{5.26}\] where we set \(\widetilde{\alpha}_{1}=\widetilde{\alpha}_{t}\). Comparing with Eqs. (4.30) and (4.31), \(\widetilde{\alpha}_{j}\) and \(d\,\widetilde{\alpha}_{j}\) can be interpreted as the connection and curvature forms of a free \((N+1)\)-pendulum. Thus, we can use the results we obtained in §4.1. For the forced \(N\)-pendulum, the shape manifold \(\mathcal{B}\) has dimension \(2N\). It is reducible since it is the product manifold of \(N\) submanifolds (hyper-planes) \(\mathcal{B}_{j}\) with coordinate charts \(\{\psi_{j},p_{j}\}\), \(j=1,\ldots N\). From Eq.
(4.49), the metric of \(\mathcal{B}\) is written as \[\mathbf{G}=\mathbf{G}_{1}\times\ldots\times\mathbf{G}_{N}=\sum_{j=1}^{N}\left[\frac{\operatorname{sgn}(\mathsf{A})}{p_{j}^{2}+\mu_{j}^{2}}\,dp_{j}\otimes dp_{j}+\frac{p_{j}^{2}+\mu_{j}^{2}}{\mathsf{A}^{2}}\,d\psi_{j}\otimes d\psi_{j}\right]\,, \tag{5.27}\] where \(\mu_{j}\) are arbitrary parameters. Since \(p_{1}=E\) and \(\psi_{1}=t\), then \(\mathsf{G}_{p_{1}}=\mathsf{G}_{E}\) and \(\mathsf{G}_{\psi_{1}}=\mathsf{G}_{t}\), and the intrinsic metric of each submanifold \(\mathcal{B}_{j}\) follows from Eq. (4.48) as \[\mathbf{G}_{1}=\frac{\operatorname{sgn}(\mathsf{A})}{E^{2}+\mu_{1}^{2}}\,dE\otimes dE+\frac{E^{2}+\mu_{1}^{2}}{\mathsf{A}^{2}}\,dt\otimes dt\,, \tag{5.28}\] \[\mathbf{G}_{j}=\frac{\operatorname{sgn}(\mathsf{A})}{p_{j}^{2}+\mu_{j}^{2}}\,dp_{j}\otimes dp_{j}+\frac{p_{j}^{2}+\mu_{j}^{2}}{\mathsf{A}^{2}}\,d\psi_{j}\otimes d\psi_{j}\,,\quad j=2,\cdots,N\,. \tag{5.29}\] Then \[\mathbf{G}=\frac{\operatorname{sgn}(\mathsf{A})}{E^{2}+\mu_{1}^{2}}\,dE\otimes dE+\frac{E^{2}+\mu_{1}^{2}}{\mathsf{A}^{2}}\,dt\otimes dt+\sum_{j=2}^{N}\left[\frac{\operatorname{sgn}(\mathsf{A})}{p_{j}^{2}+\mu_{j}^{2}}\,dp_{j}\otimes dp_{j}+\frac{p_{j}^{2}+\mu_{j}^{2}}{\mathsf{A}^{2}}\,d\psi_{j}\otimes d\psi_{j}\right]\,. \tag{5.30}\] Similar to that of the free \(N\)-pendulum, the shape manifold is the product manifold of Robertson-Walker spacetimes (\(\mathsf{A}<0\)) or hyperbolic planes (\(\mathsf{A}>0\)). The geometric drift follows by integrating the 2-form \[d\,\widetilde{\alpha}={\cal R}_{t}^{E}({\bf e}_{E},{\bf e}_{t})+\sum_{j=2}^{N}\,{\cal R}_{\psi_{j}}^{p_{j}}({\bf e}_{p_{j}},{\bf e}_{\psi_{j}})\,, \tag{5.31}\] which is the sum of the curvature 2-forms of each submanifold \({\cal B}_{j}\), that is \[\begin{split}\theta_{\rm geom}&=\int_{S(\gamma)}d\,\widetilde{\alpha}=\int_{S(\gamma)}\left[{\cal R}_{t}^{E}({\bf e}_{E},{\bf e}_{t})+\sum_{j=2}^{N}\,{\cal R}_{\psi_{j}}^{p_{j}}({\bf e}_{p_{j}},{\bf e}_{\psi_{j}})\right]\\ &=-\int_{S(\gamma)}\left[\frac{1}{\mathsf{A}}dE\wedge\,dt+\sum_{j=2}^{N}\frac{1}{\mathsf{A}}dp_{j}\wedge\,d\psi_{j}\right]\,,\end{split} \tag{5.32}\] where each term is both the oriented area and curvature of the projected path \(\gamma\) on the hyper-planes \({\cal B}_{j}\) and \({\cal B}_{t}\) with coordinates \(\{\psi_{j},p_{j}\}\) and \(\{t,E\}\), respectively. ## 6 Conclusions We investigated the geometric phase of nonlinear planar \(N\)-pendula with continuous rotational symmetry in the Hamiltonian framework. The geometric structure of the phase space is a principal fiber bundle, i.e., a base, or shape, manifold \({\cal B}\), and fibers \({\cal F}\) along the symmetry direction attached to it. The connection and curvature forms of the shape manifold are defined by the symplectic structure of the Hamiltonian dynamics. Then, Cartan's moving frames provide the means to derive an intrinsic metric structure for \({\cal B}\). This characterizes the kinematically admissible shape deformations of the pendula. An orbit on \({\cal B}\) is a succession of infinitesimal changes in the shape of the pendulum from an initial configuration to another. If the pendulum returns to its initial shape, the orbit is closed and the area (or curvature) spanned by it measures the induced geometric rotation drift. We first studied the geometric phase of a nonlinear planar double pendulum that conserves the total angular momentum \({\sf A}\). If \({\sf A}<0\) we found that the metric is pseudo-Riemannian and the shape manifold is an expanding spacetime described by the Robertson-Walker metric with positive curvature.
For \({\sf A}>0\), the shape manifold is the hyperbolic plane \(\mathbb{H}^{2}\) with negative curvature. We then generalized these results to nonlinear planar \(N\)-pendula. We found that the associated shape manifold \({\cal B}\) is reducible since it is the product manifold of \((N-1)\) hyperbolic planes \(\mathbb{H}^{2}\) (\({\sf A}>0\)), or Robertson-Walker 2D spacetimes (\({\sf A}<0\)). We then considered \(N\)-pendula subject to time-dependent self-equilibrated moments. The geometric phase is studied in the extended autonomous Hamiltonian framework. The associated shape manifold is still reducible as the product space of \(N\) hyperbolic planes \(\mathbb{H}^{2}\) (\({\sf A}>0\)), or Robertson-Walker 2D spacetimes (\({\sf A}<0\)). We found that the Riemannian structure of the shape manifold depends on the sign of the conserved total angular momentum. The geometric phase is evaluated by the same 2-form given by the sum of the sectional curvature forms of \({\cal B}\). The intrinsic metric allows one to quantify the similarity of a shape \(S_{1}\), or point on \({\cal B}\), to another point, or shape \(S_{2}\), by measuring the intrinsic geodesic distance between the two points in terms of curvature, or induced geometric phase. The Euclidean metric would give misleadingly shorter distances between the two shapes. This is because it is not an intrinsic structure that follows from the dynamics. Thus, low-momentum shapes are far apart from high-momentum shapes. If \(\mathsf{A}<0\), the shape manifold is an expanding spacetime universe and the two different shapes are 'red-shifted' as very far away from each other. If \(\mathsf{A}>0\), the shape manifold has the character of the hyperbolic plane and the two shapes appear far away from each other as the difference of their momenta becomes larger. Furthermore, the intrinsic distance between shapes is relevant for measuring how close an orbit is to the stable/unstable sub-manifolds of fixed points of the dynamics on the shape manifold. In future work, we will use Cartan's moving frames to derive an intrinsic metric for the shape manifold of Navier-Stokes turbulence with continuous translational symmetry, or turbulent channel flows [Fedele et al., 2015]. To unveil the _shape of turbulence_ one needs to quotient out the translation symmetry of the Navier-Stokes equations. This can be achieved, for example, by means of a physically meaningful slice or chart representation of the quotient space or shape manifold [Budanur et al., 2015, Cvitanovic et al., 2012, Fedele et al., 2015]. To measure how close one vortical shape is to another, the standard Euclidean metric is typically used. An important conclusion of our present study is that the similarities of shapes should be measured by a metric intrinsic to the shape manifold. Other non-intrinsic distances are misleading as they do not account for the curvature, or induced geometric phase. ## Acknowledgement This work was partially supported by NSF Grant No. CMMI 1939901 and ARO Grant No. W911NF-18-1-0003.
2308.12771
Non-reciprocal coherent all-optical switching between magnetic multi-states
We present experimental and computational findings of the laser-induced non-reciprocal motion of magnetization during ultrafast photo-magnetic switching in garnets. We found distinct coherent magnetization precession trajectories and switching times between four magnetization states, depending on both directions of the light linear polarization and initial magnetic state. As a fingerprint of the topological symmetry, the choice of the switching trajectory is governed by an interplay of the photo-magnetic torque and magnetic anisotropy. Our results open a plethora of possibilities for designing energy-efficient magnetization switching routes at arbitrary energy landscapes.
T. Zalewski, V. Ozerov, A. Maziewski, I. Razdolski, A. Stupakiewicz
2023-08-24T13:18:54Z
http://arxiv.org/abs/2308.12771v1
# Non-reciprocal coherent all-optical switching between magnetic multi-states ###### Abstract We present experimental and computational findings of the laser-induced non-reciprocal motion of magnetization during ultrafast photo-magnetic switching in garnets. We found distinct coherent magnetization precession trajectories and switching times between four magnetization states, depending on both directions of the light linear polarization and initial magnetic state. As a fingerprint of the topological symmetry, the choice of the switching trajectory is governed by an interplay of the photo-magnetic torque and magnetic anisotropy. Our results open a plethora of possibilities for designing energy-efficient magnetization switching routes at arbitrary energy landscapes. ## Introduction Understanding the interplay between topology, crystal symmetry, and the free energy landscape is key for the manipulation of magnetic moments in complex systems. Control of spin dynamics at ultrashort timescales by various stimuli has recently become a vibrant research direction revolving around physical mechanisms acting on magnetization as well as their dynamical characteristics [1; 2; 3; 4]. Steering the magnetization along energy-efficient routes across the magnetic anisotropy landscape requires the highly sought-after ability to control the torque. Multiple mechanisms have been suggested, including current modulation in spin-orbit torque systems [5; 6], employing ultrashort acoustic pulses [7; 8; 9], and utilizing coherent phonon-magnon coupling [10; 11; 12; 13]. In general, among the variety of methods, the ability to manipulate magnetization solely using laser pulses holds tremendous potential for future technologies, facilitating the fastest-ever data recording with minimal heat dissipation. On top of essentially thermal mechanisms of all-optical magnetization switching [14; 15], of particular interest is the non-thermal photo-magnetic excitation where spin-orbit interaction mediates the modification of the spin energy landscape through absorption of optical radiation. Over the last decades, a solid body of knowledge has been accumulated aiming at bringing photo-magnetism onto the ultrafast timescale and separating it from concomitant thermal and inverse magneto-optical effects [16; 17; 18; 19; 20; 21]. Owing to its non-thermal nature, the dynamics of photo-magnetism, in which magnetization is not quenched upon pulsed laser irradiation, heavily relies on the magnetization precession. The latter is usually described by the Landau-Lifshitz-Gilbert (LLG) dynamics with a time-dependent effective magnetic field [22]. This highlights the importance of the topography of a magnetic landscape, which could play a key role in determining the magnetization switching conditions as well as the relevant dynamical characteristics. Materials with a cubic magnetic symmetry usually have a large number of degenerate energy minima, and smaller angles between them facilitate the nonlinear regime of large-angle magnetization precession [23] and eventually all-optical switching [24]. Since the magnetization is set into motion through an ultrafast modification of the energy landscape, it is of utter importance to understand the optical response of the latter in detail. This dynamical topography can then be used advantageously, for example, by reducing the energy consumption or accelerating the switching dynamics.
The latter is particularly relevant in light of the recently demonstrated controllable magnetization reversal between two equilibrium states at the picosecond timescale [24]. The open questions that we address in this work pertain to the detailed analysis of the trajectories of the magnetization switching as well as the anatomy of the photo-magnetic excitation in high-symmetry media. Employing a time-resolved magneto-optical imaging technique, we get valuable insights into the coherent dynamics of simultaneous switching of coexisting magnetic states by a single laser pulse. We demonstrate the existence of two non-equivalent switching trajectories in cubic photo-magnetic yttrium iron garnet with Co ions (YIG:Co), where the forward and backward routes of magnetization can be chosen with the polarization of light. We further highlight the key role of an interplay between the torque and magnetic symmetries in the laser-induced motion of spins in co-existing magnetic domains. We argue that the apparent (almost twofold) difference in the switching times between the two pairs of states represents another fingerprint of the torque symmetry exhibiting coherent magnetization switching. ## Results ### Time-resolved multi-states magnetization switching. The photo-magnetic YIG:Co(001) film is characterized by the predominantly cubic magnetic anisotropy with eight energy minima, with magnetization oriented close to the diagonals of the cubic unit cell [25, 26] (see Supplemental Material). Among those minima, we focus on four (Fig. 1a) which exhibit the full richness of the magnetization dynamics upon photo-magnetic excitation with a single 50 fs pump laser pulse. To achieve the highest photo-magnetic efficiency of the switching [27], the pump wavelength was set to 1300 nm (see SM). In contrast to our previous observations [24], here we employed the highly sensitive time-resolved single-shot magneto-optical imaging technique [28] which allows for simultaneous observation of magnetization dynamics in coexisting magnetic phases and retrieval of both the spatial and temporal dynamics of multi-domain states. In particular, we monitored the laser-induced dynamics of the magnetization in domains depicted in Fig. 1a through transient variations of the Faraday rotation of the defocused probe beam at 650 nm wavelength. To improve the signal-to-noise ratio, background images with the pump beam blocked were subtracted at each time delay, thus creating the presented stack of differential images of the normal [001] magnetization projection \(M_{z}\). Selecting regions of interest within the borders of individual domains and integrating the variations over them, we obtained a set of traces corresponding to the laser-induced \(\Delta M_{z}\) dynamics in each of the magnetic states as a function of the optical delay between the pump and probe pulses \(\Delta t\) (see SM). In the image sequence, complemented by a sketch of the magnetization states in Figs. 1b and 1c, we show that the magnetization switching occurs in both large (M\({}_{1}\) and M\({}_{8}\)) and small (M\({}_{4}\) and M\({}_{5}\)) domains simultaneously. We note that the pattern of the small domains remains constant during the entire experiment, thus confirming the non-thermal nature of the photo-magnetic switching. Interestingly, after about 30 ps the magnetization starting from the M\({}_{1}\) direction reaches a transient orientation perpendicular to the sample plane.
This is in contrast with the previously reported [24] magnetization switching between the canted states in YIG:Co via the in-plane direction. A typical example of the experimental \(\Delta M_{z}(\Delta t)\) traces in the two states with opposite normal magnetization components is shown in Fig. 1d, illustrating the switching along the M\({}_{8}\)\(\rightarrow\)M\({}_{1}\) and M\({}_{4}\)\(\rightarrow\)M\({}_{5}\) routes with the \(E\parallel[100]\) pump pulses. It is seen that the trajectories are clearly asymmetric. The sketch in Fig. 1c illustrates the rotation of the magnetization pair in adjacent domains (red and blue arrows), where the transient variations of the angle between them are inextricably linked to the inequality of the two trajectories.

Figure 1: **Multi-states magnetization switching in a cubic magnetic system.****a** Images of the multi-state magnetic domain structure and orientation of the easy magnetization axes in YIG:Co films. **b** The differential stack of time-resolved images illustrating the transient dynamics of photo-magnetic switching in YIG:Co. The experiment involved the use of a single 50 fs pump pulse with a fluence of 50 mJ cm\({}^{-2}\) and polarization along the \(E\parallel\) [100] direction in the garnet. **c** The sketch illustrates the motion and rotation of the magnetization in the domains for different delay times \(\Delta t\). **d** The normalized out-of-plane component of magnetization \(M_{z}\) for the simultaneous switching between four magnetic states (M\({}_{8}\)\(\rightarrow\)M\({}_{1}\) and M\({}_{4}\)\(\rightarrow\)M\({}_{5}\)).

Similar measurements performed when all eight different magnetic states were prepared and illuminated with light at one of the two orthogonal polarizations (\(E\parallel\) [100] and \(E\parallel\) [010] directions) revealed that only two unique time traces of \(\Delta M_{z}(\Delta t)\) can be observed (see SM). Thus, all switching trajectories can be assigned to either type I or type II, depending on the initial magnetization state and pump polarization. In Fig. 2a we show the averaged traces of both types for the orthogonal pump polarizations along the [100] and [010] directions, and the inequivalence of these trajectories becomes strikingly apparent. This observation is highly surprising in light of the fourfold crystalline symmetry and magnetic anisotropy of the garnet. To understand the physical origins of the dissimilarity of the two trajectories, we decomposed them into even and odd contributions. These are shown in Fig. 2b with green and black full data points, respectively. Notably, the even part closely resembles the step-like switching trace, averaged over multiple domains, reported previously [24]. It is seen that by 100 ps the switching is already completed, and the remaining minute variations are related to the relaxation of magnetization within the corresponding potential minimum. On the other hand, the odd part exhibits a strongly damped oscillatory character, highlighting the precessional character of the switching dynamics in general. It is the odd contribution that is responsible for the non-reciprocal switching seen in Fig. 2a: the switching along the M\({}_{\alpha}\)\(\rightarrow\)M\({}_{\gamma}\) route and back proceeds along different trajectories. We argue that the odd, non-reciprocal contribution can be understood as a fingerprint of the interplay between the torque symmetry and the coherent magnetization dynamics. The solid line in Fig. 2b shows the (scaled) time derivative of the even contribution. 
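To make this decomposition and comparison concrete, the following minimal numpy sketch splits two switching trajectories into even and odd parts and compares the odd part with the scaled time derivative of the even part. The traces are synthetic stand-ins for the measured data, deliberately constructed so that the odd part follows the derivative of the even part.

```python
import numpy as np

def even_odd(trace_a, trace_b):
    """Split two switching trajectories into the common (even) and
    antisymmetric (odd) contributions."""
    return 0.5 * (trace_a + trace_b), 0.5 * (trace_a - trace_b)

# Synthetic stand-ins for the two measured Delta M_z traces (NOT real data),
# built so that the odd part tracks d(even)/dt, mimicking Fig. 2b.
t = np.linspace(0.0, 300.0, 600)        # pump-probe delay, ps
step = np.tanh(t / 40.0)                # step-like, completed switching
prec = np.gradient(step, t)             # precessional (rotational) contribution
prec = 0.3 * prec / np.abs(prec).max()
trace_I = step + prec                   # type I trajectory
trace_II = step - prec                  # type II trajectory

even, odd = even_odd(trace_I, trace_II)
d_even = np.gradient(even, t)
scale = np.abs(odd).max() / np.abs(d_even).max()
print(np.corrcoef(odd, scale * d_even)[0, 1])   # ~1: odd part tracks d(even)/dt
```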
The striking resemblance of this derivative to the odd part suggests that the latter originates in the precession of magnetization as it moves along the switching route. This results in the emergence of an orthogonal component \(d\mathbf{M}/dt\), since \(\mathbf{M}\cdot d\mathbf{M}/dt=0\). The sign of this orthogonal component is governed by the direction of movement along the switching route, thus giving rise to the odd contribution to the magnetization dynamics. In other words, a new curvilinear reference frame can be introduced where the even contribution depicting the genuine switching route is bereft of any rotational part. In such a reference frame, the LLG equation will be transformed accordingly, acquiring an additional term similar to the mechanical equations of motion in a rotating reference frame. This additional term can be viewed as a fictitious magnetic field giving rise to the magnetization dynamics in the direction orthogonal to the switching route, or the aforementioned odd contribution. The characteristic shape of the normal projection of this contribution is then determined by the rate of change of \(M_{z}\) along the switching route, in agreement with Fig. 2b.

Figure 2: **Non-reciprocity of all-optical magnetization switching.****a** Effect of the pump polarization on the magnetization trajectory, enabling selection of the switching trajectory and final state for orthogonal orientations of the laser pump polarization along the [100] and [010] directions in the YIG:Co film. **b** The differential signal of the switched states (closed black points) compared with the measured magnetization precession (open points) below the switching threshold, together with the even and odd contributions of the photo-magnetic switching. The green solid line is the scaled derivative of the even component of the photo-magnetic switching.

### Photo-magnetic torque symmetry. To get further insight into the laser-induced asymmetry of the magnetization switching trajectories, we performed a tensor analysis of the photo-magnetic excitation along the lines discussed in [24]. In particular, the photo-magnetic Hamiltonian contribution reads: \[\mathcal{H}_{p-m}=\hat{\beta}\mathbf{EE^{*}MM}, \tag{1}\] where \(\mathbf{E}\) is the electric field of the optical pump, and \(\hat{\beta}\) is a fourth-order polar tensor responsible for the photo-magnetic susceptibility [27]. In the reference frame aligned with the main crystal axes, the initial (equilibrium) magnetization \(\mathbf{M}\) has all three non-zero components \((M_{x},M_{y},M_{z})\). Our analysis shows that the asymmetry of the trajectories is inextricably linked to the cubic symmetry of the photo-magnetic medium. In particular, we consider a cubic garnet crystal with an \(m3m\) point group symmetry. Assuming normal pump incidence so that \(E_{x}\parallel[100]\), \(E_{y}\parallel[010]\), \(E_{z}=0\), only three non-zero \(\hat{\beta}\) components remain relevant in our case (see SM). The photo-magnetic excitation of spin dynamics is mediated by the effective photo-magnetic field \(\mathbf{H_{L}}=-\frac{\partial\mathcal{H}_{p-m}}{\partial\mathbf{M}}\). It then exerts a torque \(\mathbf{T}\) on the magnetization, thus setting it into motion according to the LLG formalism: \(\frac{\partial\mathbf{M}}{\partial t}=-\mathbf{M}\times\mathbf{H_{L}}\). 
Directly after the optical excitation, the out-of-plane laser-induced magnetization dynamics takes the following form: \[\frac{\partial M_{z}}{\partial t}=\beta^{\prime}\big{(}E_{x}^{2}-E_{y}^{2}\big{)}M_{x}M_{y}+2\beta^{\prime\prime}E_{x}E_{y}\big{(}M_{x}^{2}-M_{y}^{2}\big{)}, \tag{2}\] Here \(\beta^{\prime}\) and \(\beta^{\prime\prime}\) are linear combinations of the non-zero \(\beta\) components: \(\beta^{\prime}\equiv(\beta_{3}-\beta_{1})\), \(\beta^{\prime\prime}\equiv 2\beta_{2}\) (see SM). Two main conclusions can be drawn from this result. First, consider the two initial magnetic states \(\mathbf{M}_{8}\) and \(\mathbf{M}_{4}\), as indicated in Fig. 1, that is, with identical in-plane magnetization components but opposite out-of-plane ones, \(M_{z}^{8}=-M_{z}^{4}\). Without loss of generality, we refer to them as the "up" and "down" states, although the choice of up and down directions is arbitrary. Then, since it is even with respect to \(M_{z}\), the normal component of the magnetization dynamics \(\frac{\partial M_{z}}{\partial t}|_{0}\) will be the same for these two states. In other words, the "up" state will gain momentum towards the final "down" state of the switching process, whereas the "down" state will start its motion in the direction away from its destination. It is thus seen that the asymmetry of the switching trajectories is introduced immediately after the photo-magnetic excitation and is governed by the tensor nature of the photo-magnetic effect and the cubic crystal symmetry. Secondly, consider an excitation with linearly polarized light. Because the uniaxial anisotropy in our garnet film is much weaker than the cubic one, the equilibrium magnetization directions are close to the diagonals of the cubic unit cell. We can thus assume \(M_{x}\approx M_{y}\) so that the first term in the normal torque component dominates. As such, the in-plane symmetry of the excitation is given by the \(E_{x}^{2}-E_{y}^{2}\) term, reducing to \(E^{2}\cos 2\varphi\), where \(\varphi\) is the angle of the light polarization. Effectively, the in-plane symmetry of the photo-magnetic switching is lowered from _four-fold_ to _two-fold_, which can be illustrated by rotating the polarization of the incident light from the [100] to the [010] direction. This leads to the reversal of the switching asymmetry: the normal component of the torque changes its sign, and thus the switching trajectories of the two considered initial magnetic states will be flipped. All these implications are in striking agreement with the experimental findings.

### Modeling the trajectories of photo-magnetic switching. Employing the extended LLG model of photo-magnetic switching [27], in which laser pulses act as a perturbation to the free energy density, we simulated the trajectories for the multi-states magnetization switching in a real YIG:Co(001) thin film. In the simulations, a tetragonal distortion of the cubic magnetic symmetry has been introduced (see SM). In Fig. 3a, the trajectories of magnetization switching induced by a normally incident laser pulse with the electric field \(E\) are shown on the energy map, highlighting the polarization selectivity. The variations of \(M_{z}\) observed experimentally are summarized in Fig. 3b. Notably, the difference in the movement rates of the two simultaneously precessing magnetization vectors is enabled by the lack of coupling between them. 
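For a qualitative feel for such simulations, the sketch below integrates an LLG equation with a cubic anisotropy plus a simplified, short-lived photo-induced anisotropy field along the pump polarization. This is a hedged illustration with assumed parameters, not the authors' simulation code; the field model is chosen only so that the initial torque reproduces the sign structure of Eq. (2).

```python
import numpy as np

gamma_g, damping = 1.0, 0.15   # gyromagnetic ratio and Gilbert damping (arb. units)
K1 = -0.05                     # cubic anisotropy constant; K1 < 0 -> <111> easy axes
H0, tau_L = 0.4, 20.0          # photo-field amplitude (arb. units) and lifetime (ps)

def h_eff(m, t, e_pol):
    mx, my, mz = m
    # Cubic anisotropy field from E_c = K1 (mx^2 my^2 + my^2 mz^2 + mz^2 mx^2).
    h_anis = -2.0 * K1 * np.array([mx * (my ** 2 + mz ** 2),
                                   my * (mz ** 2 + mx ** 2),
                                   mz * (mx ** 2 + my ** 2)])
    # Transient photo-induced anisotropy, easy axis along the pump polarization;
    # for M ~ [111] its torque gives dMz/dt|_0 ~ +Mx*My (E || [100]) and
    # ~ -Mx*My (E || [010]), mirroring the sign structure of Eq. (2).
    h_photo = 2.0 * H0 * np.dot(e_pol, m) * e_pol * np.exp(-t / tau_L)
    return h_anis + h_photo

def run(e_pol, m0, dt=0.02, steps=30000):
    """Explicit-Euler LLG integration with renormalization (crude but adequate)."""
    m, traj = np.array(m0, dtype=float), []
    for i in range(steps):
        h = h_eff(m, i * dt, e_pol)
        dm = -gamma_g * (np.cross(m, h) + damping * np.cross(m, np.cross(m, h)))
        m = m + dt * dm
        m /= np.linalg.norm(m)   # |M| conserved: non-thermal, precessional dynamics
        traj.append(m.copy())
    return np.array(traj)

m0 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)     # start near a <111> easy axis
traj_100 = run(np.array([1.0, 0.0, 0.0]), m0)     # pump polarization E || [100]
traj_010 = run(np.array([0.0, 1.0, 0.0]), m0)     # pump polarization E || [010]
print(traj_100[-1], traj_010[-1])                 # the two final states may differ

# Normalized distance to the route destination, in the spirit of Eq. (3) below.
D = np.linalg.norm(traj_100 - traj_100[-1], axis=1)
D = D / D[0]
```

With sufficiently strong `H0` the magnetization can cross into a neighbouring minimum; whether it does, and along which route, depends on the assumed parameters.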
The lack of coupling is further confirmed in the simulations, where the angle between the two magnetization vectors (bottom panel) exhibits strong variations during the switching. Because the magneto-dipole interaction between adjacent domains is weak and manifests itself on the nanosecond timescale at best, the two magnetization vectors move independently. It is apparent that the model captures the pronounced asymmetry of the switching trajectories, thus paving the way for further analysis. In particular, we turn to the apparent dissimilarity of the characteristic timescales when the magnetization takes either one of the two available switching routes. To quantify this, we calculated the evolution of the normalized distance to the route destination as \[D=\sqrt{\sum_{i}\left(M_{i}(t)-M_{i}^{f}\right)^{2}}\,/D_{0}, \tag{3}\] where \(M_{i}^{f}\) are the magnetization components of the final point on the trajectory and \(D_{0}\) is the initial distance to the destination.

Figure 3: **Trajectories of magnetization dynamics on a fourfold-symmetric energy landscape.****a** The simulated energy map with multi-states switching for different initial states. **b** The trajectories of the normalized out-of-plane component \(M_{z}\) with variations of the angle between the moving vectors. **c** The evolution of the distance to destination \(D\), highlighting type I and type II trajectories (dashed grey lines are linear fits). The red-shaded area shows the photo-induced anisotropy field \(H_{L}(t)\) with a lifetime of about 20 ps. **d** The full simulated energy map showing the different magnetization trajectories induced by the \(E\parallel[100]\) or \(E\parallel[010]\) light pulses. **e** Non-reciprocity of the all-optical switching caused by the change in the light pulse polarization plane (forward and backward switching). **f** Differential (odd) contribution to the \(E\parallel[100]\) and \(E\parallel[010]\) trajectories, responsible for the non-reciprocity, compared with the experimental data.

A striking, almost twofold difference in the movement rates along the two trajectories (Fig. 3c) corroborates their inequivalence and enables the qualitative distinction between the faster and slower (type I and type II, as above) switching routes. Lastly, the simulations reproduce the non-equivalence of the two types of trajectories in terms of transient variations of the normal magnetization component \(\Delta M_{z}(\Delta t)\) (Fig. 3e). Depending on the polarization (parallel to either [100] or [010]) of the optical pump pulse, the magnetization selects the fast or slow trajectory towards its destination, in agreement with the experimentally found non-reciprocity (cf. Fig. 2a). Similarly, the simulated odd component in Fig. 3f exhibits an oscillating character, again correlating well with the experimental observations and reinforcing our understanding of its origin in the sign of the initially produced photo-magnetic torque. ## Discussion The apparent non-reciprocity of the photo-magnetic switching routes is enabled by two main factors. Firstly, it is only possible when the switching is precessional in nature, as opposed to thermal mechanisms accompanied by magnetization quenching. Secondly, it requires an intricate interplay of the excitation tensor symmetry and the magnetic energy landscape. 
In particular, in a uniaxial magnetic system the observed asymmetry is impossible, which makes it difficult to imagine the discussed non-reciprocity in amorphous alloys, where the dominant uniaxial contribution to the magnetic anisotropy is often governed by the growth direction. Cubic magnetic crystals thus represent an attractive class of materials for exploring the richness of the non-thermal excitation of non-reciprocal magnetization dynamics. They constitute a promising playground for engineering magnetic energy landscapes in order to achieve faster and more efficient unidirectional switching. For instance, varying the cubic and the uniaxial growth-induced anisotropy enables engineering materials with a spatial distribution of states with different energy thresholds, which can dramatically reduce the switching energy. The approach developed in our work can be extended towards more sophisticated systems featuring additional external stimuli and physically rich interaction mechanisms. Notably, thermal demagnetization can be accounted for on an equal footing by introducing the Landau-Lifshitz-Bloch formalism. Moreover, there is tremendous potential in utilizing strain (either \(dc\) or phonon-induced) or auxiliary magnetic and electric fields to control the effective field of the anisotropy. Taking advantage of the extremely large phase space with these fields as control parameters, the switching between multiple domain states holds promise for the advancement of multi-level magnetic memories. Specifically, the inequality of the switching trajectories indicates the possibility of simultaneously steering a set of magnetizations from one combination of states to another with a single stimulus, realizing a multi-dimensional magnetization switching. By encoding information not only in binary form but in a set of magnetization states (e.g. logical combinations of 00, 01, 10, 11, etc.), a greater number of distinct levels can be achieved, allowing for higher data storage density.
2309.02893
On the distribution of sequences of the form $(q_ny)$
We study the distribution of sequences of the form $(q_ny)_{n=1}^\infty$, where $(q_n)_{n=1}^\infty$ is some increasing sequence of integers. In particular, we study the Lebesgue measure and find bounds on the Hausdorff dimension of the set of points $\gamma \in [0,1)$ which are well approximated by points in the sequence $(q_ny)_{n=1}^\infty$. The bounds on Hausdorff dimension are valid for almost every $y$ in the support of a measure of positive Fourier dimension. When the required rate of approximation is very good or if our sequence is sufficiently rapidly growing, our dimension bounds are sharp. If the measure of positive Fourier dimension is itself Lebesgue measure, our measure bounds are also sharp for a very large class of sequences. We also give an application to inhomogeneous Littlewood type problems.
S. Kristensen, T. Persson
2023-09-06T10:30:46Z
http://arxiv.org/abs/2309.02893v2
# On the distribution of sequences of the form \((q_{n}y)\) ###### Abstract. We study the distribution of sequences of the form \((q_{n}y)_{n=1}^{\infty}\), where \((q_{n})_{n=1}^{\infty}\) is some increasing sequence of integers. In particular, we study the Lebesgue measure and find bounds on the Hausdorff dimension of the set of points \(\gamma\in[0,1)\) which are well approximated by points in the sequence \((q_{n}y)_{n=1}^{\infty}\). The bounds on Hausdorff dimension are valid for almost every \(y\) in the support of a measure of positive Fourier dimension. When the required rate of approximation is very good or if our sequence is sufficiently rapidly growing, our dimension bounds are sharp. If the measure of positive Fourier dimension is itself Lebesgue measure, our measure bounds are also sharp for a very large class of sequences. We also give an application to inhomogeneous Littlewood type problems.

2020 Mathematics Subject Classification: 11J83, 28A78

We thank Niclas Technau for pointing out to us that the paper [2] contains a result that implies a previous version of our Theorem 1. This led to the present and stronger version of Theorem 1. SK's research is supported by the Independent Research Fund Denmark (Grant ref. 1026-00081B) and the Aarhus University Research Foundation (Grant ref. AUFF-E-2021-9-20).

Of course, this set is equal to the set of numbers for which the sequence of partial quotients in the simple continued fraction expansion is bounded, see e.g. [9]. In fact, this was shown to hold for \(\mu\)-almost every \(y\) in the support of a probability measure \(\mu\) of positive Fourier dimension, a notion which we will define along with the notion of Hausdorff dimension in the next section. In fact, the right-hand side of (2) was improved by Chow and Zafeiropoulos [3] to \((\log\log\log q)^{\epsilon+1/2}/(\log q)^{1/2}\), though one should note that the power of the logarithm in the denominator is \(1/2\). This is reflected in the exponent \(\alpha\) in the definition of the set \(W_{y,\alpha}\), and it appears to be very difficult to prove properties of this set for \(\alpha\geq 1/2\) as remarked in [12]. Indeed, it follows from the work of Technau and Zafeiropoulos [14] that methods involving the discrepancy of sequences of the form \((q_{n}\alpha)_{n=1}^{\infty}\) will not accomplish this. We are able to obtain some results for \(\alpha\) in the range between \(1/2\) and \(1\), although we suspect that these are not best possible. For some non-lacunary (and for lacunary) sequences \((q_{n})\) we are able to prove that for \(\mu\)-almost every \(y\) in the support of a probability measure of positive Fourier dimension, the set \(W_{y,\alpha}\) has positive Lebesgue measure. If the sequence \((q_{n})\) is lacunary, then Chow and Technau [2] showed that \(W_{y,\alpha}=[0,1)\) for \(\mu\)-almost every \(y\). If \(\mu\) is the Lebesgue measure, we are able to relax the growth condition on the sequence \((q_{n})\) to being just a little more than linear, while at the same time proving that the Lebesgue measure of \(W_{y,\alpha}\) is equal to \(1\). Of course, the result of Chow and Technau implies this and more when \((q_{n})\) is lacunary, but our result allows for less restrictive growth conditions. For \(\alpha>1\), an application of the Borel-Cantelli lemma immediately shows that the set is Lebesgue null for all \(y\). However, it is not usually empty, and in this note we will derive some properties of the set \(W_{y,\alpha}\) in the \(\alpha>1\) range. 
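As a purely illustrative aside (not part of the paper's arguments), one can count approximation events numerically. The sketch below uses exact rational arithmetic, since floating point loses all precision for \(q_{n}\) as large as \(2^{n}\); taking \(y\) and \(\gamma\) to be random rationals with a large odd denominator is an assumption standing in for typical points, since the resulting orbits are periodic only with an astronomically long period.

```python
from fractions import Fraction
import random

# Count n <= N with ||q_n * y - gamma|| < n**(-alpha), for a lacunary and a
# polynomial sequence, in the alpha < 1 and alpha > 1 regimes.

def dist_to_int(x: Fraction) -> Fraction:
    """||x|| = distance from x to the nearest integer."""
    d = x % 1
    return min(d, 1 - d)

random.seed(1)
b = random.getrandbits(200) | 1            # large odd denominator: very long period
y = Fraction(random.randrange(1, b), b)    # stand-in for a "typical" point
gamma = Fraction(random.randrange(1, b), b)

N = 2000
for alpha in (0.8, 1.2):
    for name, q in (("lacunary 2^n", lambda n: 2 ** n),
                    ("polynomial n^2", lambda n: n * n)):
        hits = sum(1 for n in range(1, N + 1)
                   if dist_to_int(q(n) * y - gamma) < n ** (-alpha))
        print(f"alpha={alpha}, {name}: {hits} hits up to N={N}")
```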
A special case of our results is that for Lebesgue almost all \(y\), the set has Hausdorff dimension \(\frac{1}{\alpha}\), though our result is more general. We will state it precisely once we have provided the technical background for doing so. In the above-mentioned result on the Hausdorff dimension, it is critical that the sequence \((q_{n})_{n=1}^{\infty}\) grows geometrically fast. Nonetheless, the problem of studying the distribution of orbits of fractional parts of \(q_{n}y\) for such sequences has attracted quite a bit of attention. For instance, if the sequence \((q_{n})_{n=1}^{\infty}\) contains all numbers of the form \(2^{i}3^{j}\) in increasing order, Furstenberg [6] famously showed that orbits are either finite or dense and conjectured that dense orbits are uniformly distributed. Similarly, initiated by the work of Rudnick and Sarnak [13], the distribution of sequences of the form \((n^{d}y)_{n=1}^{\infty}\) has attracted a considerable amount of attention. While we do not claim to solve any of the famous and important open problems about the distribution of these sequences, we are able to find upper and lower bounds on the Hausdorff dimension of \(W_{y,\alpha}\) regardless of any growth conditions, which are valid for almost all \(y\) in an appropriate sense. Specialising our main result to the Lebesgue measure, we find that for \(\alpha>1\) and for Lebesgue almost every \(y\), the Hausdorff dimension of \(W_{y,\alpha}\) is somewhere in between \(\frac{1}{2\alpha}\) and \(\frac{1}{\alpha}\). The paper is structured as follows. In the next section, we will provide the relevant definitions of notions from uniform distribution theory and fractal geometry. With these notions in place, we will state our main theorems in Section 3, where we will also derive some corollaries of these. The main theorems are proved in Section 4. We end the paper with some concluding remarks. ## 2. Some technical background We will let \(\lambda\) denote the Lebesgue measure on \([0,1]\). Our approach depends heavily on the notion of the discrepancy of a sequence, which is a measure of how far the sequence is from being uniformly distributed. For a sequence \((x_{n})_{n=1}^{\infty}\) in \([0,1)\) and \(N\in\mathbb{N}\), we define the \(N\)-discrepancy of the sequence by \[D_{N}(x_{n})=\sup_{I\subseteq[0,1)}|\#\{\,n\leq N:x_{n}\in I\,\}-N\,\lambda(I)|,\] where the _supremum_ is taken over all sub-intervals of \([0,1)\). The sequence \((x_{n})_{n=1}^{\infty}\) is said to be uniformly distributed in \([0,1)\) if \(D_{N}(x_{n})=o(N)\). A word of caution is appropriate here. Some authors define the discrepancy of a sequence to be the above quantity divided by \(N\). In that notion of discrepancy, the quantity can perhaps most easily be understood to be a speed of convergence of the probability measures \(\frac{1}{N}\sum_{n=1}^{N}\delta_{x_{n}}\) to the Lebesgue measure. The former notion favours counting over measure theory. The two notions are of course entirely equivalent, but the double meaning of the word 'discrepancy' can cause confusion. _In this paper, discrepancy is taken to mean the former_. Many of our results deal with Hausdorff dimension, which we now define. 
For a set \(E\subseteq\mathbb{R}^{n}\) and real numbers \(s\geq 0\) and \(\delta>0\), consider the quantity \[\mathcal{H}^{s}_{\delta}(E)=\inf_{\mathcal{C}_{\delta}}\sum_{U\in\mathcal{C}_{\delta}}\operatorname{diam}(U)^{s},\] where the _infimum_ is taken over all covers \(\mathcal{C}_{\delta}\) of \(E\) with sets of diameter at most \(\delta\). As \(\delta\) decreases, the collection of such covers becomes smaller, so that \(\mathcal{H}^{s}_{\delta}(E)\) must increase. Allowing infinite limits, this means that \[\mathcal{H}^{s}(E)=\lim_{\delta\to 0}\mathcal{H}^{s}_{\delta}(E)\] exists. In fact, it is a regular Borel measure, and for all but possibly one value of \(s\), the value attained by \(\mathcal{H}^{s}(E)\) is either \(0\) or \(\infty\). Furthermore, \(\mathcal{H}^{s}(E)=0\) for \(s>n\). Since \(\mathcal{H}^{s}(E)\) evidently decreases as \(s\) increases, we can define the Hausdorff dimension of \(E\) to be the 'breaking point', i.e. \[\dim_{\mathrm{H}}(E)=\inf\{\,s\geq 0:\mathcal{H}^{s}(E)=0\,\}.\] A particularly pleasing type of sets are intersective sets in the sense of Falconer, see [5]. For \(s>0\), the Falconer class \(\mathcal{G}^{s}\) is defined as the collection of \(G_{\delta}\)-sets \(F\) of \(\mathbb{R}^{n}\) for which \[\dim_{\mathrm{H}}\biggl{(}\bigcap_{i=1}^{\infty}f_{i}(F)\biggr{)}\geq s,\] for any countable sequence \((f_{i})_{i=1}^{\infty}\) of similarities. Among the desirable properties of the families \(\mathcal{G}^{s}\) we find the fact that they are closed under countable intersections. We will consider the intersection classes \(\mathcal{G}^{s}\) restricted to \([0,1)\), which we denote by \(\mathcal{G}^{s}([0,1))\). A set \(E\) belongs to \(\mathcal{G}^{s}([0,1))\) if there is a set \(F\in\mathcal{G}^{s}\) such that \(E=[0,1)\cap F\). Sets in \(\mathcal{G}^{s}([0,1))\) have Hausdorff dimension at least \(s\), and \(\mathcal{G}^{s}([0,1))\) is closed under countable intersections. A final notion, which will play an important role, is the Fourier dimension of a measure, see e.g. [10]. The Fourier dimension of a Radon measure \(\mu\) on \(\mathbb{R}\) is defined as \[\dim_{F}(\mu)=\sup\{\,s\in[0,1]:|\hat{\mu}(\xi)|=O(|\xi|^{-s/2})\,\}. \tag{3}\] Here, \(\hat{\mu}\) is the Fourier transform of the measure defined as usual to be \[\hat{\mu}(\xi)=\int e^{2\pi ix\xi}\,\mathrm{d}\mu(x).\] The terminology originates in the fact that for a set \(E\subseteq\mathbb{R}\), any measure supported on \(E\) has Fourier dimension at most equal to the Hausdorff dimension of \(E\). The Fourier dimension of the set \(E\) is the supremum of the Fourier dimensions of the Radon measures supported on \(E\). Note that a set \(E\) need not have the same Hausdorff and Fourier dimension. For instance, the ternary Cantor set has Hausdorff dimension \(\log 2/\log 3\) by a classical calculation found in all textbooks on fractal geometry. However, it was shown by Kahane and Salem [8] that the Fourier dimension of the ternary Cantor set is equal to zero. This result in fact follows rather easily from the results of both [7] and this paper. ## 3. Statement of results From [7], we know that if \(\mu\) is some measure supported on \([0,1)\) of positive Fourier dimension, then for \(\mu\)-almost every \(y\), there is a positive number \(\nu\) such that \(W_{y,\alpha}=[0,1)\) for \(\alpha<\nu\). If we further assume that \((q_{n})_{n=1}^{\infty}\) is lacunary, then Chow and Technau [2, Theorem 1.11] recently showed that this holds true for any \(\alpha<1\). 
In fact, they obtained an even stronger result: changing the rate \(n^{-\alpha}\) to \(n^{-1}(\log n)^{3+\varepsilon}\), they proved that \[\{\,\gamma\in[0,1):\|q_{n}y-\gamma\|<n^{-1}(\log n)^{3+\varepsilon}\text{ for infinitely many }n\in\mathbb{N}\,\}=[0,1)\] for almost every \(y\). Our first result is also valid for some non-lacunary sequences, including sequences of the form \(q_{n}=n^{d}\). **Theorem 1**.: _Suppose that \(\mu\) is a probability measure of positive Fourier dimension with decay \(|\hat{\mu}(\xi)|=O(|\xi|^{-\tau})\). If \((q_{n})_{n=1}^{\infty}\) is a sequence of integers with \(|q_{n}-q_{m}|>c|n-m|^{\frac{1}{\tau}+\varepsilon}\) for some \(c,\varepsilon>0\), and \(\alpha<1\), then \(\lambda(W_{y,\alpha})>0\) for \(\mu\)-almost every \(y\)._ In particular, Theorem 1 applies to the case when \(\mu\) is the Lebesgue measure and \((q_{n})\) is given by an integer polynomial of degree at least \(3\). Specialising \(\mu\) to be the Lebesgue measure on \([0,1]\), we can obtain the following stronger result. **Theorem 2**.: _Suppose that \((q_{n})_{n=1}^{\infty}\) is a sequence of integers with \(|q_{n}-q_{m}|>c|n-m|^{1+\varepsilon}\). For \(\alpha<1\), the set \(W_{y,\alpha}\) is Lebesgue full for Lebesgue-almost every \(y\)._ We should remark that the growth condition in Theorem 1 can be relaxed to \(|q_{n}-q_{m}|>c|n-m|^{\frac{1}{\tau}}(\log|n-m|)^{1+\varepsilon}\), and similarly in Theorem 2. As will be apparent in the proofs, all that matters is that a certain series is convergent. We are not able to push the results for \(\alpha<1\) any further. However, as already mentioned, for \(\alpha>1\) the first Borel-Cantelli lemma immediately implies that the set \(W_{y,\alpha}\) is a Lebesgue null set, and it is of interest to calculate its Hausdorff dimension. We are able to get a precise result valid almost surely in the case of lacunary sequences. **Theorem 3**.: _Let \(\mu\) be a probability measure on \([0,1]\) with positive Fourier dimension, and let \((q_{n})_{n=1}^{\infty}\) be a lacunary sequence of integers. For any \(\alpha\geq 1\), for \(\mu\)-almost all \(y\),_ \[\dim_{\mathrm{H}}(W_{y,\alpha})=\frac{1}{\alpha}.\] _In fact, the set is intersective in the sense of Falconer._ As in [7], our result has applications to a Littlewood type question, where one adds an inhomogeneous term. More precisely, we can show the following. **Theorem 4**.: _Let \(\mu\) be a probability measure on \([0,1]\) with positive Fourier dimension, let \(x\in\mathrm{Bad}\), and let \(m\in\mathbb{N}\). Then, for \(\mu\times\ldots\times\mu\)-almost all \((y_{1},\ldots,y_{m})\in[0,1)^{m}\),_ \[\dim_{\mathrm{H}}\biggl{\{}\gamma\in[0,1):q\|qx\|\prod_{i=1}^{m}\|qy_{i}-\gamma\|<\frac{1}{(\log q)^{m\alpha}}\text{ infinitely often}\,\biggr{\}}=\frac{1}{\alpha}.\] As in [7], one could extend this to require that there are infinitely many solutions to these inequalities for a countable family of badly approximable \(x_{j}\)'s. Also, just as in that paper, replacing \(\|qx\|\) by the \(p\)-adic absolute value of \(q\), \(|q|_{p}\), or more generally by the pseudo-absolute value \(|q|_{\mathcal{D}}\), where \(\mathcal{D}=(d_{k})_{k=1}^{\infty}\) is a sequence of positive integers with \(d_{k}|d_{k+1}\) of at most geometric growth, yields the same result. In that case, one considers an inhomogeneous analogue of the mixed Littlewood conjecture. Let us derive Theorem 4 from Theorem 3. Proof of Theorem 4.: First, let \((q_{n})_{n=1}^{\infty}\) denote the sequence of denominators of convergents to \(x\). 
From Theorem 3, we know that for \(\mu\)-almost every \(y_{1}\), the set \(W_{y_{1},\alpha}\) is intersective of dimension \(1/\alpha\). Now, extract from the sequence \((q_{n})_{n=1}^{\infty}\) a subsequence \((q_{n_{j}})_{j=1}^{\infty}\) with \[\|q_{n_{j}}y_{1}-\gamma\|<n_{j}^{-\alpha}.\] The subsequence is again lacunary, so we will use Theorem 3. Hence, for \(\mu\)-almost every \(y_{2}\), the set \(W_{y_{2},\alpha}\) is intersective of dimension \(1/\alpha\) with this new sequence. We proceed in this way \(m\) times, until we have a sequence, also denoted \((q_{n_{j}})_{j=1}^{\infty}\) by abuse of notation, such that for \(i=1,2,\ldots,m\), \[\|q_{n_{j}}y_{i}-\gamma\|<n_{j}^{-\alpha}, \tag{4}\] whenever \(\gamma\in W_{y_{i},\alpha}\). Thus, (4) holds for every \(i\), provided \(\gamma\in\bigcap_{i=1}^{m}W_{y_{i},\alpha}\). Since these sets are in Falconer's class of intersective sets, the dimension of the intersection remains \(1/\alpha\). Multiplying everything, and noting that \(q_{n_{j}}\|q_{n_{j}}x\|\leq 1\), \[q_{n_{j}}\|q_{n_{j}}x\|\prod_{i=1}^{m}\|q_{n_{j}}y_{i}-\gamma\|<\frac{1}{n_{j}^{m\alpha}}.\] To conclude, we only need to know that the sequence of denominators of convergents of a badly approximable number satisfies the inequalities \(\phi^{n}\leq q_{n}\leq(2M)^{n}\), where \(M\) is an upper bound on the partial quotients of the number and \(\phi\) is the Golden Ratio. Thus \(\log q_{n_{j}}\asymp n_{j}\), and we are done. To obtain a version of our result for inhomogeneous mixed Littlewood type problems, one just needs to replace the sequence of denominators of convergents by the sequence \(\mathcal{D}\). The details are left to the interested reader. For non-lacunary sequences \((q_{n})_{n=1}^{\infty}\), our results are less precise. Nonetheless, the problem of studying the distribution of orbits of fractional parts of \(q_{n}y\) for such sequences has attracted quite a bit of attention. For instance, if the sequence \((q_{n})_{n=1}^{\infty}\) contains all numbers of the form \(2^{i}3^{j}\) in increasing order, Furstenberg [6] famously showed that orbits are either finite or dense and conjectured that dense orbits are uniformly distributed. Similarly, initiated by the work of Rudnick and Sarnak [13], the distribution of sequences of the form \((n^{d}y)_{n=1}^{\infty}\) has attracted a considerable amount of attention. While we do not claim to solve any of the famous and important open problems about the distribution of these sequences, we will prove the following. **Theorem 5**.: _Let \((x_{n})_{n=1}^{\infty}\) be a sequence in \([0,1)\) with \(D_{N}((x_{n}))=O(N^{1-\eta})\) for some \(\eta\in(0,1)\). Then the set_ \[W=\{\,\gamma\in[0,1):\|x_{n}-\gamma\|<n^{-\alpha}\text{ infinitely often}\,\}\] _satisfies that \(\frac{\eta}{\alpha}\leq\dim_{\mathrm{H}}W\leq\frac{1}{\alpha}\) for \(\alpha\geq 1\)._ In [7], it is shown that for an increasing sequence of integers \((q_{n})_{n=1}^{\infty}\), the sequence \((\{q_{n}y\})_{n=1}^{\infty}\) satisfies the required discrepancy estimate for \(\mu\)-almost every \(y\), whenever \(\mu\) is a measure of positive Fourier dimension. More precisely, it follows from the proof of Corollary 7 in [7] that if \(|\hat{\mu}(\xi)|=O(|\xi|^{-2\eta})\), then \(D_{N}((x_{n}))=O(N^{1-\eta^{\prime}})\) for any \(\eta^{\prime}<\eta\). Thus, we immediately have the following corollary. 
**Corollary 6**.: _Let \(\mu\) be a probability measure on \([0,1]\) with Fourier decay \(|\hat{\mu}(\xi)|=O(|\xi|^{-2\eta})\), and let \((q_{n})_{n=1}^{\infty}\) be an increasing sequence of integers. For any \(\alpha\geq 1\), for \(\mu\)-almost all \(y\),_ \[\frac{\eta}{\alpha}\leq\dim_{\mathrm{H}}W_{y,\alpha}\leq\frac{1}{\alpha}.\] Note in particular that although the Fourier dimension of the Lebesgue measure \(\lambda\) is \(1\), we have the stronger Fourier decay \(|\hat{\lambda}(\xi)|=O(|\xi|^{-1})\). (This is because in the definition (3) of the Fourier dimension, the supremum is taken over the interval \([0,1]\).) Hence, in the case of Lebesgue measure, Corollary 6 implies that \(\frac{1}{2\alpha}\leq\dim_{\mathrm{H}}W_{y,\alpha}\leq\frac{1}{\alpha}\) for Lebesgue almost every \(y\). Theorem 5 does not give a precise Hausdorff dimension of the set \(W\), but under an additional assumption on the sequence \((x_{n})_{n=1}^{\infty}\), we do in fact get the exact dimension. If we define the quantity \[d_{N}((x_{n}))=\inf_{1\leq k<l\leq N}\|x_{k}-x_{l}\|,\] we are able to prove the following. **Theorem 7**.: _Let \((x_{n})\) be a sequence in \([0,1)\) with \(D_{N}((x_{n}))\leq cN^{1-\eta}\) and \(d_{N}((x_{n}))\geq cN^{-\beta}\), where \(0<\eta<1\) and \(1\leq\beta\leq\eta(\alpha-1)+1\). Put_ \[W=\{\,y\in[0,1):\|x_{n}-y\|<n^{-\alpha}\text{ infinitely often}\,\}.\] _Then \(\dim_{\mathrm{H}}W=\frac{1}{\alpha}\)._ We get the following corollary to Theorem 7. **Corollary 8**.: _Let \((q_{n})_{n=1}^{\infty}\) be an increasing sequence of integers. For any \(\alpha\geq 3\) and \(\lambda\)-almost every \(y\), we have_ \[\operatorname{dim_{H}}W_{y,\alpha}=\frac{1}{\alpha}.\] ## 4. Proofs We first prove Theorem 1. Proof of Theorem 1.: We put \[A_{k}=A_{k}(y)=\{\,\gamma\in[0,1):\|q_{k}y-\gamma\|<k^{-\alpha}\,\}=B(q_{k}y,k^{-\alpha}).\] Then \[\lambda(A_{k})=2k^{-\alpha}.\] We also have for \(k<l\) that \[\lambda(A_{k}\cap A_{l})\leq 2l^{-\alpha}\mathbb{1}_{B(0,2k^{-\alpha})}((q_{k}-q_{l})y) \tag{5}\] since \(\lambda(A_{k}\cap A_{l})\leq\lambda(A_{l})=2l^{-\alpha}\) always holds and \(A_{k}\cap A_{l}=\emptyset\) holds when \((q_{k}y-q_{l}y)\not\in B(0,2k^{-\alpha})\). We now consider the average \(\int\lambda(A_{k}(y)\cap A_{l}(y))\operatorname{d}\!\mu(y)\). Writing \(\mathbb{1}_{B(0,2k^{-\alpha})}\) as a Fourier series, \[\mathbb{1}_{B(0,2k^{-\alpha})}(y)=\sum_{j\in\mathbb{Z}}a_{j}e^{i2\pi jy},\] we can estimate the Fourier coefficients by \(|a_{j}|\leq 1/|j|\) for \(j\neq 0\) and we have \(a_{0}=4k^{-\alpha}\). This implies that \[\int\mathbb{1}_{B(0,2k^{-\alpha})}((q_{k}-q_{l})y)\operatorname{d}\!\mu(y)=\sum_{j}a_{j}\int e^{i2\pi j(q_{k}-q_{l})y}\operatorname{d}\!\mu\leq a_{0}+\sum_{j\neq 0}|j|^{-1}|\hat{\mu}(j(q_{k}-q_{l}))|.\] Since \(\mu\) has positive Fourier dimension with decay rate \(\tau\), and by the assumption on the sequence \((q_{n})\), we have \(|\hat{\mu}(j(q_{k}-q_{l}))|\leq\tilde{c}_{1}|j|^{-\tau}(l-k)^{-1-\tau\varepsilon}\). Therefore, we have \[\int\mathbb{1}_{B(0,2k^{-\alpha})}((q_{k}-q_{l})y)\operatorname{d}\!\mu(y)\leq 4(k^{-\alpha}+c_{1}(l-k)^{-1-\tau\varepsilon})\] for some constant \(c_{1}\). By (5) we then have \[\int\lambda(A_{k}\cap A_{l})\operatorname{d}\!\mu\leq 8l^{-\alpha}(k^{-\alpha}+c_{1}(l-k)^{-1-\tau\varepsilon}). 
\tag{6}\] Take \(m<n\) and put \[S_{m,n}(y)=S_{m,n}=\sum_{k=m}^{n}\lambda(A_{k}),\qquad C_{m,n}(y)=\sum_{m\leq k,l\leq n}\lambda(A_{k}(y)\cap A_{l}(y)).\] We consider the sets \[\Delta_{m,n}(p)=\bigg{\{}\,y:C_{m,n}(y)>p\int C_{m,n}(y)\operatorname{d}\!\mu(y)\,\bigg{\}}.\] For \(p\geq 1\), we have \(\mu(\Delta_{m,n}(p))\leq p^{-1}\). Therefore, \(\mu(\complement\Delta_{m,n}(p))\geq 1-p^{-1}\) and \[\complement\Delta_{m,n}(p)=\bigg{\{}\,y:C_{m,n}(y)\leq p\int C_{m,n}(y)\operatorname{d}\!\mu(y)\,\bigg{\}}.\] Fix \(p>1\) and take a quickly increasing sequence \(n_{j}\) going to infinity in such a way that \(\frac{n_{j+1}}{n_{j}}\to\infty\). Put \(\Delta_{j}(p)=\Delta_{n_{j},n_{j+1}}(p)\). Then \(G(p)=\limsup\complement\Delta_{j}(p)\) satisfies \(\mu(G(p))\geq 1-p^{-1}\). Take \(y\in G(p)\). Then there are infinitely many \(j\) for which \[C_{n_{j},n_{j+1}}(y)\leq pC_{j},\] where \(C_{j}=\int C_{n_{j},n_{j+1}}\,\mathrm{d}\mu\). Put \(S_{j}=S_{n_{j},n_{j+1}}\). We estimate \(C_{j}\) and \(S_{j}\). By (6) we have \[C_{j} \leq S_{j}+2\sum_{l=n_{j}+1}^{n_{j+1}}\sum_{k=n_{j}}^{l-1}8l^{-\alpha}(k^{-\alpha}+c_{1}(l-k)^{-1-\tau\varepsilon})\] \[\leq S_{j}+2\sum_{l=n_{j}+1}^{n_{j+1}}8l^{-\alpha}\biggl{(}\int_{0}^{l}x^{-\alpha}\,\mathrm{d}x+c_{2}\biggr{)}\] \[=S_{j}+16\sum_{l=n_{j}+1}^{n_{j+1}}l^{-\alpha}\biggl{(}\frac{1}{1-\alpha}l^{1-\alpha}+c_{2}\biggr{)} \tag{7}\] \[\leq S_{j}+\frac{16}{(1-\alpha)^{2}}(n_{j+1}+1)^{2-2\alpha}+\frac{16c_{2}}{(1-\alpha)}(n_{j+1}+1)^{1-\alpha}.\] We also have \[S_{j}=\sum_{k=n_{j}}^{n_{j+1}}2k^{-\alpha}\geq 2\int_{n_{j}}^{n_{j+1}}x^{-\alpha}\,\mathrm{d}x=\frac{2}{1-\alpha}(n_{j+1}^{1-\alpha}-n_{j}^{1-\alpha}). \tag{8}\] For any such \(j\), we then have by the very famous and important Chung–Erdős inequality [4] that \[\lambda\biggl{(}\bigcup_{k=n_{j}}^{n_{j+1}}A_{k}(y)\biggr{)}\geq\frac{S_{n_{j},n_{j+1}}^{2}}{C_{n_{j},n_{j+1}}}\geq\frac{S_{j}^{2}}{pC_{j}}.\] By the estimates (7) and (8) we have that \[\liminf_{j\to\infty}\frac{S_{j}^{2}}{C_{j}}\geq\frac{1}{4}.\] It follows that for any \(y\in G(p)\) we have \(\lambda(W_{y,\alpha})\geq\frac{1}{4p}>0\). Hence \(\lambda(W_{y,\alpha})>0\) for any \(y\in\bigcup_{p>1}G(p)\) and \(\mu(\bigcup_{p>1}G(p))=1\). Now, we consider the special case of \(\mu=\lambda\) and prove Theorem 2. Proof of Theorem 2.: We first note that \(|\hat{\lambda}(\xi)|=O(|\xi|^{-1})\), whence by Theorem 1, \(W_{y,\alpha}\) will have positive Lebesgue measure for any \(\alpha<1\). We consider the auxiliary set \[\overline{W}_{\alpha}=\{\,(x,y)\in[0,1]^{2}:\|q_{n}y-x\|<n^{-\alpha}\text{ infinitely often }\,\}.\] By Theorem 1 and Fubini's Theorem, the two-dimensional Lebesgue measure of this set is positive for any \(\alpha<1\). Now, fix \(\alpha^{\prime}\in(\alpha,1)\). By the above, \(\lambda(\overline{W}_{\alpha^{\prime}})>0\). We will use this and an inflation argument due to Cassels [1] to prove that this implies that \(\lambda(\overline{W}_{\alpha})=1\). Let \(\epsilon>0\). 
We split up the square \([0,1)^{2}\) into \(T^{2}\) disjoint squares of side length \(1/T\), where \(T\in\mathbb{N}\) is chosen by Lemma VII.5 in [1], so that for one of these boxes, \(B\) say, \[\lambda(\overline{W}_{\alpha^{\prime}}\cap B)>(1-\epsilon)\lambda(B).\] Scaling the box by a factor of \(T\) and projecting back onto \([0,1)^{2}\) by reducing modulo \(1\), we cover the entire unit square, so that \[\lambda(T(\overline{W}_{\alpha^{\prime}}\cap B)\pmod{1})>(1-\epsilon).\] It follows that for every pair \((x,y)\in[0,1)^{2}\) outside of a set of Lebesgue measure less than \(\epsilon\), there is a point \((x^{\prime},y^{\prime})\in\overline{W}_{\alpha^{\prime}}\) with \((x,y)=T(x^{\prime},y^{\prime})\pmod{1}\). Now, for such a point, \[\|q_{n}y-x\|=\|T(q_{n}y^{\prime}-x^{\prime})\|\leq T\|q_{n}y^{\prime}-x^{\prime}\|.\] Since \((x^{\prime},y^{\prime})\in\overline{W}_{\alpha^{\prime}}\), for infinitely many values of \(n\), \[T\|q_{n}y^{\prime}-x^{\prime}\|<Tn^{-\alpha^{\prime}}=Tn^{\alpha-\alpha^{\prime}}n^{-\alpha}.\] Since \(\alpha^{\prime}>\alpha\) and \(T\) is fixed, \(Tn^{\alpha-\alpha^{\prime}}\leq 1\) for \(n\) large enough. Consequently, \((x,y)\in\overline{W}_{\alpha}\), so that \(\lambda(\overline{W}_{\alpha})>1-\epsilon\). Since \(\epsilon>0\) was arbitrary, it follows that \(\lambda(\overline{W}_{\alpha})=1\). Now, to conclude we apply Fubini's Theorem once more and find that almost all fibers \(W_{y,\alpha}\) are Lebesgue full. We now prove Theorem 3. In the proof, we will make use of the following quantity related to the discrepancy: \[\tilde{D}_{N}(x_{n})=\sup_{I\subseteq[0,1)}|\#\{\,N+1\leq n\leq 2N:x_{n}\in I\,\}-N\,\lambda(I)|.\] We will need the following lemmata. **Lemma 9**.: _If \(D_{N}((x_{n}))=O(N^{1-\eta})\) then \(\tilde{D}_{N}((x_{n}))=O(N^{1-\eta})\)._ Proof.: Take \(I\subset[0,1)\). Then \[\bigg{|}\sum_{n=N+1}^{2N}\mathbb{1}_{\,I}(x_{n})-N|I|\bigg{|} =\bigg{|}\sum_{n=1}^{2N}\mathbb{1}_{\,I}(x_{n})-2N|I|-\sum_{n=1}^{N}\mathbb{1}_{\,I}(x_{n})+N|I|\bigg{|}\] \[\leq D_{2N}((x_{n}))+D_{N}((x_{n})).\qed\] We will also need the following result of Persson and Reeve [11]. **Lemma 10** (Persson and Reeve [11]).: _Let \((\mu_{n})\) be a sequence of probability measures with \(\operatorname{supp}\mu_{n}\subset E_{n}\subset[0,1]\), and such that \(\mu_{n}\to\lambda\) weakly. If there is a constant \(C\) such that_ \[I_{s}(\mu_{n}):=\iint|x-y|^{-s}\,\mathrm{d}\mu_{n}(x)\mathrm{d}\mu_{n}(y)<C\] _for all \(n\), then the set \(E=\limsup E_{n}\) has Hausdorff dimension at least \(s\), and \(E\) has large intersections as defined in Section 2._ The reader will have noted that the upper bound on the dimension in each case is the same. It is the easier part of all the statements. We prove it here as a lemma. **Lemma 11**.: _Let \((x_{n})_{n=1}^{\infty}\) be any sequence in \([0,1)\) and let \(\alpha\geq 1\). Then,_ \[\dim_{\mathrm{H}}\{\,\gamma\in[0,1):\|x_{n}-\gamma\|<n^{-\alpha}\text{ infinitely often}\,\}\leq\frac{1}{\alpha}.\] Proof.: Let \(s=1/\alpha+\epsilon\) for some \(\epsilon>0\). For any \(N\in\mathbb{N}\), the set, \(W\) say, in the statement of the lemma is covered by the intervals \((x_{n}-n^{-\alpha},x_{n}+n^{-\alpha})\) with \(n\geq N\), each of length \(2n^{-\alpha}\). For \(\delta>0\), pick \(N_{0}\in\mathbb{N}\) so that \(2n^{-\alpha}<\delta\) for \(n\geq N_{0}\). 
Then, for any \(N\geq N_{0}\), \[\mathcal{H}_{\delta}^{s}(W)\leq\sum_{n=N}^{\infty}(2n^{-\alpha})^{s}=2^{s}\sum_{n=N}^{\infty}n^{-1-\alpha\epsilon}.\] The latter tends to \(0\) as \(N\) tends to infinity, so for any \(\delta>0\), \(\mathcal{H}_{\delta}^{s}(W)=0\), whence \(\mathcal{H}^{s}(W)=0\). But then, \(\dim_{\mathrm{H}}(W)\leq s=1/\alpha+\epsilon\). As \(\epsilon>0\) was arbitrary, we are done. Proof of Theorem 3.: By Lemma 11, we need only prove the lower bound. We let \(\mu_{N}\) be the sequence of probability measures defined by \[\frac{\mathrm{d}\mu_{N}}{\mathrm{d}x}=\frac{1}{N}\sum_{k=N+1}^{2N}\frac{1}{2(2N)^{-\alpha}}\mathbb{1}_{B(a_{k}x,r_{N})},\qquad r_{N}=(2N)^{-\alpha},\] where we write \(a_{k}=q_{k}\) for the lacunary sequence, and consider \[I_{t}(\mu_{N})=\iint|x-y|^{-t}\,\mathrm{d}\mu_{N}(x)\mathrm{d}\mu_{N}(y)=c_{0}\int|\hat{\mu}_{N}(\xi)|^{2}|\xi|^{t-1}\,\mathrm{d}\xi,\] where \(c_{0}\) is a constant which does not depend on \(N\) and \(t\). We have \[\frac{\mathrm{d}\mu_{N}}{\mathrm{d}x}=\frac{1}{N}\sum_{k=N+1}^{2N}\frac{1}{2(2N)^{-\alpha}}\mathbb{1}_{B(0,r_{N})}*\delta_{a_{k}x},\] so that \[\hat{\mu}_{N}(\xi)=(2N)^{\alpha-1}\sum_{k=N+1}^{2N}\widehat{1_{B(0,r_{N})}}(\xi)e^{-i2\pi\xi a_{k}x},\] and \[|\hat{\mu}_{N}(\xi)|^{2}=(2N)^{2\alpha-2}\sum_{k=N+1}^{2N}\sum_{l=N+1}^{2N}|\widehat{1_{B(0,r_{N})}}(\xi)|^{2}e^{-i2\pi\xi(a_{k}-a_{l})x}.\] Since \[\widehat{1_{B(0,r_{N})}}(\xi)=r_{N}\frac{\sin(2\pi r_{N}\xi)}{\pi r_{N}\xi},\] we can estimate that \[|\widehat{1_{B(0,r_{N})}}(\xi)|^{2}\leq m_{N}(\xi):=\min\{2r_{N},\pi|\xi|^{-1}\}^{2}.\] It follows that \[\int|\hat{\mu}_{N}(\xi)|^{2}\,\mathrm{d}\mu \leq(2N)^{2\alpha-2}\sum_{k=N+1}^{2N}\sum_{l=N+1}^{2N}\int m_{N}(\xi)e^{-i2\pi\xi(a_{k}-a_{l})x}\,\mathrm{d}\mu(x)\] \[=(2N)^{2\alpha-2}m_{N}(\xi)\sum_{k=N+1}^{2N}\sum_{l=N+1}^{2N}\hat{\mu}((a_{k}-a_{l})\xi).\] Take constants \(c\) and \(\eta>0\) such that \(|\hat{\mu}(\xi)|\leq c\min\{1,|\xi|^{-\eta}\}\). Then, we have \[\sum_{k=N+1}^{2N}\sum_{l=N+1}^{2N}\hat{\mu}((a_{k}-a_{l})\xi)\leq cN^{2}.\] If \(|\xi|\geq 1\), we can use that \((a_{k})\) is lacunary to prove that \[\sum_{k=N+1}^{2N}\sum_{l=N+1}^{2N}\hat{\mu}((a_{k}-a_{l})\xi)\leq\sum_{k=N+1}^{2N}\sum_{l=N+1}^{2N}c\min\{1,|a_{k}-a_{l}|^{-\eta}\}\leq CN,\] for some constant \(C\). We thus have \[\int|\hat{\mu}_{N}|^{2}\,\mathrm{d}\mu\leq c(2N)^{2\alpha}m_{N}(\xi),\] and if \(|\xi|\geq 1\), we have \[\int|\hat{\mu}_{N}|^{2}\,\mathrm{d}\mu\leq C(2N)^{2\alpha-1}m_{N}(\xi).\] Combining the last two estimates, we get that \[\int I_{t}(\mu_{N})\,\mathrm{d}\mu =c_{0}\iint|\hat{\mu}_{N}(\xi)|^{2}|\xi|^{t-1}\,\mathrm{d}\xi\,\mathrm{d}\mu\] \[=c_{0}\iint|\hat{\mu}_{N}(\xi)|^{2}\,\mathrm{d}\mu\,|\xi|^{t-1}\,\mathrm{d}\xi\] \[\leq c^{\prime}N^{2\alpha}\int_{0}^{1}m_{N}(\xi)|\xi|^{t-1}\,\mathrm{d}\xi+C^{\prime}N^{2\alpha-1}\int_{1}^{\infty}m_{N}(\xi)|\xi|^{t-1}\,\mathrm{d}\xi.\] We can now easily compute that \[N^{2\alpha}\int_{0}^{1}m_{N}(\xi)|\xi|^{t-1}\,\mathrm{d}\xi \leq N^{2\alpha}r_{N}^{2}\int_{0}^{1}\xi^{t-1}\,\mathrm{d}\xi=c_{1},\] \[N^{2\alpha-1}\int_{1}^{\frac{\pi}{2}r_{N}^{-1}}m_{N}(\xi)|\xi|^{t-1}\,\mathrm{d}\xi =4N^{2\alpha-1}r_{N}^{2}\int_{1}^{\frac{\pi}{2}r_{N}^{-1}}\xi^{t-1}\,\mathrm{d}\xi\leq c_{2}N^{-1+\alpha t},\] \[N^{2\alpha-1}\int_{\frac{\pi}{2}r_{N}^{-1}}^{\infty}m_{N}(\xi)|\xi|^{t-1}\,\mathrm{d}\xi =2\pi^{2}CN^{2\alpha-1}\int_{\frac{\pi}{2}r_{N}^{-1}}^{\infty}\xi^{t-3}\,\mathrm{d}\xi\leq c_{3}N^{-1+\alpha t},\] where \(c_{1}\), \(c_{2}\) and \(c_{3}\) do not depend on \(N\). 
Hence \[\int I_{t}(\mu_{N})\,\mathrm{d}\mu\leq c_{4}+c_{5}N^{-1+\alpha t},\] which is uniformly bounded in \(N\) provided \(t<1/\alpha\). Fix \(t<1/\alpha\). It follows that for \(\mu\)-almost all \(x\), there is a subsequence \((N_{k})\) of the natural numbers, such that \(I_{t}(\mu_{N_{k}})\) is uniformly bounded in \(k\). Lemma 10 finishes the proof. Proof of Theorem 5.: As before, the upper bound is taken care of by Lemma 11. Once again, we define probability measures \(\mu_{N}\) with densities \[\frac{\mathrm{d}\mu_{N}}{\mathrm{d}x}=\frac{1}{N}\sum_{n=N+1}^{2N}\frac{1}{2(2N)^{-\alpha}}1_{\,B(x_{n},(2N)^{-\alpha})}.\] Then \[\operatorname{supp}\mu_{N}=E_{N}=\bigcup_{n=N+1}^{2N}B(x_{n},(2N)^{-\alpha}).\] Hence \(E_{N}\subset W_{N}\), where \[W_{N}=\bigcup_{n=N+1}^{2N}B(x_{n},n^{-\alpha}).\] It follows that \(\limsup E_{N}\subset W=\limsup W_{N}\). Let \(s=\eta/\alpha\). We show that there is a constant \(c_{1}\) such that \[\mu_{N}(I)\leq c_{1}|I|^{s}\] holds for every \(N\) and every interval \(I\) (and hence for every Borel set). Let \(M(I,N)\) denote the number of \(n\) with \(N+1\leq n\leq 2N\) and \(x_{n}\in I\). We have \[M(I,N)\leq N|I|+\tilde{D}_{N}((x_{n}))\leq N|I|+cN^{1-\eta}.\] At every point \(y\in[0,1)\), the number of \(n=N+1,\dots,2N\) such that \(y\in B(x_{n},(2N)^{-\alpha})\) is at most \[\sup_{|I|=2(2N)^{-\alpha}}M(I,N)\leq 2^{1-\alpha}N^{1-\alpha}+cN^{1-\eta}<(1+c)N^{1-\eta}.\] Hence, the density of \(\mu_{N}\) satisfies \[\frac{\operatorname{d}\!\mu_{N}}{\operatorname{d}\!x}\leq\frac{1}{N}\frac{1}{2(2N)^{-\alpha}}(1+c)N^{1-\eta}\leq 2^{\alpha-1}(1+c)N^{\alpha-\eta}. \tag{9}\] If \(|I|<N^{-\alpha}\), we use (9) to estimate that \[\mu_{N}(I) \leq 2^{\alpha-1}(1+c)N^{\alpha-\eta}|I|\] \[=2^{\alpha-1}(1+c)N^{\alpha-\eta}|I|^{1-\eta/\alpha}|I|^{\eta/\alpha}\leq 2^{\alpha-1}(1+c)|I|^{\eta/\alpha}.\] If \(|I|\geq N^{-\alpha}\), let \(3I\) denote the interval concentric with \(I\) and such that \(|3I|=3|I|\). Then \[\mu_{N}(I)\leq\frac{1}{N}M(3I,N)\leq 3|I|+cN^{-\eta}\leq 3|I|+c|I|^{\eta/\alpha}\leq(3+c)|I|^{\eta/\alpha}. \tag{10}\] This shows that \(\mu_{N}(I)\leq c_{2}|I|^{s}\) with \(c_{2}=2^{\alpha-1}(3+c)\). It now follows for \(t<s\) that \[I_{t}(\mu_{N}) =\iint|x-y|^{-t}\operatorname{d}\!\mu_{N}(x)\operatorname{d}\!\mu_{N}(y)\] \[=\int\int_{0}^{\infty}\mu_{N}(B(x,u^{-1/t}))\operatorname{d}\!u\operatorname{d}\!\mu_{N}(x)\] \[\leq\int\int_{0}^{\infty}\min\{1,c_{2}(u^{-1/t})^{s}\}\operatorname{d}\!u\operatorname{d}\!\mu_{N}(x)\] \[=\int_{0}^{\infty}\min\{1,c_{2}u^{-s/t}\}\operatorname{d}\!u=:C<\infty.\] Since \(D_{N}((x_{n}))=o(N)\), the measures \(\mu_{N}\) converge weakly to \(\lambda\), and it follows from Lemma 10 that \(\dim_{\rm H}\limsup E_{N}\geq t\). Since \(t\) can be taken arbitrarily close to \(s=\eta/\alpha\), we have \(\dim_{\rm H}W\geq\dim_{\rm H}\limsup E_{N}\geq\eta/\alpha\). Proof of Theorem 7.: The proof is similar to that of Theorem 5. We define \(\mu_{N}\) in the same way. It is then enough to prove that there is a constant \(c_{3}\) such that \(\mu_{N}(I)\leq c_{3}|I|^{\frac{1}{\alpha}}\) holds for all \(N\) and all intervals \(I\). We consider three cases. First, we assume that \(|I|\leq N^{-\beta}\). Then \(I\) intersects at most two balls \(B(x_{n},(2N)^{-\alpha})\). 
Therefore, we have \[\mu_{N}(I)\leq|I|\frac{2}{N}\frac{1}{2(2N)^{-\alpha}}\leq 2^{\alpha}|I|N^{\alpha-1}\leq 2^{\alpha}|I|^{1-\frac{1}{\alpha}(\alpha-1)}=2^{\alpha}|I|^{\frac{1}{\alpha}}.\] If \(\beta>\eta\alpha\), then we assume that \(N^{-\beta}\leq|I|\leq N^{-\eta\alpha}\). Otherwise, we do not have to consider this case. The assumption \(\beta\leq\eta(\alpha-1)+1\) implies that \(\alpha\geq\beta\). Hence, \(I\) intersects a ball \(B(x_{n},(2N)^{-\alpha})\) only if \(x_{n}\in 3I\). The number of such \(x_{n}\) is at most \(3|I|/(2N)^{-\beta}\) and \[\mu_{N}(I)\leq 2^{\beta}3|I|N^{\beta}\frac{1}{N}=2^{\beta}3|I|^{\frac{1}{\alpha}}|I|^{1-\frac{1}{\alpha}}N^{\beta-1}\leq 2^{\beta}3|I|^{\frac{1}{\alpha}}N^{-\eta(\alpha-1)+\beta-1}.\] Since \(-\eta(\alpha-1)+\beta-1\leq 0\) by assumption, we have \(\mu_{N}(I)\leq 2^{\beta}3|I|^{\frac{1}{\alpha}}\). Finally, assume that \(|I|\geq N^{-\eta\alpha}\). Then we have as in (10) that \[\mu_{N}(I)\leq 3|I|+cN^{-\eta}\leq(3+c)|I|^{\frac{1}{\alpha}}.\] All estimates taken together, we have for any interval \(I\) and any \(N\) that \(\mu_{N}(I)\leq c_{3}|I|^{\frac{1}{\alpha}}\), with \(c_{3}=\max\{2^{\alpha},2^{\beta}3,3+c\}\). The rest of the proof is exactly as in the proof of Theorem 5. Proof of Corollary 8.: Let \(\eta<\frac{1}{2}\). Then \(D_{N}((q_{n}y))\leq cN^{1-\eta}\) for \(\lambda\)-a.e. \(y\in[0,1)\). Let \(\beta>2\) and consider the set \[\Delta=\{\,|q_{l}-q_{k}|:k\neq l\,\}.\] Let \((n_{j})_{j=1}^{\infty}\) be an enumeration of \(\Delta\) such that if \(q_{l}-q_{k}\in\Delta\), then there is a \(j\leq l^{2}\) such that \(n_{j}=q_{l}-q_{k}\). (Such an enumeration clearly exists.) Consider the set \[V=\{\,y\in[0,1):\|n_{j}y\|<j^{-\beta/2}\text{ for infinitely many }j\in\mathbb{N}\,\}.\] Since for each \(j\), \[\lambda\{\,y\in[0,1):\|n_{j}y\|<j^{-\beta/2}\,\}=2j^{-\beta/2},\] we have by the first Borel-Cantelli lemma that \(\lambda(V)=0\). It follows that for \(\lambda\)-a.e. \(y\) there exists \(c>0\) such that \(\|n_{j}y\|>cj^{-\beta/2}\) for all \(j\). Suppose now that \(y\) is such a typical point, and consider the sequence \((q_{n}y)\). Take \(1\leq k<l\leq N\) and let \(j\leq l^{2}\) be such that \(n_{j}=q_{l}-q_{k}\in\Delta\). Since \(j\leq N^{2}\), and by the choice of \(y\), we have \[\|q_{k}y-q_{l}y\|=\|(q_{l}-q_{k})y\|=\|n_{j}y\|>cj^{-\beta/2}\geq cN^{-\beta}.\] This shows that \(d_{N}((q_{n}y))>cN^{-\beta}\). It follows by Theorem 7 that \(\dim_{\rm H}W_{y,\alpha}=\frac{1}{\alpha}\) whenever \(\alpha\) is such that \(\beta\leq\eta(\alpha-1)+1\). Since \(\beta\) can be taken as close to \(2\) and \(\eta\) as close to \(\frac{1}{2}\) as we desire, \(\dim_{\rm H}W_{y,\alpha}=\frac{1}{\alpha}\) holds for almost all \(y\) as long as \(\alpha>3\). The case \(\alpha=3\) follows by a limiting procedure. ## 5. Concluding remarks We strongly suspect that -- as in the case of Lebesgue measure -- the conclusion of Theorem 1 can be strengthened from positive to full measure. However, the 'inflation argument' of Cassels used in the proof of Theorem 2 is not immediately applicable unless both measures are Lebesgue or at least have the same scaling properties. Hence, we do not at present know how to extend this. In Corollary 8, the exact value of the Hausdorff dimension is calculated, but only for \(\alpha\geq 3\). Again, it would be natural to suspect that this can be extended to \(\alpha>1\), but our methods do not allow us to do this. 
As a final remark, the paper of Pollington, Velani, Zafeiropoulos and Zorin [12] requires only a weaker condition on the Fourier decay of the measure \(\mu\). It is possible that positive Fourier dimension can be weakened to, say, logarithmic decay. We have not attempted to do this in the present manuscript.
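For readers who wish to experiment numerically with the quantities driving Theorems 5 and 7, here is a hedged sketch computing the counting star discrepancy (which bounds \(D_{N}\) up to a factor of \(2\), since \(D_{N}^{*}\leq D_{N}\leq 2D_{N}^{*}\)) and the minimal gap \(d_{N}\) for \(x_{n}=\{q_{n}y\}\); the choices \(q_{n}=n^{2}\) and \(y=\sqrt{2}-1\) are purely illustrative.

```python
import numpy as np

def star_discrepancy(points: np.ndarray) -> float:
    """Counting version of D_N*: sup over [0, t) of |#{n : x_n < t} - N*t|,
    computed exactly from the sorted points."""
    x = np.sort(points)
    n = len(x)
    i = np.arange(1, n + 1)
    return float(max((i - n * x).max(), (n * x - (i - 1)).max()))

def minimal_gap(points: np.ndarray) -> float:
    """d_N = min over pairs of ||x_k - x_l||: the smallest circular gap
    between consecutive sorted points."""
    x = np.sort(points)
    gaps = np.diff(x)
    return float(min(gaps.min(), 1.0 - x[-1] + x[0]))

y = np.sqrt(2.0) - 1.0
for N in (100, 1000, 10000):
    n = np.arange(1, N + 1, dtype=np.float64)
    pts = (n * n * y) % 1.0   # q_n = n^2; q_N * y ~ 1e8 still fits in float64
    print(N, star_discrepancy(pts), minimal_gap(pts))
```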
2303.06476
TransMatting: Tri-token Equipped Transformer Model for Image Matting
Image matting aims to predict alpha values of elaborate uncertainty areas of natural images, like hairs, smoke, and spider web. However, existing methods perform poorly when faced with highly transparent foreground objects due to the large area of uncertainty to predict and the small receptive field of convolutional networks. To address this issue, we propose a Transformer-based network (TransMatting) to model transparent objects with long-range features and collect a high-resolution matting dataset of transparent objects (Transparent-460) for performance evaluation. Specifically, to utilize semantic information in the trimap flexibly and effectively, we also redesign the trimap as three learnable tokens, named tri-token. Both Transformer and convolution matting models could benefit from our proposed tri-token design. By replacing the traditional trimap concatenation strategy with our tri-token, existing matting methods could achieve about 10% improvement in SAD and 20% in MSE. Equipped with the new tri-token design, our proposed TransMatting outperforms current state-of-the-art methods on several popular matting benchmarks and our newly collected Transparent-460.
Huanqia Cai, Fanglei Xue, Lele Xu, Lili Guo
2023-03-11T18:21:25Z
http://arxiv.org/abs/2303.06476v1
# TransMatting: Tri-token Equipped Transformer Model for Image Matting ###### Abstract Image matting aims to predict alpha values of elaborate uncertainty areas of natural images, like hairs, smoke, and spider web. However, existing methods perform poorly when faced with highly transparent foreground objects due to the large area of uncertainty to predict and the small receptive field of convolutional networks. To address this issue, we propose a Transformer-based network (TransMatting) to model transparent objects with long-range features and collect a high-resolution matting dataset of transparent objects (Transparent-460) for performance evaluation. Specifically, to utilize semantic information in the trimap flexibly and effectively, we also redesign the trimap as three learnable tokens, named tri-token. Both Transformer and convolution matting models could benefit from our proposed tri-token design. By replacing the traditional trimap concatenation strategy with our tri-token, existing matting methods could achieve about 10% improvement in SAD and 20% in MSE. Equipped with the new tri-token design, our proposed TransMatting outperforms current state-of-the-art methods on several popular matting benchmarks and our newly collected Transparent-460. Tri-token, Vision Transformer, Image Matting, Deep Learning ## 1 Introduction Image matting is a technique to separate the foreground object from the background in an image by predicting a precise alpha matte. It has been widely used in many applications, such as image and video editing, background replacement, and virtual reality [1, 2, 3]. Image matting assumes that every pixel in the image \(I\) is a linear combination of the foreground object \(F\) and the background \(B\), weighted by an alpha matte \(\alpha\): \[I=\alpha F+(1-\alpha)B,\alpha\in[0,1] \tag{1}\] As only the image \(I\) is known in this equation, image matting is an ill-posed problem. Consequently, many existing methods [1, 2, 3, 4, 5, 6, 7] take a trimap as an auxiliary input. The trimap is a single-channel image with three unique values: 255, 0, and 128. These values segment the original image into three parts: known foreground, known background, and unknown areas. Most traditional methods, including sampling-based [5, 8, 9, 10, 11, 12] and propagation-based methods [1, 3, 4, 13], utilize samples from the known areas to find candidate colors or propagate the known alpha values. They heavily rely on information from known areas, especially the known foreground areas. Recently, learning-based methods have greatly improved image matting performance, but they still need specific information from known areas to predict unknown areas [7, 14]. However, according to [14], more than 50% of the pixels in the unknown areas cannot be correlated to pixels in the known regions due to the limited receptive field of deep learning methods. Furthermore, some highly transparent objects (_e.g._, glass, bonfires, plastic bags, etc.) and even non-salient objects (_e.g._, web, smoke, water drops, etc.) have intricate interiors with most pixels belonging to the unknown regions [15]. It is very challenging for existing models to learn long-range features with little known information. Tab. 1 illustrates the performance of some state-of-the-art (SOTA) methods on totally transparent (TT) and partially transparent (TP) objects separately on the Composition-1k test set. We can observe that the results on TT objects are much worse than those on TP objects, indicating that TT objects are the key to improving the overall evaluation performance. 
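As a minimal illustration of the compositing model in Eq. (1), the toy sketch below composes random stand-in images; it is not part of the proposed method.

```python
import numpy as np

def composite(fg: np.ndarray, bg: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Per-pixel compositing I = alpha * F + (1 - alpha) * B, alpha in [0, 1]."""
    a = alpha[..., None]          # broadcast the single-channel matte over RGB
    return a * fg + (1.0 - a) * bg

# Toy example with random stand-in images (a real matte is mostly 0/1,
# with soft transitions only in the unknown region).
rng = np.random.default_rng(0)
fg = rng.random((64, 64, 3))
bg = rng.random((64, 64, 3))
alpha = rng.random((64, 64))
img = composite(fg, bg, alpha)
assert img.shape == (64, 64, 3)
```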
To address the above issue, we make the first attempt to introduce the Vision Transformer (ViT) [16] to extract features with a large receptive field. The Transformer model was first proposed in natural language processing (NLP) and has achieved great performance in computer vision tasks, such as classification [16, 17, 18], segmentation [19, 20], and detection [21, 22]. It mainly consists of multi-head self-attention (MHSA) and multi-layer perceptron (MLP) modules. Different from the convolution, the MHSA module can mine information in a global scope. Thus, the ViT model can learn global semantic features of the foreground object with high-level position relevance. To further help the model integrate the low-level appearance features (_e.g._, texture) with high-level semantic features (_e.g._, shape), a Multi-scale Global-guided Fusion (MGF) module is proposed. The MGF takes three adjacent scales of features as input, uses the non-background mask to guide the low-level features, and employs the high-level features to guide the information integration. With this new MGF module, only foreground features are transmitted to the decoder, reducing the influence of background noise.

On the other hand, most current trimap-based methods follow DIM [2] to treat the trimap as an image and concatenate it with the RGB image before feeding it into the deep model. This brings the same weakness: insufficient long-range relationships extracted from the trimap. Meanwhile, compared with the RGB image, the information in the trimap is very sparse and has some high-level positional relevance [23]. Most areas in the trimap have the same value, making convolutional neural networks with small kernels inefficient in extracting features from trimaps. Different from the traditional manner, we propose a new form of trimap named **tri-token**, inspired by the [cls] token in ViT. We utilize three tokens to represent the three kinds of semantic information in the trimap: foreground, background, and unknown area. Based on the proposed tri-token, we can directly introduce this semantic information into the self-attention mechanism. The Transformer module can then identify which features are from the known areas and which are from the unknown areas without any range limitation. In addition, benefiting from the semantic-representation capability of our proposed tri-token, we can introduce trimap information at any depth of both Transformer and convolutional neural networks (CNNs). Experiments on various SOTA methods verify that the performance can be further improved by only replacing the traditional trimap with our tri-token.

Besides, there has not been any test bed for images with transparent or non-salient foreground objects. Previous datasets mainly focus on salient and opaque foregrounds, like animals [25] and portraits [23, 26], which have already been extensively investigated. To further help the community dig into the transparent and non-salient cases, we collect 460 high-resolution natural images with large unknown areas and manually label their alpha mattes.

To summarize, our main contributions are as follows: 1. We propose a Transformer-based model (TransMatting) with a large receptive field to help extract global features from images, especially for transparent objects. An MGF module is also introduced to guide the integration of multi-scale features with these global features. 2. We redesign the trimap in a semantic tri-token manner to directly integrate with deep features.
The tri-token can introduce trimap information into image features flexibly by reducing the domain gap between them. 3. We build a high-resolution matting dataset with 460 images of transparent or non-salient foregrounds. The dataset will be released to promote the development of matting research. 4. Experiments on four matting datasets demonstrate that the proposed TransMatting method outperforms the current SOTA methods, and the proposed tri-token boosts both Transformer and convolutional SOTA methods, indicating its great utilization potential in image matting.

A preliminary conference version of this work appeared in [27]. On the basis of [27], we extend it in three ways. (1) We extend the network proposed in [27] (named TransMattingV1) to TransMattingV2 by introducing our proposed tri-token to both Transformer and convolutional modules. Thus, we can boost the performance of any off-the-shelf deep network by replacing the traditional trimap with our tri-token. (2) We further investigate some critical designs of the tri-token, including more introduction positions, initialization methods, and interpretability. (3) By introducing the tri-token to both the CNN local extractor and the Transformer blocks, we show the superiority of TransMatting on both composite and real-world images.

## 2 Related Works

### _Traditional Matting Methods_

Traditional matting methods can be divided into two categories: sampling-based and propagation-based methods. These methods mainly rely on low-level features, like color, location, etc. The sampling-based methods [5, 8, 9, 10, 11, 12] first predict the colors of the foreground and background by evaluating the similarity of colors between the known foreground, the known background, and the unknown area in samples, and then predict alpha mattes. Various sampling techniques have been investigated, including color cluster sampling [12], edge sampling [11], ray casting [10], etc. The propagation-based methods [1, 4, 13] propagate the information from the known foreground and background to the unknown area by solving a sparse linear equation system [3], a Poisson equation system [28], etc., to obtain the globally optimal alpha.

### _Deep-Learning Matting Methods_

In recent decades, deep learning technologies have boomed in various fields of computer vision, and the same goes for the image matting task. [29] combines sampling and a deep neural network to improve the accuracy of alpha matte prediction. Providing a larger dataset, Composition-1k [2], DIM utilizes an encoder-decoder model to directly predict alpha mattes, which effectively improves the accuracy. IndexNet [6] proposes a learnable index function to unify upsampling operators for image matting and three other dense prediction tasks. [30] introduces semantic classification information of the matting region and uses learnable weights and multi-class discriminators to revise the prediction results. [24] proposes a general matting framework, which is conducive to obtaining better results under guidance of different qualities and forms.
[23] further mines the information of the RGB map and the trimap and fuses the global information from these maps to obtain better alpha mattes. All of the above methods use a trimap as guidance. Some trimap-free methods can predict alpha mattes without using a trimap. However, the accuracy of these trimap-free methods still has a big gap compared to that of the trimap-guided ones [31, 32, 33, 34], indicating that the trimap could help the model to capture information efficiently.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multirow{2}{*}{Methods} & \multicolumn{3}{c}{MSE\(\downarrow\)} & \multicolumn{3}{c}{SAD\(\downarrow\)} \\
\cline{2-7}
 & TT & TP & TT+TP & TT & TP & TT+TP \\
\hline
IndexNet [6] & 22.87 & 8.59 & 13 & 110.3 & 18.08 & 45.8 \\
GCAMatting [7] & 15.89 & 6.2 & 9.1 & 85.72 & 13.68 & 35.3 \\
MGMatting [24] & 13.01 & 4.65 & 7.18 & 77.88 & 11.87 & 31.76 \\
\hline
TransMattingV1 & 7.49 & 3.40 & 4.58 & 59.37 & 10.35 & 24.96 \\
TransMattingV2 & **6.94** & **3.26** & **4.37** & **55.47** & **10.25** & **23.82** \\
\hline \hline
\end{tabular}
\end{table}
TABLE I: Performance on transparent totally (TT) and transparent partially (TP) objects on the Composition-1k test set. TT objects are those whose entire image is highly transparent, while TP denotes objects with a significant known foreground.

### _Vision Transformer_

The Transformer was first proposed in [35] to model long-range dependencies for machine translation and has demonstrated impressive performance on NLP tasks. Inspired by this, numerous attempts have been made to adapt Transformers to vision tasks, with promising results in fields such as image classification, object detection, semantic segmentation, etc. In particular, ViT [16] divides the input image into patches with a size of 16 \(\times\) 16 and feeds the patch sequences to the vanilla Transformer model. To help the training process and improve the performance, DeiT [17] proposes a teacher-student strategy, which includes a distillation token for the student to learn from the teacher. Later, Swin [18], PVT [36], Crossformer [37], and HVT [38] combine the Transformer with a pyramidal structure to decrease the number of patches progressively and obtain multi-scale feature maps. To reduce computing and memory complexity, Swin, HRFormer [39], and CrossFormer apply local-window self-attention in the Transformer, which also shows superior or comparable performance compared to counterpart CNNs. The powerful self-attention mechanism in the Transformer shows great advantages over CNNs by capturing global attention over the whole image. However, some researchers [40] argue that locality and globality are both essential for vision tasks. Therefore, various researchers have tried combining the locality of CNNs with the globality of Transformers to further improve performance. LocalViT [40] brings depth-wise convolutions to the vision Transformer to combine the self-attention mechanism with locality, and shows great improvement compared to pure Transformers, like DeiT, PVT, and TNT [41].

## 3 Matting Dataset

According to the transparency of the foregrounds, we can divide images into two types: 1) Transparent Partially (TP): there are significant foreground and uncertainty areas, and the foreground areas can provide information for the prediction of the uncertainty areas. For example, when the foreground is a human, the unknown regions are typically the hair or clothes.
2) Transparent Totally (TT): there are only minor or non-salient foreground areas, and the entire image is semi-transparent or highly transparent. Such images include glass, plastic bags, fog, water drops, etc.

As illustrated in Tab. II, we select four popular image matting datasets for comparison, including DAPM [26], Composition-1k [2], Distinctions-646 [42], and AIM-500 [15]. The DAPM dataset only consists of portraits, with no translucent or transparent objects. The Composition-1k dataset contains multiple categories, but most images are portraits and there are only 86 TT-type objects among them. The Distinctions-646 dataset also mainly consists of portraits and has a similar number (79) of TT objects. The same holds for the AIM-500 dataset, which contains only 76 (15.2%) TT-type images (corresponding to the Salient Transparent/Meticulous type and the Non-Salient type as defined in the original dataset). As we can see, transparent objects occupy only a small portion of the above datasets. This may be because it is much more difficult to label transparent objects than other objects, which has limited progress on transparent objects in the matting field. In this work, we propose the first large-scale dataset targeting various highly transparent objects, called Transparent-460. Our Transparent-460 dataset includes 460 high-quality manually-annotated alpha mattes, where 410 images are for training and 50 for testing. Furthermore, to the best of our knowledge, the resolution of our Transparent-460 is the highest (the average resolution is up to 3820 \(\times\) 3766) among all datasets with highly transparent objects. We believe this new matting dataset will greatly advance matting research on objects with massive transparent areas.

## 4 Methodology

### _Baseline Structure_

To extract both local and global features, we combine a CNN and the Transformer model as our baseline encoder, as illustrated in Fig. 1. Specifically, based on the ResNet34-UNet [7, 24], we only use the first two stages of ResNet34 as our CNN local extractor. Then, a stack of Swin Transformer blocks [18] is adopted as the Transformer encoder. The other components are the same as in the original ResNet34-UNet. It is worth noting that this Transformer-based baseline structure achieves performance comparable to current matting methods; a minimal sketch of the CNN local extractor is given below, and we introduce our main contributions in the following subsections.
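A minimal sketch of such a CNN local extractor, assuming that "the first two stages of ResNet34" means the stem plus its first two residual stages (an assumption for illustration, not the released implementation), is:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

resnet = resnet34(weights=None)  # ImageNet pre-trained weights would be loaded in practice

# stem plus layer1-layer2 of ResNet34 as the CNN local extractor
cnn_local_extractor = nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,  # stem, stride 4
    resnet.layer1,                                          # stage 1, stride 4
    resnet.layer2,                                          # stage 2, stride 8
)

x = torch.randn(1, 3, 512, 512)           # a 512x512 training crop
feat = cnn_local_extractor(x)             # (1, 128, 64, 64) local features
tokens = feat.flatten(2).transpose(1, 2)  # (1, 4096, 128) tokens for the Transformer encoder
```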
### _Tri-token_

Almost all SOTA methods [2, 23, 24, 30, 43, 44] use a trimap as a guide and directly concatenate the RGB image with the trimap as the model's input. However, the modalities of the RGB image and the trimap are quite different. The RGB image scales from 0 to 255 and shows fine low-level features like texture, color similarity, etc. The trimap includes three values containing high-level semantic information, like shape, location, etc. [23]. It is not suitable to use one extractor to extract features from these two modalities. Furthermore, since most deep models utilize weights pre-trained on the classification task (with three input channels) for backbone initialization, the concatenation of the RGB image and the trimap as input (four channels) will break the pre-trained weights and affect the performance. Thus, their direct concatenation is not the most efficient way to extract features from the trimap.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
Image Matting Dataset & Total Num & TT Num & Resolution \\
\hline
DAPM [26] & 2000 & 0 & 800\(\times\)600 \\
Composition-1k [2] & 481 & 86 & 1297\(\times\)1082 \\
Distinctions-646 [42] & 646 & 79 & 1272\(\times\)1565 \\
AIM-500 [15] & 500 & 76 & 1260\(\times\)1397 \\
Transparent-460 (Ours) & 460 & 460 & 3820\(\times\)3766 \\
\hline \hline
\end{tabular}
\end{table}
TABLE II: Comparison between different public matting datasets.

To the best of our knowledge, we are the first to attempt to explore different ways of harmonising the RGB image and the trimap rather than simply concatenating them. Inspired by the [cls] token in the Vision Transformer, we design a new tri-token (shown in Fig. 1), aiming to introduce the high-level semantic information directly into the deep features to replace the inefficient concatenation method. Given a vanilla \(Trimap\in\mathbb{R}^{H\times W}\), we generate three **learnable** tokens (denoted as \(Token_{i}\), \(i=\{0,1,2\}\)) with different initializations to represent the known foreground, known background, and unknown areas, respectively. Every token is a 1D vector, that is, \(Token_{i}\in\mathbb{R}^{C}\). Then we replace every pixel in the trimap with the corresponding tri-token values to generate the tri-token map, formulated as:

\[Trimap[Trimap==i]=Token_{i},\quad i=\{0,1,2\} \tag{2}\]

In this manner, the trimap is converted into a more flexible form. With an additional channel dimension, the tri-token can easily introduce semantic and location information into high-level abstract feature maps. Besides, the learnable tri-tokens have the ability to learn to align the trimap space and the image representation space. The following sections introduce how to utilize the tri-token to guide both Transformer and convolutional models.

### _Tri-token Guided Image Matting Network_

#### 4.3.1 Tri-token Guided Transformer Block

Due to the large proportion of uncertainty areas in images of transparent objects, global connectivity is much more important for mining features from distant known patches. A CNN does not have a large enough receptive field to cover the whole foreground object [14], which leads to poor estimation of pixels outside its receptive field. In contrast, the receptive field of the Transformer extends to the whole image through the self-attention mechanism. Thus, we adopt the Transformer model here to extract long-range features of transparent objects. The Transformer model consists of multi-head self-attention (MHSA) and multi-layer perceptron (MLP) modules. The self-attention mechanism can be thought of as a mapping between a query and a collection of key-value pairs. The output is a weighted sum of the values, and the weights are assigned by the compatibility function between the query and the relevant key. This can be implemented by Scaled Dot-Product Attention [35], in which a \(softmax\) function is used to activate the dot products of the query and all keys to obtain the weights. MHSA means that more than one self-attention operation is performed in parallel.
Like [37, 39, 18], we use non-overlapping windows of size \(M\times M\) to divide the feature maps, and the MHSA is performed within each window. The formulations of the vanilla attention and our tri-token attention in a specific window are as follows:

\[Attention(Q,K,V)=\mathcal{S}(QK^{T}/\sqrt{d})V \tag{3}\]

\[Tri\text{-}token\ Attention(Q,K,V)=\mathcal{S}((Q+Tri\text{-}token)K^{T}/\sqrt{d})V \tag{4}\]

where \(Q,K,V\in\mathbb{R}^{M^{2}\times d}\) represent the query, key, and value in the attention mechanism, respectively, \(d\) is the query/key dimension, and \(\mathcal{S}\) denotes the \(softmax\) function. In the tri-token attention formulation, \(Q\), \(K\), and \(V\) are the same as in the standard self-attention. The \(Tri\text{-}token\) is our proposed learnable _trimap_, which is added to the query to form a new tri-token guided query. In this way, our tri-token attention mechanism can selectively aggregate contexts and evaluate which regions should be paid more attention to, under the guidance of our learnable tri-tokens. Combining self-attention with the tri-token lets the model focus on the most valuable regions by considering the relationship among the known foreground, known background, and uncertainty areas, and finally achieves the best performance. We replace the vanilla Transformer block with our Tri-token Guided Transformer Block (TGTB) every five blocks in the Transformer encoder.

Fig. 1: The structure of our TransMatting. Unlike vanilla matting methods that concatenate the trimap with the image, we feed the RGB image into the CNN local extractor and introduce the tri-token directly into the extracted deep features. We design two ways to introduce the tri-token into both convolutional and Transformer models. To better model transparent objects with few known foregrounds, the TGTB module is proposed to further extract long-range features, and the MGF module is responsible for fusing multi-level features with global guidance.

```
import cv2
import torch
import torch.nn as nn

trimap = cv2.imread('trimap.png', 0)  # [H, W]
H, W = trimap.shape
trimap = torch.from_numpy(trimap)

# convert to tri-token (values follow the (0, 1, 2) initialization of the
# implementation details: background = 0, uncertainty = 1, foreground = 2)
dim = 64
background_token = nn.Parameter(torch.zeros(1, 1, dim))
uncertain_token = nn.Parameter(torch.zeros(1, 1, dim) + 1)
foreground_token = nn.Parameter(torch.zeros(1, 1, dim) + 2)

# build the tri-token map for convenience
tri_token_map = torch.zeros(H, W, dim)
tri_token_map[trimap == 0] = background_token.view(-1)
tri_token_map[trimap == 128] = uncertain_token.view(-1)
tri_token_map[trimap == 255] = foreground_token.view(-1)
```
**Algorithm 1** Pseudo code to convert the traditional trimap to our proposed tri-token in PyTorch style.
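A minimal sketch of the tri-token attention of Eq. (4), assuming a single head and a single window for brevity (the actual TGTB uses windowed MHSA), is:

```python
import torch
import torch.nn.functional as F

def tri_token_attention(q, k, v, tri_token):
    """Single-head, single-window version of Eq. (4).

    q, k, v:   (M*M, d) query/key/value of one M x M window
    tri_token: (M*M, d) tri-token vectors gathered from the tri-token map
               at the same spatial positions as the window
    """
    d = q.shape[-1]
    attn = (q + tri_token) @ k.transpose(-2, -1) / d ** 0.5  # (M*M, M*M)
    attn = F.softmax(attn, dim=-1)
    return attn @ v

# toy usage: a 7x7 window with 64-dim tokens
M, d = 7, 64
q, k, v = (torch.randn(M * M, d) for _ in range(3))
tri = torch.randn(M * M, d)  # e.g. a window-partitioned slice of tri_token_map
out = tri_token_attention(q, k, v, tri)  # (49, 64)
```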
#### 4.3.2 Tri-token Guided Convolutional Network

As convolutional networks are still commonly used for image matting, it is quite meaningful to extend our tri-token to boost current CNN-based matting methods. Different from the Transformer model, which is based on the self-attention mechanism, convolutional networks mainly consist of a stack of convolutional layers, with no ready-made attention module into which the tri-token information could be introduced. To keep our modification general and simple, we employ the additive operation (formulated in Eq. 5) for convolutional models. Specifically, after some convolutional layers, a feature map is extracted from the input image. Every pixel vector in the feature map represents the semantic features of a small input image patch. According to the trimap, we add the corresponding tri-token to each pixel vector to introduce high-level tri-token information into the feature map. In this way, the tri-token can guide the network to distinguish foreground, background, and uncertain areas and to pay more attention to where it needs to focus. The \(Tri\text{-}token\text{-}map\) in Eq. 5 can be generated efficiently with Alg. 1.

\[x=Conv_{2}(Conv_{1}(x)+Tri\text{-}token\text{-}map) \tag{5}\]

### _Multi-scale Global-guided Fusion Module_

In the multi-scale feature pyramid structure, deep features contain more global information, while shallow features carry rich local information like texture, color similarity, etc. Fusing these features is vital for accurately predicting alpha mattes of highly transparent objects [33]. Although a direct sum operation can realize feature fusion, the details in the shallow features may attenuate the impact of the advanced semantics, resulting in some subtle regions missing [33]. To address this issue, we propose a Multi-scale Global-guided Fusion (MGF) module in the decoder (see Fig. 1 for details), with both the non-background information and the advanced semantic features as guidance, to effectively fuse the high-level semantic information with the lower-level one.

Specifically, we denote three adjacent features from shallow to deep as \(T_{n-1}\), \(T_{n}\), and \(T_{n+1}\). The \(T_{n-1}\) is first downsampled, then the Hadamard product is employed between the non-background mask and \(T_{n-1}\) to extract the low-level features of the non-background, which helps to reduce the impact of complex backgrounds. This guides the network to pay more attention to the foreground and unknown areas. After that, \(T_{n-1}\) is concatenated with \(T_{n}\), and a convolution layer is applied to align the channels of the fused features. We mark this feature as \(T_{f}\). For \(T_{n+1}\), we first perform global average pooling to generate channel-wise statistics and then use two fully connected (FC) layers to squeeze the channels. As shown in Fig. 1, the features output from the two FC layers are denoted as \(\gamma\) and \(\beta\), separately. To fully capture channel-wise dependencies, we apply a sigmoid function to activate \(\gamma\) and perform broadcast multiplication with \(T_{f}\) for channel re-weighting. After that, broadcast addition is performed between the channel-weighted feature and \(\beta\). A convolution layer is used to fuse information from different groups. Notably, a skip connection from \(T_{n}\) is employed to obtain the final fused features of the MGF.

In short, considering that fusing low-level features directly may negatively impact the advanced semantics [33], two techniques are proposed here. Firstly, the non-background mask is introduced into the fusion process to filter out the complex background information and further help to concentrate more attention on the foreground and unknown areas. Secondly, the global channel-wise attention from higher-level features is used for re-weighting and enhancing the important information in the fused features.

### _Loss Function_

Following [24], we use three kinds of losses, including the alpha loss (\(\mathcal{L}_{\alpha}\)), the compositing loss [2] (\(\mathcal{L}_{comp}\)), and the Laplacian loss [45] (\(\mathcal{L}_{lap}\)). As formulated below, the weights are set to 0.4, 1.2, and 0.16, respectively.

\[\mathcal{L}_{final}=0.4*\mathcal{L}_{alpha}+1.2*\mathcal{L}_{comp}+0.16*\mathcal{L}_{lap} \tag{6}\]
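A minimal sketch of Eq. (6), assuming common instantiations of the three losses (an \(L_{1}\) alpha loss, a compositing loss built from Eq. (1) as in [2], and an \(L_{1}\) loss over a small Laplacian pyramid following [45]; the exact definitions may differ from the released implementation), is:

```python
import torch
import torch.nn.functional as F

def laplacian_pyramid(x, levels=3):
    """Simple Laplacian pyramid via average pooling (an approximation)."""
    pyr = []
    for _ in range(levels):
        down = F.avg_pool2d(x, 2)
        up = F.interpolate(down, size=x.shape[-2:], mode='bilinear', align_corners=False)
        pyr.append(x - up)   # band-pass residual at this level
        x = down
    pyr.append(x)            # low-frequency remainder
    return pyr

def matting_loss(alpha_pred, alpha_gt, fg, bg, image):
    """Weighted sum of Eq. (6); tensors are (B, 1 or 3, H, W) in [0, 1]."""
    l_alpha = F.l1_loss(alpha_pred, alpha_gt)
    comp = alpha_pred * fg + (1 - alpha_pred) * bg   # recomposite via Eq. (1)
    l_comp = F.l1_loss(comp, image)
    l_lap = sum(F.l1_loss(p, g) for p, g in
                zip(laplacian_pyramid(alpha_pred), laplacian_pyramid(alpha_gt)))
    return 0.4 * l_alpha + 1.2 * l_comp + 0.16 * l_lap
```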
## 5 Experiments

### _Dataset_

**Composition-1k**[2] contains 493 and 50 unique foreground objects as training and test sets, respectively. Every foreground object is composited with 100 background images from COCO [46] and 20 from Pascal VOC [47]. As a result, there are 49,300 images for training and 1,000 images for testing.

**Distinctions-646**[42] comprises 646 distinct foreground objects. Similar to Composition-1k, 50 objects are set aside as the test set. Following the same composition rule, there are 59,600 and 1,000 images for training and testing, respectively.

**AIM-500**[15] is a real-world benchmark, including many types of objects, like humans, glass, grids, etc. It contains 500 natural images for testing the performance of real-world matting methods.

**Our Transparent-460** mainly consists of transparent and non-salient objects as the foreground, like water drops, jellyfish, plastic bags, glass, crystals, etc. We collect 460 high-resolution images and carefully annotate them with Photoshop. Considering that the transparent objects are very meticulous, we keep the original resolution of all collected images, up to 3820 \(\times\) 3766 on average. To the best of our knowledge, this is the first transparent-object matting dataset at such a high resolution. Some samples are illustrated in Fig. 2.

Fig. 2: Samples from our Transparent-460 dataset. The foreground objects are highly transparent and occupy a large area of the image, which is very challenging for current image matting methods. All images are high-resolution. Thus, it is better to zoom in to find subtle details.

### _Evaluation Metrics_

Following [44, 23, 6, 24], we use four metrics for evaluation, including the Sum of Absolute Differences (SAD), Mean Squared Error (MSE), Gradient error (Grad.), and Connectivity error (Conn.). Note that the unit of the MSE value is set to 1e-3 for easy reading.

### _Implementation Details_

We use PyTorch [48] to implement our proposed method. All the experiments are trained for 200,000 iterations. We initialize our network with ImageNet [49] pre-trained weights and initialize the background, uncertainty, and foreground tokens in the tri-token with values of 0, 1, and 2, respectively. The ablation experiments in Tab. III, IV, V, and VI are done with 2 NVIDIA Tesla V100 GPUs with a batch size of 32. Moreover, to compare our method with the existing SOTA methods, we use a batch size of 64 with 4 NVIDIA Tesla V100 GPUs to train our proposed method in Tab. VII, VIII, and IX. The Adam optimizer is utilized, and the initial learning rate is set to 1e-4 with the same learning rate decay strategy as in [24, 50]. For a fair comparison, we follow the same data augmentation as [7], like random crop, rotation, scaling, shearing, etc. Moreover, trimaps for training are generated from the alpha mattes using dilation and erosion with random kernel sizes from 1 to 30. Finally, we crop 512\(\times\)512 patches centered on the unknown area of the alpha matte and composite them with backgrounds from COCO. The same settings are applied to Composition-1k, Distinctions-646, and our collected Transparent-460.
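A minimal sketch of this trimap synthesis, assuming simple binarization thresholds for the known regions (the thresholds are an assumption for illustration), is:

```python
import cv2
import numpy as np

def random_trimap(alpha, rng=np.random):
    """Generate a training trimap from an alpha matte (uint8, values in [0, 255])."""
    k_d = rng.randint(1, 31)                   # random dilation kernel size, 1..30
    k_e = rng.randint(1, 31)                   # random erosion kernel size, 1..30
    fg_mask = (alpha >= 255).astype(np.uint8)  # fully opaque pixels (assumed threshold)
    any_fg = (alpha > 0).astype(np.uint8)      # any non-background pixel

    eroded = cv2.erode(fg_mask, np.ones((k_e, k_e), np.uint8))
    dilated = cv2.dilate(any_fg, np.ones((k_d, k_d), np.uint8))

    trimap = np.zeros_like(alpha)              # 0: known background
    trimap[dilated == 1] = 128                 # 128: unknown band
    trimap[eroded == 1] = 255                  # 255: known foreground
    return trimap
```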
### _Ablation Study_

To evaluate the different designs of our newly proposed tri-token and MGF module, we conduct several ablation studies on the Composition-1k test set.

**Evaluate the effectiveness of our proposed modules.** The quantitative results in terms of SAD, MSE, Grad., and Conn. with and without our proposed tri-token and MGF module are illustrated in Tab. III. As we can see, with the tri-token introduced in the Transformer (TGTB), the four metrics decrease from 29.14, 6.34, 12.06, and 25.21 to 27.45, 5.66, 11.77, and 24.30, respectively. Combined with introducing the tri-token into the CNN local extractor, the prediction errors further decrease to 26.40, 5.22, 11.03, and 22.47, respectively. This indicates that, for both Transformer and convolutional networks, our redesigned tri-token is more effective in helping the model distill the advanced semantics and pay attention to critical areas than the traditional concatenation with the input image. Compared with the baseline model (the first row), the addition of the MGF module alone achieves a similar improvement, decreasing the four metrics by 1.93, 0.77, 0.83, and 1.96. This indicates that our proposed multi-scale feature fusion strategy can also help the decoder harmonize local and global features to guide information integration. On the basis of the MGF module, adding the tri-token to either the convolutional or the Transformer module further improves the performance. When combining the tri-token on both networks with the MGF module, the model achieves the best performance of 26.36, 5.08, 10.24, and 21.68 in terms of SAD, MSE, Grad., and Conn., improving by 2.78, 1.26, 1.82, and 3.53 over the baseline strategy and showing the effectiveness of our proposed tri-token and MGF module.

**Determine different initialization methods for tri-token.** Different from the vanilla trimap, our proposed tri-token is learnable and updates itself as the model trains. Considering that initialization is essential for deep learning methods, we further explore different approaches to initialize our tri-token: pre-defined values for every token, or totally random values. For the pre-defined manner, we initialize the background, uncertainty, and foreground tokens with values \(a\), \(b\), and \(c\), respectively, denoted as (\(a\), \(b\), \(c\)). We conduct experiments on three initialization approaches: (-1, 0, 1), (0, 1, 2), and random initialization. The results are illustrated in Tab. IV. As we can see, the proposed tri-token is robust to different initialization methods. Among them, the (0, 1, 2) initialization performs best for both the CNN-based MGMatting and our Transformer-based TransMatting, but the improvement is limited, indicating that the initialization method is not critical for the tri-token updating.

**Determine where to introduce the tri-token in our TransMatting.** The tri-token is proposed to introduce high-level semantic and position information into the deep model. The shallow features in deep models have a high resolution, which preserves more position information, while the deeper layers extract more abstract semantic features, which are suitable for mutual learning with the tri-token. To determine which position is more suitable for introducing the tri-token, we conduct experiments with different positions in the Transformer encoder of our TransMatting model. There are four stages in the Transformer encoder. We adopt the stage number \(i\) (\(i\in\{1,2,3,4\}\)) to represent the insertion position. For example, position 3 indicates that the tri-token is introduced at the beginning of stage 3. The experimental results on Composition-1k are illustrated in Tab. V. By comparing the performance of the different introduction positions, we conclude that introducing the tri-token at multiple positions achieves the best performance.
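A minimal sketch of such stage-wise injection, assuming toy stages in place of the actual encoder blocks (the module and shapes are placeholders for illustration), is:

```python
import torch
import torch.nn as nn

class StagewiseTriTokenEncoder(nn.Module):
    """Toy 4-stage encoder that adds a tri-token map before chosen stages."""

    def __init__(self, dim=64, inject_at=(3, 4)):
        super().__init__()
        self.inject_at = set(inject_at)  # e.g. {4} or {1, 2, 3, 4} as in Tab. V/VI
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
            for _ in range(4)
        )

    def forward(self, x, tri_token_map):
        # x, tri_token_map: (B, dim, H, W); real stages would also downsample
        for i, stage in enumerate(self.stages, start=1):
            if i in self.inject_at:
                x = x + tri_token_map   # additive guidance of Eq. (5)
            x = stage(x)
        return x

enc = StagewiseTriTokenEncoder(inject_at=(4,))  # best single position in Tab. VI
out = enc(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```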
**Determine where to introduce tri-token in convolutional networks.** As the manner of introducing the tri-token into convolutional models differs from the Transformer, we conduct experiments on Composition-1k to find the best introduction position for convolutional networks. Since our TransMatting only has a few convolutional layers in its CNN local extractor, we conduct these experiments on two convolutional SOTA methods: GCAMatting and MGMatting.

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
Methods & Position & SAD \(\downarrow\) & MSE \(\downarrow\) & Grad. \(\downarrow\) & Conn. \(\downarrow\) \\
\hline
\multirow{7}{*}{GCAMatting [7]} & - & 35.3 & 9.1 & 16.9 & 32.5 \\
 & 1 & 33.71 & 7.75 & 15.80 & 30.28 \\
 & 2 & 33.79 & 8.09 & 15.51 & 30.16 \\
 & 3 & 32.72 & 7.48 & 14.05 & 29.19 \\
 & 4 & **31.32** & **7.02** & **13.24** & **27.67** \\
 & 3,4 & 32.41 & 7.45 & 14.62 & 29.08 \\
 & 1,2,3,4 & 32.38 & 7.52 & 14.92 & 28.77 \\
\hline
\multirow{7}{*}{MGMatting [24]} & - & 31.5 & 6.8 & 13.5 & 27.3 \\
 & 1 & 28.34 & 5.45 & 11.08 & 24.83 \\
 & 2 & 28.85 & 5.52 & 11.43 & 24.90 \\
 & 3 & **28.31** & 5.52 & 11.37 & 24.73 \\
 & 4 & 28.32 & **5.41** & **10.96** & **24.62** \\
 & 3,4 & 28.82 & 5.61 & 11.39 & 25.12 \\
 & 1,2,3,4 & 28.59 & 5.59 & 11.37 & 24.99 \\
\hline \hline
\end{tabular}
\end{table}
TABLE VI: Quantitative results on the Composition-1k test set with different positions to introduce the tri-tokens into convolutional networks on SOTA CNN methods.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multicolumn{2}{c}{Tri-token} & \multirow{2}{*}{MGF} & \multirow{2}{*}{SAD \(\downarrow\)} & \multirow{2}{*}{MSE \(\downarrow\)} & \multirow{2}{*}{Grad. \(\downarrow\)} & \multirow{2}{*}{Conn. \(\downarrow\)} \\
\cline{1-2}
CNN & Transformer & & & & & \\
\hline
 & & & 29.14 & 6.34 & 12.06 & 25.21 \\
 & ✓ & & 27.45 & 5.66 & 11.77 & 24.30 \\
✓ & ✓ & & 26.40 & 5.22 & 11.03 & 22.47 \\
 & & ✓ & 27.21 & 5.57 & 11.23 & 23.25 \\
 & ✓ & ✓ & 26.83 & 5.22 & 10.62 & 22.14 \\
✓ & & ✓ & 26.43 & 5.24 & 10.97 & 22.41 \\
✓ & ✓ & ✓ & **26.36** & **5.08** & **10.24** & **21.68** \\
\hline \hline
\end{tabular}
\end{table}
TABLE III: The effectiveness of our proposed tri-token strategy and MGF module on the Composition-1k test set.

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
Method & Init Method & SAD \(\downarrow\) & MSE \(\downarrow\) & Grad. \(\downarrow\) & Conn. \(\downarrow\) \\
\hline
\multirow{3}{*}{MGMatting} & (-1, 0, 1) & 28.56 & 5.53 & 11.61 & 24.98 \\
 & (0, 1, 2) & **28.34** & **5.45** & **10.96** & **24.62** \\
 & random & **28.34** & 5.51 & 11.12 & 24.70 \\
\hline
\multirow{3}{*}{TransMattingV2} & (-1, 0, 1) & 26.81 & 5.16 & 11.24 & 23.11 \\
 & (0, 1, 2) & **26.36** & **5.08** & **10.24** & **21.68** \\
 & random & 26.38 & 5.09 & 10.68 & 22.50 \\
\hline \hline
\end{tabular}
\end{table}
TABLE IV: Quantitative results on the Composition-1k test set with different initialization methods of the tri-token.
The results are illustrated in Tab. VI. The position "-" indicates that no tri-token is introduced and the trimap is concatenated to the image. As we can see, replacing the trimap with our tri-token achieves significant improvements on all metrics at any position, indicating that our tri-token is a highly flexible and efficient design. We can also find that the performance gets better as the introduction position goes deeper. The best performance is achieved when the tri-token is introduced at stage four, outperforming the trimap strategy by about 3-4 SAD. This indicates that the information in the trimap is indeed high-level and more suitable for interacting with deep features in convolutional models. However, unlike the Transformer model, introducing the tri-token at multiple positions decreases the performance of GCAMatting. The SAD and MSE increase from 31.32 and 7.02 to 32.41 and 7.45, respectively, when the tri-token is introduced to both stage 3 and stage 4, and the performance decreases further to 32.38 and 7.52 when it is introduced to all four stages. The same trends are observed for MGMatting. We conjecture that this is caused by the different introduction manner between convolutional models and Transformer ones: the straightforward addition operation makes it hard for the tri-token to simultaneously suit different features from different stages.

### _Comparison with Prior Works_

To evaluate the performance of our proposed method, we compare it with current SOTA image matting models on three synthetic image matting datasets (Composition-1k, Distinctions-646, and our collected Transparent-460) and one real-world dataset (AIM-500).

#### 5.5.1 Experiments on Synthetic Datasets

**Testing on Composition-1k.** We show quantitative and qualitative results in Tab. VII and Fig. 3, respectively. Without any test-time augmentation, our proposed TransMattingV1 model outperforms other SOTA methods on all four evaluation metrics by only training on the Composition-1k training set. As illustrated in Tab. VII, our model decreases the MSE and Grad. substantially: from 5.2 and 10.6 to 4.58 and 9.72, respectively, indicating the effectiveness of our TransMattingV1. By further introducing the tri-token into the CNN local extractor, our TransMattingV2 performs better than TransMattingV1, especially on SAD and Conn., improving by 1.14 and 1.3, respectively. Fig. 3 visualizes the qualitative comparison of two cases from the Composition-1k test set.
As we can see in the first row, the color of the dewdrop is similar to the background, which misleads all previous methods. In contrast, thanks to the large receptive field and the tri-token guided design, our proposed model learns the image context and object structure effectively, contributing to sophisticated performance in such a challenging region. Besides, our proposed model can also accurately estimate TT-type objects with complex backgrounds. For example, in the second row of Fig. 3, the background is so complicated that previous methods can hardly recover the details. It is worth noting that our TransMattingV2 achieves higher-quality alpha mattes than TransMattingV1, indicating that our tri-token helps to distinguish the foreground and background in deep features.

To evaluate the universality of our proposed tri-token, we introduce it to three current SOTA methods: DIM [2], GCAMatting [7], and MGMatting [24]. We follow [6] and drop the matting refinement stage in DIM for simplicity and fair comparison. The results are illustrated in Tab. VII. As we can see, our proposed tri-token boosts the performance of all three SOTA methods by a large margin. Especially for the recently proposed GCAMatting and MGMatting, replacing the traditional concatenation with our proposed tri-token decreases the SAD by 3.98 and 3.18, respectively. This proves the effectiveness of the proposed tri-token strategy in existing SOTA methods and indicates a great potential for replacing the concatenation strategy with the proposed tri-token in a wide range of image matting methods.

As illustrated in Tab. I, to further verify our method in boosting the performance on TT-type objects, we analyze the two types of objects separately. We find that our TransMattingV1 achieves 42.4% and 23.7% improvements in terms of MSE and SAD on TT-type objects compared with MGMatting, which is the key factor in improving the overall performance. Notably, our TransMattingV2 further improves the MSE and SAD by 7.3% and 6.5% on TT-type objects compared to TransMattingV1. In addition, our model also achieves a good improvement on TP-type objects, although not as significant as on TT-type. Thus, we can conclude that our proposed method can model TT-type objects with long-range features, and our tri-token design assists our method in better understanding the semantic information in the trimap and integrating it into deep features as well as long-range features to further improve performance.

**Testing on Distinctions-646.** Distinctions-646 comprises more distinct foreground images than Composition-1k. Tab. VIII compares the performance of our TransMatting model with other SOTA methods on Distinctions-646. For a fair comparison, we follow the whole-inference protocol of [24], [42] and calculate the metrics on the whole image. Without any additional tuning, our TransMattingV1 outperforms all the SOTA methods. Similarly, our TransMattingV2 further improves by introducing the tri-token into the convolutional layers.

\begin{table}
\begin{tabular}{l|c c c c}
\hline \hline
Methods & SAD \(\downarrow\) & MSE \(\downarrow\) & Grad. \(\downarrow\) & Conn. \(\downarrow\) \\
\hline
AlphaGAN [51] & 52.4 & 30 & 38 & 53 \\
DIM [2] & 50.4 & 14 & 31.0 & 50.8 \\
IndexNet [6] & 45.8 & 13 & 25.9 & 43.7 \\
AdaMatting [44] & 41.7 & 10 & 16.8 & - \\
ContextNet [45] & 35.8 & 8.2 & 17.3 & 33.2 \\
TIMI-Net [23] & 29.08 & 6.0 & 12.9 & 27.29 \\
FBAMatting [52]\({}^{\dagger}\) & 25.8 & 5.2 & 10.6 & 20.8 \\
\hline
DIM* [2] & 54.6 & 17 & 36.7 & 55.3 \\
DIM* + Tri-token & **53.3** & **16.9** & **33.54** & **53.64** \\
\hline
GCAMatting [7] & 35.3 & 9.1 & 16.9 & 32.5 \\
GCAMatting + Tri-token & **31.32** & **7.02** & **13.24** & **27.67** \\
\hline
MGMatting [24] & 31.5 & 6.8 & 13.5 & 27.3 \\
MGMatting + Tri-token & **28.32** & **5.41** & **10.96** & **24.62** \\
\hline
TransMattingV1 & 24.96 & 4.58 & 9.72 & 20.16 \\
TransMattingV2 & **23.82** & **4.37** & **9.43** & **18.86** \\
\hline \hline
\end{tabular}
\end{table}
TABLE VII: The quantitative results on the Composition-1k test set. \({}^{\dagger}\) denotes results with test-time augmentation. Following IndexNet, the matting refinement stage is not applied in DIM* for simplicity and fair comparison.
We also illustrate some visual comparisons in Fig. 4, especially for the ones without distinct foregrounds, like grids. As we expected, our proposed TransMatting model predicts more precise alpha values of highly transparent objects compared with other SOTA methods.

Fig. 3: Visual comparison of our TransMatting against SOTA methods on the Composition-1k test set.

Fig. 4: Visual comparison of our TransMatting against SOTA methods on the Distinctions-646 test set.

\begin{table}
\begin{tabular}{c|c c c c}
\hline \hline
Methods & SAD\(\downarrow\) & MSE\(\downarrow\) & Grad.\(\downarrow\) & Conn.\(\downarrow\) \\
\hline
KNNMatting [1] & 116.68 & 25 & 103.15 & 121.45 \\
DIM [2] & 47.56 & 9 & 43.29 & 55.90 \\
HAttMatting [42] & 48.98 & 9 & 41.57 & 49.93 \\
GCAMatting [7] & 27.43 & 4.8 & 18.7 & 21.86 \\
MGMatting [24] & 33.24 & 4.51 & 20.31 & 25.49 \\
\hline
TransMattingV1 & 25.65 & 3.4 & 16.08 & 21.45 \\
TransMattingV2 & **24.19** & **3.0** & **14.42** & **19.14** \\
\hline \hline
\end{tabular}
\end{table}
TABLE VIII: The quantitative results on the Distinctions-646 test set.

**Testing on our Transparent-460.** Based on their released code, we train the IndexNet and MGMatting methods on our dataset and compare them with ours in Tab. IX. Our Transparent-460 dataset mainly focuses on transparent and non-salient foregrounds, which are very difficult for existing image matting methods. Surprisingly, as illustrated in Tab. IX, our TransMattingV1 achieves a promising result with only 4.02 MSE and surpasses the other methods by a large margin on SAD, Grad., and Conn. In addition, our TransMattingV2 improves further over TransMattingV1, especially on Conn. Furthermore, to evaluate the generalization performance of our model, we train TransMatting on the Composition-1k training set and directly test it on the Transparent-460 test set. The results are shown in Tab. X. Thanks to the large receptive field and the well-designed multi-scale fusion module, our model reduces the SAD, MSE, and Conn. by more than half. As shown in Fig. 5, we visualize the comparison results on two representative
TT-type objects, a plastic bag and a glass bottle. We can see that our TransMatting produces more sophisticated and accurate alpha mattes than the other methods. In short, both the quantitative and the generalization comparisons prove the effectiveness and superiority of our proposed method.

\begin{table}
\begin{tabular}{c|c c c c}
\hline \hline
Methods & SAD\(\downarrow\) & MSE\(\downarrow\) & Grad.\(\downarrow\) & Conn.\(\downarrow\) \\
\hline
IndexNet [6] & 573.09 & 112.53 & 140.76 & 327.97 \\
MGMatting [24] & 111.92 & 6.33 & 25.67 & 103.81 \\
\hline
TransMattingV1 & 88.34 & **4.02** & 20.99 & 82.56 \\
TransMattingV2 & **87.26** & 4.06 & **20.86** & **80.6** \\
\hline \hline
\end{tabular}
\end{table}
TABLE IX: The quantitative results on our proposed Transparent-460 test set.

\begin{table}
\begin{tabular}{c|c c c c}
\hline \hline
Methods & SAD\(\downarrow\) & MSE\(\downarrow\) & Grad.\(\downarrow\) & Conn.\(\downarrow\) \\
\hline
DIM [2] & 351.98 & 53 & 151.37 & 292.04 \\
IndexNet [6] & 434.14 & 74.73 & 124.98 & 368.48 \\
MGMatting [24] & 344.65 & 57.25 & 74.54 & 282.79 \\
TIMI-Net [23] & 328.08 & 44.2 & 142.11 & 289.79 \\
\hline
TransMattingV1 & 192.36 & 20.96 & 41.8 & **158.37** \\
TransMattingV2 & **181.94** & **18.37** & **40.99** & 159.53 \\
\hline \hline
\end{tabular}
\end{table}
TABLE X: Generalization results on our proposed Transparent-460 test set (models trained on the Composition-1k training set only).

Fig. 5: Visual comparison of our TransMatting against SOTA methods on our Transparent-460 test set.

#### 5.5.2 Experiments on Real-World Dataset

**Generalization on AIM-500.** Besides synthetic benchmarks, we also validate our method on real-world images, which are more challenging than synthetic ones. Specifically, we train the network on the synthetic Composition-1k training set and directly evaluate it on AIM-500. The results are shown in Tab. XI. As we can see, without any extra data augmentation or fine-tuning, our model surpasses the other SOTA methods by a large margin. The performance of TransMattingV2 is better than that of TransMattingV1, thanks to the tri-token introduced in the convolutional network. Furthermore, AIM-500 also contains some real-world transparent objects, like glass, fire, plastic bags, etc. Although other methods fail to process such difficult cases, our model still shows superior generalization performance on them, which further proves the effectiveness of our model on transparent objects.

\begin{table}
\begin{tabular}{c|c c c c}
\hline \hline
Methods & SAD\(\downarrow\) & MSE\(\downarrow\) & Grad.\(\downarrow\) & Conn.\(\downarrow\) \\
\hline
IndexNet [6] & 28.49 & 28.8 & 18.15 & 27.95 \\
CA [45] & 26.33 & 26.6 & 18.89 & 25.05 \\
CA+DA [45] & 32.15 & 38.8 & 30.25 & 31.00 \\
GCAMatting [7] & 35.10 & 38.9 & 25.67 & 35.48 \\
\(A^{2}U\) [53] & 30.38 & 30.7 & 22.60 & 30.69 \\
SIM [30] & 27.05 & 31.1 & 23.68 & 27.08 \\
\hline
TransMattingV1 & 16.55 & 14.06 & 12.35 & 15.61 \\
TransMattingV2 & **15.70** & **12.20** & **11.36** & **15.05** \\
\hline \hline
\end{tabular}
\end{table}
TABLE XI: The quantitative results on the real-world AIM-500 benchmark.

## 6 Analysis of the Tri-token

The tri-token is proposed to effectively introduce the semantic and position information of the trimap into deep features. We hypothesize that this helps the model find where it should pay more attention. In Fig. 6, we visualize the attention maps of the self-attention mechanism with respect to the red-marked point. As the red point belongs to the transparent glass cup, the pixels belonging to the cup should have a high weight.
However, as illustrated in Fig. 6 (b), many background pixels are highly activated with the concatenation strategy. With our tri-token (Fig. 6 (c)), in contrast, the background is suppressed and the model concentrates mainly on the foreground, indicating that our tri-token can efficiently guide the model to distinguish different areas. To further demonstrate the effectiveness of the tri-token, we replace the token at the red point with background values. As we can see from Fig. 6 (d), the replaced tri-token misleads the model into focusing on the background pixels with a very high response, meaning that the model relies on the guidance of our tri-token to distinguish foreground and background, while the traditional concatenation loses much of this information in deep layers.

Fig. 6: Visualization of attention maps (\(QK^{T}\)) of the self-attention mechanism. The color indicates the attention weight between the red point and the other image pixels. (**a**) is the input image; (**b**) and (**c**) represent the attention maps of the traditional concatenation strategy and our tri-token manner, respectively. In (**d**), the tri-token value at the red point is replaced from uncertainty to background, and the model is deceived into paying attention to the background, indicating the guiding effect of our tri-token.

## 7 Conclusion

In this paper, we present a Transformer-based network for image matting (TransMatting) to mine long-range features of foreground objects, especially transparent and non-salient ones. A multi-scale global-guided fusion (MGF) module is proposed to take the global information as a guide for fusing multi-scale features and better modelling the uncertainty regions of transparent objects. We also collect a high-resolution matting dataset of transparent objects to serve as a test bed for future research. Furthermore, we redesign the trimap as a unified tri-token representation that directly introduces semantic information into deep features, instead of the traditional concatenation. Various experiments demonstrate that it boosts both Transformer and convolutional networks, indicating that the proposed tri-token has the potential to become a new paradigm for image matting.
2301.08222
Damped harmonic oscillator revisited: the fastest route to equilibrium
Theoretically, solutions of the damped harmonic oscillator asymptotically approach equilibrium, i.e., the zero energy state, without ever reaching it exactly, and the critically damped solution approaches equilibrium faster than the underdamped or the overdamped solution. Experimentally, the systems described by this model reach equilibrium when the system's energy has dropped below some threshold corresponding to the energy resolution of the measuring apparatus. We show that one can (almost) always find an optimal underdamped solution that will reach this energy threshold sooner than all other underdamped solutions, as well as the critically damped solution, no matter how small this threshold is. We also comment on one exception to this for a particular type of initial conditions, when a specific overdamped solution reaches the equilibrium state sooner than all other solutions. We confirm some of our findings experimentally.
Karlo Lelas, Nikola Poljak, Dario Jukić
2023-01-19T18:33:49Z
http://arxiv.org/abs/2301.08222v2
# Damped harmonic oscillator revisited: the fastest route to equilibrium

###### Abstract

It is generally known that the solutions of the damped harmonic oscillator asymptotically approach equilibrium, i.e., the zero energy state, without ever reaching it exactly, and that the solution of the critical damping regime approaches equilibrium faster than the underdamped or the overdamped solution. Experimentally, the systems described by this model reach an equilibrium state which is not exactly a zero energy state, but rather a state where the system's energy has dropped below some threshold corresponding to the energy resolution of the measuring apparatus. We show that one can always find an optimal solution in the underdamped regime that will reach this energy threshold sooner than all other underdamped solutions, as well as the critically damped solution, no matter how small this threshold is. We also comment on an exception to this for a particular type of initial conditions, when a specific overdamped solution reaches the equilibrium state sooner than all other solutions. We confirm some of our findings experimentally.

## I Introduction

Under which conditions does the damped harmonic oscillator return to the equilibrium state the fastest? The standard textbook answer to that question is that the oscillator returns to the equilibrium state the fastest in the critically damped regime, regardless of the initial conditions [1; 2; 3; 4; 5]. Since none of the solutions to this model ever exactly reach the equilibrium state, but only asymptotically approach it, the authors usually justify this answer by considering the speed of convergence of the general underdamped, critically damped, and overdamped solutions towards the equilibrium state in the infinite time limit [1]. Indeed, in that asymptotic sense, the critically damped solution approaches the equilibrium state faster than the solutions of the underdamped and the overdamped regimes. However, it has already been shown theoretically that one can determine the damping coefficient in the underdamped regime for which the oscillator settles down to equilibrium fastest by setting the maximal overshoot to be equal to the displacement resolution [6]. Similarly, Heald [7] sets the minimum detectable overshoot equal to the displacement resolution and computes the degree of underdamping that would be mistaken for critical due to the fastest return to equilibrium. In this paper, we find the time interval in which the envelope of the underdamped displacement is smaller in magnitude than the displacement of a critically damped oscillator by using the Lambert W function [8]. Depending on the damping coefficient, we determine the moment in time at which the envelope of the underdamped displacement becomes larger than the displacement of the critically damped oscillator. We find that the displacements at and after that moment are practically immeasurable in experiments for damping coefficients that are still well within the underdamped regime, e.g., \(10\%\) smaller than the critical damping coefficient. Thus, we conclude that for all practical purposes one can always find a damping coefficient in the underdamped regime for which the system will come close enough to equilibrium to declare that it has in fact reached the equilibrium state, in this case sooner than the critically damped oscillator.
Since, in the ideal case, the equilibrium state is the zero energy state, with both the displacement and the velocity equal to zero, here we define the equilibrium state as the state in which the energy of the oscillator has dropped to an infinitesimal fraction of the initial energy, e.g., to the level of the energy resolution of an experiment which observes the oscillator. We show that it is possible to find an optimal damping coefficient in the underdamped regime for which the system will reach the equilibrium state (i.e., its energy will fall below some predetermined threshold comparable to the resolution of the experiment) sooner than any other underdamped, critically damped, or overdamped oscillator with the same initial conditions. An exception to this is a special case of initial conditions for which a damping coefficient in the overdamped regime makes the oscillator reach the equilibrium state sooner than all other overdamped, underdamped, and critically damped oscillators, regardless of whether we define the equilibrium state as a zero energy state or as a state with some infinitesimal fraction of the initial energy. As an experimental check of some of these statements, we designed an RLC circuit with variable parameters and measured the oscillating voltage across the resistor using a standard laboratory oscilloscope. The measurements confirm the theoretical findings.

## II Short review of the model

The differential equation of the damped harmonic oscillator is of the form

\[\ddot{x}(t)+2\gamma\dot{x}(t)+\omega_{0}^{2}x(t)=0\,, \tag{1}\]

where \(x(t)\) denotes the displacement from the equilibrium position as a function of time, the dots denote time derivatives, \(\gamma>0\) is the damping coefficient, and \(\omega_{0}\) stands for the undamped oscillator angular frequency (sometimes called the natural frequency of the oscillator). The zero energy state of the system is achieved when both \(x=0\) and \(\dot{x}=0\). The form of the solution to equation (1) depends on the relationship between \(\gamma\) and \(\omega_{0}\), producing three possible regimes. For \(\gamma<\omega_{0}\) the system is in the _underdamped regime_ with the solution

\[\begin{split} x_{ud}(t)&=Ae^{-\gamma t}\cos(\omega t+\phi)\,,\quad\text{with}\\ A&=\sqrt{x_{0}^{2}+\frac{(v_{0}+\gamma x_{0})^{2}}{\omega^{2}}}\quad\text{and}\\ \phi&=\arctan\left(-\frac{v_{0}+\gamma x_{0}}{\omega x_{0}}\right)\,.\end{split} \tag{2}\]

Here, \(\omega=\sqrt{\omega_{0}^{2}-\gamma^{2}}\) is the angular frequency of the damped oscillator, and \(A\) and \(\phi\) are constants determined from the initial conditions \(x_{0}\equiv x(0)\) and \(v_{0}\equiv\dot{x}(0)\). The displacement oscillates quasiperiodically within an exponentially decaying envelope given by \(\pm Ae^{-\gamma t}\). In what follows, we will refer to the function

\[A(t)=\sqrt{x_{0}^{2}+\frac{(v_{0}+\gamma x_{0})^{2}}{\omega^{2}}}e^{-\gamma t} \tag{3}\]

as the _envelope_. We want to emphasize here that the displacement maxima are not tangent to the envelope, and it could be misleading to refer to the function (3) as the time-dependent amplitude [9]. It is easy to check that the envelope of the velocity, \(\dot{x}_{ud}(t)\), follows the same behavior, and thus the system asymptotically approaches equilibrium. For \(\gamma=\omega_{0}\) the system is in the _critically damped regime_ with the solution

\[x_{c}(t)=\left(x_{0}+(v_{0}+\omega_{0}x_{0})t\right)e^{-\omega_{0}t}. \tag{4}\]

The solution approaches equilibrium without oscillating.
For \(\gamma>\omega_{0}\) the system is in the _overdamped regime_ with the solution \[x_{od}(t)=e^{-\gamma t}\left(\frac{v_{0}+(\gamma+\alpha)x_{0}}{2\alpha}e^{\alpha t}-\frac{v_{0}+(\gamma-\alpha)x_{0}}{2\alpha}e^{-\alpha t}\right), \tag{5}\] where \(\alpha=\sqrt{\gamma^{2}-\omega_{0}^{2}}\). As in the critically damped regime, the solution approaches equilibrium without oscillating. The envelope of the underdamped oscillations decays with the function \(e^{-\gamma t}\), and thus to compare the decays of the underdamped and critically damped solutions, we examine the ratio \[\lim_{t\to\infty}\frac{x_{c}(t)}{e^{-\gamma t}}=\lim_{t\to\infty}\left(x_{0}+(v_{0}+\omega_{0}x_{0})\,t\right)e^{-(\omega_{0}-\gamma)t}=0, \tag{6}\] due to \(\omega_{0}-\gamma>0\). Therefore, the critically damped solution approaches equilibrium faster than the underdamped solution. For the overdamped regime, the approach to the equilibrium in the infinite time limit is governed by the slowly decaying exponential function \(e^{(-\gamma+\alpha)t}\), and thus we evaluate \[\lim_{t\to\infty}\frac{x_{c}(t)}{e^{(-\gamma+\alpha)t}}=\lim_{t\to\infty}\left(x_{0}+(v_{0}+\omega_{0}x_{0})\,t\right)e^{-(\omega_{0}-\gamma+\alpha)t}=0, \tag{7}\] due to \(\omega_{0}-\gamma+\alpha>0\). Again, we can conclude that the critically damped solution approaches the equilibrium faster than the overdamped solution. The last statement is not completely general, since there is a possibility of a special case in which the initial displacement and velocity are of opposite sign and satisfy \(|v_{0}|>|x_{0}\omega_{0}|\). In this case, it is possible to choose a damping coefficient \(\gamma\) such that the factor multiplying the slowly decaying exponential is zero and only the quickly decaying exponential \(e^{(-\gamma-\alpha)t}\) is present in the overdamped solution. In this case, the overdamped oscillator approaches the equilibrium state faster than the critically damped oscillator with the same initial conditions. Now consider the energy of the oscillator, and the relationship between the energy and the power of the dissipative force. If we multiply equation (1) by \(\dot{x}(t)\), and rewrite the resulting first and third terms as time derivatives, we get \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\dot{x}(t)^{2}}{2}+\frac{\omega_{0}^{2}x(t)^{2}}{2}\right)+2\gamma\dot{x}(t)^{2}=0\,. \tag{8}\] If we consider, e.g., a mass on a spring in a viscous fluid, the first term in the equation (8) is the time derivative of the total mechanical energy (kinetic plus potential) per unit mass and the second term is the power of the dissipative force per unit mass. The ratio of the system's mechanical energy at time \(t\) to its initial mechanical energy is \[\frac{E(t)}{E_{0}}=\frac{\dot{x}(t)^{2}+\omega_{0}^{2}x(t)^{2}}{v_{0}^{2}+\omega_{0}^{2}x_{0}^{2}}\,. \tag{9}\] Using (9), we can rewrite (8) as the fractional rate of energy decrease \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{E(t)}{E_{0}}\right)=-\frac{4\gamma}{v_{0}^{2}+\omega_{0}^{2}x_{0}^{2}}\dot{x}(t)^{2}\,. \tag{10}\] Here, for later convenience, we define the ratio of the kinetic to total energy at any given time \(t\) as \[\beta(t)=\frac{\dot{x}(t)^{2}}{\dot{x}(t)^{2}+\omega_{0}^{2}x(t)^{2}}\,. \tag{11}\] Using (11), we can rewrite (10) as \[\frac{1}{E(t)/E_{0}}\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{E(t)}{E_{0}}\right)=-4\gamma\beta(t)\,.
\tag{12}\] Since the total energy takes on strictly real positive values as a function of time, we can rewrite (12) using the natural logarithm as \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\ln\frac{E(t)}{E_{0}}\right)=-4\gamma\beta (t)\,. \tag{13}\] Equation (13) is valid for all three oscillating regimes, i.e., for underdamped, critically damped and overdamped regimes. We find it especially useful when analyzing the underdamped regime due to the quasiperiodic nature of that regime. Note that when \(\beta_{ud}(t)=1\) the system is in the equilibrium position and possesses only kinetic energy. Conversely, when \(\beta_{ud}(t)=0\) the system is at a turning point, and possesses only potential energy. Equations (10) and (13) are clearly related, but one must be aware of their implicit differences. For example, both loss rates have maxima, equal to zero, at moments for which \(\dot{x}(t)^{2}=0\) and \(\beta(t)=0\), but the minima occur at different times. Due to the damping present in the system, the magnitude of the velocity reaches its maximum somewhat earlier than when the system is in an equilibrium position [10] with \(\beta(t)=1\), as would be the case for the undamped harmonic oscillator. To avoid confusion, in what follows, we will refer to equation (13) as the _energy loss rate_. To summarize, the solutions of the damped harmonic oscillator asymptotically approach the equilibrium state, but never exactly reach it. In that asymptotic sense, the solution of the critically damped regime returns to the equilibrium in the fastest manner in all cases apart from a specific case of an overdamped oscillator. In nature and in experiments, systems described by this model reach the equilibrium state which is not an exact zero energy state, but rather a state in which the energy of the system has decreased to the level of the energy of the surrounding noise, or the energy resolution of the measuring apparatus. Following this line of thought, we will define a system to be in equilibrium for times \(t>\tau\) such that \[\frac{E(\tau)}{E_{0}}=10^{-\delta}\,, \tag{14}\] where \(\delta>0\) is a dimensionless parameter that defines what fraction of the initial energy is left in the system. In the next section we prove that, for any finite value of \(\delta\), one can always find an optimal solution in the underdamped regime (\(\gamma<\omega_{0}\)) which will reach equilibrium, in the sense of (14), sooner than all other underdamped or critically damped solutions with the same initial conditions. ## III The fastest route to equilibrium ### Displacement analysis We focus on the initial conditions given by \(x_{0}>0\) and \(v_{0}=0\). The underdamped, critically damped, and overdamped solutions for this set of initial conditions are: \[\begin{split} x_{ud}(t)=A(t)\cos\left(\omega t+\phi\right)\,, \quad\text{with}\\ A(t)=x_{0}\frac{\omega_{0}}{\omega}e^{-\gamma t}\quad\text{and} \\ \phi=\arctan\left(-\frac{\gamma}{\omega}\right)\,,\end{split} \tag{15}\] \[x_{c}(t)=x_{0}\left(1+\omega_{0}t\right)e^{-\omega_{0}t}, \tag{16}\] \[x_{od}(t)=\frac{x_{0}}{2\alpha}e^{-\gamma t}\left[(\gamma+\alpha)e^{\alpha t} -(\gamma-\alpha)e^{-\alpha t}\right]. \tag{17}\] The critical and overdamped solutions (16) and (17) have no zeros, and due to the higher damping coefficient, the overdamped solution approaches the equilibrium state more slowly. 
Thus, we can conclude that the overdamped solution is further away from equilibrium than the critically damped solution, for any \(t>0\) (we rigorously prove that \(x_{od}(t)>x_{c}(t)\), for any \(t>0\), in supplementary material, section SI). Therefore, we exclude the overdamped regime in what follows. In Fig. 1 we show the envelope \(A(t)\) as a solid red curve, the underdamped solution \(x_{ud}(t)\) with \(\gamma=0.6\omega_{0}\) as a dotted blue curve, and the critically damped solution \(x_{c}(t)\) as a dashed black curve. If we assume, e.g., that Fig. 1 models a real-life experiment in which the resolution of the displacement is \(\Delta x=\pm 0.1x_{0}\) (a poor resolution, chosen here just to set an example), then from the point of view of that experiment the underdamped solution settles down to equilibrium sooner than the critically damped solution. Thus, by setting the maximal overshoot to be equal to the displacement resolution, one can determine the damping coefficient in the underdamped regime for which the damped oscillator will come to equilibrium, within experimental resolution, as quickly as possible, and stay there, as Armstrong [6] has already shown. Here, we show that the envelope \(A(t)\) crosses \(x_{c}(t)\) at times \(\tau_{1}\) and \(\tau_{2}\). This means that in the time interval \(t\in(\tau_{1},\tau_{2})\) the envelope of the underdamped solution \(x_{ud}(t)\) is actually _closer_ to the equilibrium than \(x_{c}(t)\). We can gain additional analytical insight into the relationship between the underdamped and critically damped solutions by analyzing the dependence of the two crossing times, \(\tau_{1}\) and \(\tau_{2}\), on the damping coefficient. We equate \(A(\tau_{1,2})=x_{c}(\tau_{1,2})\), and get the equation \[e^{(\omega_{0}-\gamma)\tau_{1,2}}-\omega\tau_{1,2}-\frac{\omega}{\omega_{0}}=0\,, \tag{18}\] with two solutions (see appendix A) \[\tau_{1,2}=-\frac{1}{\omega_{0}-\gamma}W_{0,-1}\left(-\frac{\omega_{0}-\gamma}{\omega e^{\frac{\omega_{0}-\gamma}{\omega_{0}}}}\right)-\frac{1}{\omega_{0}}, \tag{19}\] where \(W_{0}(y)\) and \(W_{-1}(y)\) are two different branches of the Lambert W function [8].

Figure 1: The envelope \(A(t)\) (solid red curve) and the complete underdamped solution \(x_{ud}(t)\) with \(\gamma=0.6\omega_{0}\) (dotted blue curve) compared to the critically damped solution \(x_{c}(t)\) (dashed black curve). The arrows denote the moments in time \(\tau_{1}\) and \(\tau_{2}\) when \(A(t)\) is equal to \(x_{c}(t)\). During the time interval \(t\in(\tau_{1},\tau_{2})\) the envelope of the underdamped solution \(x_{ud}(t)\) is closer to the equilibrium than \(x_{c}(t)\). The magnitude of the displacement in the underdamped case becomes confined within the band \(\pm 0.1x_{0}\) before the critically damped case and remains within that region for all subsequent times.

Fig. 2 shows the results obtained from (19). Let us take, e.g., the values of the crossing times when \(\gamma=0.9\omega_{0}\), i.e., \(\tau_{1}=1.73\omega_{0}^{-1}\) and \(\tau_{2}=23.81\omega_{0}^{-1}\), from which we get \(A(\tau_{1})=0.48x_{0}\) and \(A(\tau_{2})=1.14\times 10^{-9}x_{0}\). In this case the underdamped solution \(x_{ud}(t)\) oscillates with an envelope that is smaller in magnitude than the critically damped solution \(x_{c}(t)\) for \(t<\tau_{2}\), until both solutions reach a displacement of the order \(10^{-9}x_{0}\) at \(t=\tau_{2}\).
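The crossing times (19) can be evaluated directly with the Lambert W implementation available in SciPy. The following minimal sketch (illustrative units \(\omega_{0}=x_{0}=1\), \(v_{0}=0\), not prescribed by the paper) reproduces the values quoted above for \(\gamma=0.9\omega_{0}\), and cross-checks them numerically against the envelope (3) and the critically damped solution (4):

```python
import numpy as np
from scipy.special import lambertw

w0, x0, v0 = 1.0, 1.0, 0.0   # illustrative units, initial conditions of (15)

def crossing_times(gamma):
    """Crossing times tau_1, tau_2 of eq. (19) via the two Lambert W branches."""
    w = np.sqrt(w0**2 - gamma**2)
    y = -(w0 - gamma) / (w * np.exp((w0 - gamma) / w0))
    tau1 = -lambertw(y, 0).real / (w0 - gamma) - 1.0 / w0
    tau2 = -lambertw(y, -1).real / (w0 - gamma) - 1.0 / w0
    return tau1, tau2

print(crossing_times(0.9))   # approx. (1.73, 23.81), as quoted above

# Numerical cross-check: the window where the envelope (3) lies below the
# critically damped displacement (4), located on a fine time grid.
gamma = 0.9
w = np.sqrt(w0**2 - gamma**2)
t = np.linspace(0.0, 30.0, 30001)
envelope = x0 * (w0 / w) * np.exp(-gamma * t)
x_crit = (x0 + (v0 + w0 * x0) * t) * np.exp(-w0 * t)
below = t[envelope < x_crit]
print(below.min(), below.max())   # approx. 1.73 and 23.81 again
```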
In general, since the magnitude of the displacement of the underdamped solution is smaller than the critically damped solution for \(\tau_{1}<t<\tau_{2}\), then if \(A(\tau_{2})\) is less than the experimental resolution, the underdamped displacement will not be measured to be larger than the critically damped displacement at any time after \(\tau_{1}\). Note that \(\tau_{2}\) diverges as \(\gamma\) approaches \(\omega_{0}\), and therefore \(A(\tau_{2})\) can be made arbitrarily small by choosing a large enough damping coefficient. Therefore, even though the critically damped displacement approaches equilibrium faster than the underdamped displacement for times greater than \(\tau_{2}\), for any given experimental conditions, one can find a damping coefficient \(\gamma\) for which the underdamped solution will be measured to reach equilibrium first. ### Energy analysis Since, in the ideal case, the equilibrium state is the zero energy state, with both the displacement and the velocity equal to zero, we now turn our attention to the energy of the system. In Fig. 3(a) we show the base 10 logarithm of the ratio (9), for four values of \(\gamma=\{0.85\omega_{0},0.9\omega_{0},0.95\omega_{0},\omega_{0}\}\). All the curves seem to initially overlap, even though the energy ratios for the various underdamped cases are slightly larger than the corresponding ratio for the critically damped oscillator. This is true starting from \(t=0\) until \(t=\{1.10\omega_{0}^{-1},1.06\omega_{0}^{-1},1.03\omega_{0}^{-1}\}\), for \(\gamma=\{0.85\omega_{0},0.9\omega_{0},0.95\omega_{0}\}\) respectively, at which times the said ratios become equal. After that, the underdamped curves remain lower than the critically damped one for the rest of the time period of Fig. 3(a). In Fig. 3(b) we show the same behavior, this time only for the case when \(\gamma=0.9\omega_{0}\), but during a longer time period. We note that the energy ratios of the two systems are equal when \(t=24.24\omega_{0}^{-1}\) (indicated by an arrow in Fig. 3(b)). This energy ratio equality occurs slightly later in time than the equality of the underdamped oscillator envelope and the critically damped oscillator displacement, which was found earlier to happen when \(t=\tau_{2}=23.81\omega_{0}^{-1}\). We expect that these times do not coincide due to the differences in behavior of the envelope and the full underdamped solution, i.e., the underdamped displacement exceeds the critical displacement in magnitude later than the envelope. After \(t=24.24\omega_{0}^{-1}\), the energy ratio of the underdamped oscillator periodically dips below the corresponding ratio of the critically damped oscillator. However, during each subsequent transition the energy ratio of the underdamped oscillator is closer to the energy ratio for the critically damped oscillator. Finally, when \(t>43.03\omega_{0}^{-1}\), which corresponds to \(E(t)<10^{-33}E_{0}\), the energy ratio of the underdamped oscillator remains permanently above the energy ratio for the critically damped oscillator, in accordance with the infinite time limit (6). Any different selection of the damping coefficient in the underdamped case shows qualitatively the same behavior.

Figure 3: (a) The base 10 logarithm of the energy ratio (9), in the case of underdamped oscillators for which \(\gamma=0.85\omega_{0}\) (green dash-dotted curve), \(\gamma=0.9\omega_{0}\) (red solid curve), \(\gamma=0.95\omega_{0}\) (blue dotted curve), and for the critically damped oscillator (black dashed curve). (b) The energy ratio (9) for the case when \(\gamma=0.9\omega_{0}\) (red solid curve) and for the critically damped oscillator (black dashed curve) during a longer time period. The energies of the two systems are equal when \(t=24.24\omega_{0}^{-1}\), as indicated by an arrow. After this moment, the energy ratio for the underdamped oscillator becomes periodically lower than the same ratio for the critically damped oscillator, until \(t=43.03\omega_{0}^{-1}\), after which it always remains above it, in accordance with the infinite time limit (6).

Figure 2: The solutions to equation (19) for values of \(\gamma\,[\omega_{0}]\) from \(0.05\) to \(0.95\) in steps of \(0.05\). For example, for \(\gamma=0.9\omega_{0}\), \(\tau_{1}=1.73\omega_{0}^{-1}\) and \(\tau_{2}=23.81\omega_{0}^{-1}\). During this time period the underdamped solution \(x_{ud}(t)\) oscillates with an envelope \(A(t)\) that is smaller in magnitude than the critically damped solution \(x_{c}(t)\).

To determine an optimal damping coefficient \(\gamma\) for which the energy of a system will drop to a desired energy level the soonest, we first give an example in Fig. 3(a), where we draw a horizontal line for which \(E(t)/E_{0}=10^{-5.5}\). It is immediately clear that the energy of an underdamped oscillator for which \(\gamma=0.9\omega_{0}\) reaches this line first, before the other cases of underdamped oscillators for which \(\gamma\) is either \(0.85\omega_{0}\) or \(0.95\omega_{0}\). ### Existence of a unique optimal damping coefficient Now we show that it is possible to determine a unique underdamped \(\gamma\) for which the damped oscillator reaches some energy level faster than for any other \(\gamma\). The underdamped oscillator, described by (15), is in the equilibrium position at moments given by \[\tau_{A_{n}}=\frac{1}{\omega}\left[(2n-1)\frac{\pi}{2}+\arctan\left(\frac{\gamma}{\omega}\right)\right]\,, \tag{20}\] and in the turning points at moments given by \[\tau_{B_{n}}=\frac{n\pi}{\omega}\,, \tag{21}\] where \(n\in\mathbb{N}\) is the index that counts the equilibrium crossings and turning points respectively. In Fig. 4(a) we show the energy loss rates (13) of the critically damped oscillator and of the underdamped oscillator for the values \(\gamma=\{0.87\omega_{0},0.9\omega_{0}\}\). For the case \(\gamma=0.9\omega_{0}\) the maximal magnitude of the energy loss rate is \(4\gamma=3.6\omega_{0}\). This means that, according to (13), \(\beta_{ud}=1\) at a time \(\tau_{A_{1}}=6.17\omega_{0}^{-1}\). At this moment, the system arrives for the first time at the equilibrium position and its total energy is its kinetic energy. By contrast, the energy loss rate is zero when \(\beta_{ud}=0\) at a time \(\tau_{B_{1}}=7.21\omega_{0}^{-1}\), at which the total energy of the oscillator is equal to its potential energy, i.e., the system arrives for the first time at the turning point. It is clear that \(\gamma=0.87\omega_{0}\), or any other \(\gamma\) in the underdamped regime, yields qualitatively the same behavior. The energy for \(\gamma=0.9\omega_{0}\), when first at the equilibrium position, is \(1.5\cdot 10^{-5}E_{0}\). Let us assume that this energy is the energy resolution of the experiment. In Fig. 4(b) we show the energy ratio (9) as a function of \(\gamma\) at \(t=6.17\omega_{0}^{-1}\).
At this moment, the underdamped oscillator with \(\gamma=0.9\omega_{0}\) reaches the equilibrium position for the first time and has energy equal to the energy resolution of the experiment. Oscillators with \(\gamma>0.9\omega_{0}\) have not yet reached the equilibrium position at that moment, and still have energy greater than the energy resolution of the experiment. Oscillators with \(\gamma<0.87\omega_{0}\) have already passed the equilibrium position one or two times up to this moment (depending on the value of \(\gamma\)), but also have energy greater than the energy resolution of the experiment. The oscillator with \(\gamma=0.87\omega_{0}\) reaches the same energy as the \(\gamma=0.9\omega_{0}\) case, but slightly sooner than when reaching its first turning point (designated by the red circle on the blue dotted curve shown in Fig. 4(a)). Finally, oscillators with \(\gamma\in(0.87\omega_{0},0.9\omega_{0})\) reach an energy smaller than the \(\gamma=0.9\omega_{0}\) case, with the first minima (maxima) of their energy loss rates at time instants between the time instants of the first minima (maxima) of the energy loss rates for the \(\gamma=0.87\omega_{0}\) and \(\gamma=0.9\omega_{0}\) cases. This holds true since \(\tau_{A_{1}}\) and \(\tau_{B_{1}}\) are monotonically increasing functions of \(\gamma\). Therefore, oscillators with \(\gamma\in(0.87\omega_{0},0.9\omega_{0})\) reach an energy smaller than the \(\gamma=0.9\omega_{0}\) case when their displacements are between their equilibrium position and their first turning point. We note here that, for any \(\gamma\), the energy before some instant in time \(t\) can only be higher than at that instant, since energy is a monotonically decreasing function of time. Thus, only oscillators with \(\gamma\in(0.87\omega_{0},0.9\omega_{0})\) reach the energy resolution before \(t=6.17\omega_{0}^{-1}\), i.e., before the \(\gamma=0.9\omega_{0}\) case.

Figure 4: (a) The energy loss rates (13) of the critically damped oscillator (black dashed curve) and the underdamped oscillator with \(\gamma=0.87\omega_{0}\) (blue dotted curve) and \(\gamma=0.9\omega_{0}\) (red solid curve). The time instants of the first crossing of the equilibrium position are denoted as \(\tau_{A_{1}}\), and the time instants of the first turning point are denoted as \(\tau_{B_{1}}\). For \(\gamma=0.9\omega_{0}\) we have \(\tau_{A_{1}}=6.17\omega_{0}^{-1}\), \(\tau_{B_{1}}=7.21\omega_{0}^{-1}\) (both indicated with arrows). The red circle indicates the value of the energy loss rate for \(\gamma=0.87\omega_{0}\) at \(t=6.17\omega_{0}^{-1}\). (b) Energy ratio (9) as a function of \(\gamma\) at \(t=6.17\omega_{0}^{-1}\) (red solid curve) and at \(t=5.98\omega_{0}^{-1}\) (blue dashed curve). The energy level reached by \(\gamma=0.9\omega_{0}\) at \(t=6.17\omega_{0}^{-1}\) is indicated by the dashed horizontal line. The part of the curve under the horizontal line corresponds to the interval \(\gamma\in(0.87\omega_{0},0.9\omega_{0})\) for which the energy is lower than the energy for \(\gamma=0.9\omega_{0}\). The solid arrow indicates the global minimum when \(\gamma=0.886\omega_{0}\). For \(\gamma=0.878\omega_{0}\) the energy at \(t=5.98\omega_{0}^{-1}\) has a global minimum indicated by the dashed arrow. See text for details.
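The numbers used in this argument are straightforward to reproduce. The following sketch (illustrative units \(\omega_{0}=x_{0}=1\), \(v_{0}=0\), as in eq. (15)) evaluates \(\tau_{A_{1}}\) from (20), the energy at the first equilibrium crossing for \(\gamma=0.9\omega_{0}\), and, as a cross-check, the first time at which the full energy ratio (9) drops to that level:

```python
import numpy as np
from scipy.optimize import brentq

w0, x0 = 1.0, 1.0      # illustrative units; x0 > 0, v0 = 0 as in eq. (15)

def tau_A(n, gamma):
    """Equilibrium crossing times of eq. (20)."""
    w = np.sqrt(w0**2 - gamma**2)
    return ((2*n - 1)*np.pi/2 + np.arctan(gamma/w)) / w

def energy_ratio(t, gamma):
    """E(t)/E0 of eq. (9) evaluated on the underdamped solution (15)."""
    w = np.sqrt(w0**2 - gamma**2)
    A, phi = x0*w0/w, np.arctan(-gamma/w)
    x = A*np.exp(-gamma*t)*np.cos(w*t + phi)
    v = A*np.exp(-gamma*t)*(-gamma*np.cos(w*t + phi) - w*np.sin(w*t + phi))
    return (v**2 + w0**2*x**2) / (w0**2*x0**2)

g = 0.9 * w0
tA1 = tau_A(1, g)
E_res = energy_ratio(tA1, g)
print(tA1, E_res)   # approx. 6.17 and 1.5e-5, as quoted above

# First time at which E(t)/E0 reaches E_res; since the energy is
# non-increasing, a simple bracketing root search suffices:
tau = brentq(lambda t: energy_ratio(t, g) - E_res, 1e-9, 100.0)
print(tau)          # coincides with tau_A1: the first crossing happens there
```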
Since the subset of these faster damping coefficients is only \(\Delta\gamma=0.03\omega_{0}\) wide, we determined the optimal damping coefficient simply by graphically finding the \(\gamma\) within this subset for which the energy ratio (9), as a function of time, first reaches the level \(1.5\cdot 10^{-5}\). We find that the oscillator with \(\gamma=0.878\omega_{0}\) reaches this energy resolution at \(t=5.98\omega_{0}^{-1}\), faster than all other \(\gamma\), and with a relative time advantage of \(3.1\%\) over the \(\gamma=0.9\omega_{0}\) case. To confirm this, in Fig. 4(b) we show the energy ratio as a function of \(\gamma\) at \(t=5.98\omega_{0}^{-1}\), and we can clearly see that the energy ratio has a minimum equal to the energy resolution when \(\gamma=0.878\omega_{0}\). The minima shown in Fig. 4(b) are global, and in appendix B we show the behavior of the energy ratio (9), and of its global minimum, for a larger interval of \(\gamma\) at different instants chosen over a larger time span. This example shows us that if the energy as a function of \(\gamma\) at some instant has a global minimum at some \(\gamma_{min}\), the oscillator with \(\gamma_{min}\) reaches the energy of that minimum faster than the oscillator with any other \(\gamma\), i.e., \(\gamma_{min}\) is the optimal damping coefficient to reach that energy level. We have also shown that the \(\gamma\) for which the system reaches that same energy level when it is at the equilibrium position for the first time is an excellent first approximation of the optimal damping coefficient. In the case shown, the two differ by only \(2.5\%\) (this difference decreases as we go further in time, i.e. to lower energies, see appendix B) and in the optimal case the system reaches the energy resolution only \(3.1\%\) faster. In Fig. 4(b), we can see that an even better approximation of the optimal \(\gamma\) can simply be obtained if we take as optimal the \(\gamma=0.886\omega_{0}\) for which the energy has a minimum at the moment \(t=6.17\omega_{0}^{-1}\), since in that case the difference is only \(0.9\%\). The advantage of the coefficient \(\gamma\) for which the system reaches the energy resolution at \(\tau_{A_{1}}\), as an approximation of the optimal \(\gamma\), is that this instant is simply expressed analytically by (20), and we can easily determine this \(\gamma\) directly from the condition that the energy is equal to the energy resolution at the instant \(\tau_{A_{1}}\). Here, we started our discussion by focusing on \(\gamma=0.9\omega_{0}\), but an analysis focused on any other underdamped \(\gamma\) as a starting point would lead to qualitatively the same conclusions. ### Determination of the optimal damping coefficient We say that the system is effectively at equilibrium when its energy equals \(10^{-\delta}E_{0}\), with some \(\delta>0\), and we find the optimal underdamped coefficient \(\gamma_{opt}\) in two steps. First, we determine the underdamped \(\gamma\) for which the system reaches this energy level when at the equilibrium position for the first time. We have shown that the \(\gamma\) obtained this way can be considered an excellent first approximation of the optimal damping coefficient, since only a small subset of smaller damping coefficients is slightly faster. Therefore, we will refer to it as the _first step optimal damping_ and denote its coefficient as \(\gamma_{1}\).
In the second step, we use \(\gamma_{1}\) to find a small subset of underdamped coefficients for which the system reaches the energy \(10^{-\delta}E_{0}\) faster than for \(\gamma_{1}\), and then determine the optimal damping coefficient \(\gamma_{opt}\) from that subset. Thus, the first step optimal damping coefficient \(\gamma_{1}\) is determined by the condition \(E(\tau_{A_{1}})=10^{-\delta}E_{0}\). Since at the moment given by \(\tau_{A_{1}}\) the total energy is equal to the kinetic energy, we can write the chosen condition as \[\frac{E_{ud}(\tau_{A_{1}})}{E_{0}}=\frac{\dot{x}_{ud}(\tau_{A_{1}})^{2}}{\omega_{0}^{2}x_{0}^{2}}=10^{-\delta}\,. \tag{22}\] Next, using (20) with \(n=1\) and the time derivative of (15), we get \[\exp\left[-2\frac{\gamma}{\sqrt{\omega_{0}^{2}-\gamma^{2}}}\left(\frac{\pi}{2}+\arctan\frac{\gamma}{\sqrt{\omega_{0}^{2}-\gamma^{2}}}\right)\right]=10^{-\delta}. \tag{23}\] Since the left-hand side of the condition (23) is a function of \(\gamma\) only, we introduce the shorthand notation \(f(\gamma)\) for it. We can then graphically determine \(\gamma_{1}\), for any given \(\delta>0\), from the condition \[\log_{10}(f(\gamma))=-\delta\,. \tag{24}\] In Fig. 5 we show the function \(\log_{10}f(\gamma)\) as a red curve and sketch the graphical determination of \(\gamma_{1}\) for \(\delta=10\), for which we find \(\gamma_{1}=0.9698\omega_{0}\). We give the results for other choices of \(\delta\) in Table 1. In the second step, we show how to refine the results of the first step and find the optimal damping coefficient. We focus on the energy resolution \(10^{-6}E_{0}\); for any other choice the procedure is the same. From Table 1, we see that the oscillator with \(\gamma_{1}=0.9286\omega_{0}\) reaches this resolution when first at the equilibrium position, i.e., at \(t=7.44\omega_{0}^{-1}\). In Fig. 6 we show the energy ratio (9) as a function of \(\gamma\) at \(t=7.44\omega_{0}^{-1}\). From this graph we determine the subset of underdamped coefficients \(\gamma\in(0.9104\omega_{0},0.9286\omega_{0})\) with energy smaller than \(10^{-6}E_{0}\) at this instant, i.e., the subset of underdamped coefficients that reach the energy resolution faster than \(\gamma_{1}\). Similarly as in subsection III.3, since this subset is only \(\Delta\gamma=0.0182\omega_{0}\) wide, we determined the optimal damping coefficient simply by graphically finding the \(\gamma\) in the subset for which the energy ratio (9) first reaches the level \(10^{-6}\). We find that \(\gamma_{opt}=0.9145\omega_{0}\) reaches the energy resolution at \(t=7.20\omega_{0}^{-1}\), faster than any other \(\gamma\). To confirm this, in Fig. 6 we show the energy ratio (9) as a function of \(\gamma\) at \(t=7.20\omega_{0}^{-1}\), and we see that the minimum for \(\gamma_{opt}=0.9145\omega_{0}\) touches the energy resolution level. In this example, \(\gamma_{opt}\) is \(1.52\%\) smaller than \(\gamma_{1}\) (this difference further decreases as we optimize towards lower energies, see appendix B) and the system with it takes \(3.26\%\) less time than the system with \(\gamma_{1}\) to reach the energy resolution level.

Figure 5: Graphical determination of the first step optimal damping \(\gamma\) for \(\delta=10\) using condition (24). For the determined \(\gamma_{1}=0.9698\omega_{0}\) the energy of the underdamped oscillator permanently drops under the preset threshold energy when arriving at the equilibrium position for the first time.
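Condition (24) can also be solved numerically instead of graphically. A minimal sketch (using the analytic logarithm of \(f(\gamma)\) to avoid floating-point underflow at large \(\delta\)) reproduces several of the \(\gamma_{1}\) values listed in Table 1:

```python
import numpy as np
from scipy.optimize import brentq

def log10_f(gamma, w0=1.0):
    """log10 of the left-hand side of eq. (23), computed analytically."""
    r = gamma / np.sqrt(w0**2 - gamma**2)
    return -2.0 * r * (np.pi/2 + np.arctan(r)) / np.log(10.0)

def gamma_1(delta, w0=1.0):
    """First step optimal damping coefficient from condition (24)."""
    # log10_f decreases monotonically from ~0 to -inf on (0, w0),
    # so the root is bracketed and unique.
    return brentq(lambda g: log10_f(g, w0) + delta, 1e-6, w0 * (1.0 - 1e-12))

for delta in (4, 6, 8, 10):
    print(delta, round(gamma_1(delta), 4))
# cf. Table 1: 0.8688, 0.9286, 0.9555, 0.9698
```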
Depending on the experimental conditions, this difference can be significant, but for many applications it suffices to take \(\gamma_{1}\) as optimal. For example, in the next section, where we perform an experimental verification of our findings, the difference between \(\gamma_{opt}\) and \(\gamma_{1}\) proves to be undetectable, since it results in a change that is within the experimental resolution of the setup. For all other types of initial conditions, the behavior with respect to energy is qualitatively the same as for the initial conditions \(x_{0}>0\) and \(v_{0}=0\) (see supplementary material, sections SII-SIV), with one exception. For the special case in which the initial displacement and velocity are of opposite signs and satisfy \(|v_{0}|>|x_{0}\omega_{0}|\), one can choose the damping coefficient \[\gamma=\frac{\left(\frac{|v_{0}|}{|x_{0}|}\right)^{2}+\omega_{0}^{2}}{2\frac{|v_{0}|}{|x_{0}|}} \tag{25}\] in the overdamped regime, for which the factor multiplying the slowly decaying exponential in (5) cancels and only the quickly decaying exponential \(e^{(-\gamma-\alpha)t}\) is present. For this particular initial condition, the overdamped solution with the damping coefficient (25) returns to equilibrium sooner than any other solution (details are given in supplementary material, section SIV, subsection C). For the interested reader, in the supplementary material, sections SV and SVI, we show how the energy loss rates and the ratio of the critically damped to underdamped energy are related, and we show the results of the optimization with respect to \(\tau_{A_{2}}\), i.e., allowing the system to pass the equilibrium position once, overshoot it, and reach the energy resolution of the experiment when coming into the equilibrium position the second time. It is expected that the advantage of the underdamped oscillator compared to the critically damped oscillator is smaller for the results obtained this way, since the energy of the underdamped oscillator is closer in value to the energy of the critically damped oscillator after each cycle, until it eventually surpasses it (see Fig. 3(a) and (b)).

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(E(t)/E_{0}\) & \(\gamma_{1}\,[\omega_{0}]\) & \(\tau_{A_{1}}\,[\omega_{0}^{-1}]\) & \(\tau_{c}\,[\omega_{0}^{-1}]\) & \(\left(\tau_{c}-\tau_{A_{1}}\right)/\tau_{c}\,[\%]\) \\ \hline \(10^{-4}\) & 0.8688 & 5.30 & 6.96 & 23.85 \\ \hline \(10^{-6}\) & 0.9286 & 7.44 & 9.56 & 22.17 \\ \hline \(10^{-8}\) & 0.9555 & 9.63 & 12.10 & 20.35 \\ \hline \(10^{-10}\) & 0.9698 & 11.87 & 14.57 & 18.53 \\ \hline \(10^{-12}\) & 0.9782 & 14.12 & 17.03 & 17.09 \\ \hline \(10^{-14}\) & 0.9835 & 16.36 & 19.46 & 15.98 \\ \hline \(10^{-16}\) & 0.9872 & 18.69 & 21.88 & 14.54 \\ \hline \(10^{-18}\) & 0.9897 & 20.94 & 24.28 & 13.76 \\ \hline \end{tabular} \end{table} Table 1: The first column contains the ratio \(E(t)/E_{0}\) for which we declare that the system has reached an equilibrium state. The second column gives the first step optimal underdamped coefficient \(\gamma_{1}\) obtained from condition (24). The third column contains the time \(\tau_{A_{1}}\) at which the underdamped oscillator has reached the equilibrium state in the sense of the condition \(E(t)=10^{-\delta}E_{0}\). The fourth column contains the time \(\tau_{c}\) at which the critically damped oscillator reaches that same level of energy, and the fifth column contains the relative time advantage of the underdamped oscillator compared to the critically damped oscillator.

## IV Experimental verification We tested our model using a series RLC circuit. We note that other systems are also suitable for this purpose, such as a physical pendulum with an eddy-current damping system that allows the damping conditions to be controlled with great precision [11]. The differential equation describing the circuit is \[\frac{\mathrm{d}^{2}I(t)}{\mathrm{d}t^{2}}+\frac{R}{L}\frac{\mathrm{d}I(t)}{\mathrm{d}t}+\frac{1}{LC}I(t)=0\,. \tag{26}\] This form of the equation is that of a damped harmonic oscillator (1), with \[\omega_{0}^{2}=\frac{1}{LC}\quad\text{ and }\quad 2\gamma=\frac{R}{L}\,. \tag{27}\] Since in this work we always expressed \(\gamma\) in units of \(\omega_{0}\), it is more convenient to write \[\gamma=\frac{R}{2L}\left(\omega_{0}\sqrt{LC}\right)=\frac{R}{2}\sqrt{\frac{C}{L}}\omega_{0}=\zeta\omega_{0}\,. \tag{28}\] We note that the current in the circuit \(I(t)\) exhibits damped oscillations, but so does the voltage on the resistor \(RI(t)\), which can be measured with an oscilloscope. The circuit was constructed using a decade resistor with a range of resistances from \(0\,\Omega\) to \(1\,\mathrm{M}\Omega\) with a manufacturer stated tolerance of 1%, an inductor with inductance of \((14.80\pm 0.03)\,\mathrm{mH}\) and a variable capacitor with a range of capacitances from \(0\,\mathrm{F}\) to \(10\,\mathrm{\mu F}\). The voltage was provided by a square wave from an Agilent 33500B waveform generator with low frequency distortion (\(\pm 1\,\mu\)Hz) and high voltage retention (1% of setting) [12]. The voltage on the resistor was measured using a UNI-T UTD2102CEX 100 MHz oscilloscope. The readout uncertainty on the oscilloscope depends on the readout scale and was set to 1/4 of the smallest readout unit both in time and voltage coordinates. Initially, the wave source frequency was set to \(200\,\mathrm{Hz}\) with a voltage step size of \(3.3\,\mathrm{V}\). The capacitance was set to \(1\,\mathrm{nF}\) and the resistance to \(800\,\Omega\). We verified that the circuit had no significant stray resistances and inductances with the Peak-Tech 2005 and Fluke 115 multimeters. We observed underdamped oscillations of the voltage across the resistor, which is shown in Fig. 7. The axes on the image correspond to voltage and time. The blue line is the voltage from the switching voltage source (unit voltage step of \(1\,\mathrm{V}\)) and the red line is the voltage measured across the resistor (unit voltage step of \(265\,\mathrm{mV}\)). Both signals are recorded on the same time axis with a unit measuring step of \(10\,\mathrm{\mu s}\). Using the oscilloscope's measuring tools, we took five measurements of the time interval between two successive maxima or minima of the oscillations and took the inverse to obtain the frequency. This way, the frequency was found to be \((40.0\pm 0.2)\,\mathrm{kHz}\). Using this value, we determined the value of the capacitance of the circuit and obtained \((1.06\pm 0.01)\,\mathrm{nF}\). This is slightly different from the value set on the variable capacitor due to unavoidable stray capacitances in the circuit. Using these values, we can now find the damping factor \(\zeta\) of the circuit \[\zeta=\frac{R}{2}\sqrt{\frac{C}{L}}=\left(0.107\pm 0.001\right), \tag{29}\] which confirms that the oscillations take place well in the underdamped regime.
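For completeness, the circuit numbers quoted above can be checked with a few lines of Python. This is a sketch using the stated component values; the small damping correction to the measured frequency is neglected, as it lies within the quoted uncertainties:

```python
import numpy as np

L = 14.8e-3      # H, measured inductance
f_osc = 40.0e3   # Hz, measured oscillation frequency
R = 800.0        # ohm, resistance used for the underdamped readout

# Effective capacitance from eq. (27), treating the measured frequency as
# the natural frequency (the damping correction is ~0.6% here):
C = 1.0 / (L * (2.0 * np.pi * f_osc)**2)
zeta = 0.5 * R * np.sqrt(C / L)      # eq. (28)
print(C, zeta)                       # approx. 1.07e-9 F and 0.107
```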
By setting \(\zeta=1\), we next calculated the value of the resistance that corresponds to critically damped oscillations: \[R_{critical}=2\sqrt{\frac{L}{C}}=\left(7.47\pm 0.04\right)\,\mathrm{k}\Omega\,. \tag{30}\] At this point we note that the calculated uncertainty of \(R_{critical}\) is purely a consequence of measurement uncertainties and does not reflect the experimental resolution. Upon examination, we saw that a change in the resistance of the setup which is smaller than \(250\,\Omega\) in either direction results in a voltage change that is within the experimental resolution of the setup, which puts the resolution at the level of \(\approx 7\%\). After setting the value of the resistance to the critical value, we performed the measurement of the voltage across the resistor again. No oscillations were visible this time, as can be seen in Fig. 8, which is the behavior one expects from a critically damped oscillator. We measured the time interval between the moment when the signal was at its maximum and the moment when it dropped to zero. Here, the value "zero" means "indistinguishable from the measuring resolution of the oscilloscope", which was verified by the measuring tool. This resolution is taken to be 1/4 of the smallest measuring unit, which in this case corresponds to \(\approx 13\) mV. Since the maximum voltage was \(\approx 800\) mV, the resolution is approximately 1.625% of the maximal voltage. The measured time interval between these two moments was equal to \((28.0\pm 0.4)\,\mu\)s, which, in units of inverse frequency, is \((7.1\pm 0.1)\,\omega_{0}^{-1}\). The uncertainty in the measurement is a consequence of the time resolution of the measuring tool, which has a single step of \(\Delta t=0.2\,\mu\)s. To compare the measured time interval with a theoretical prediction, we note that we measured the time interval starting from the moment the oscillator was at its maximum displacement. This, in effect, sets the initial condition to that of section III, so we can use the results obtained there. Looking at Fig. 3(a), we note that at \(t=(7.1\pm 0.1)\,\omega_{0}^{-1}\), the energy of the critically damped oscillator is in the interval \([6.6,9.4]\cdot 10^{-5}\,E_{0}\). The underdamped \(\gamma_{1}=\zeta_{1}\omega_{0}\) for the center of this interval is obtained from (23), which translates to a value of resistance of \[R_{1}=2\zeta_{1}\sqrt{\frac{L}{C}}=(6.52\pm 0.03)\,\mathrm{k}\Omega\,. \tag{31}\] This value was rounded to \(6.5\,\mathrm{k}\Omega\) due to the fact that our experimental sensitivity to resistance is of the order of \(250\,\Omega\).

Figure 7: An example of the underdamped oscillations measured across the resistor with circuit parameters \(R=800\,\Omega\), \(L=14.8\,\mathrm{mH}\) and \(C=1\,\mathrm{nF}\). The axes visible correspond to voltage and time.

Figure 8: The readout on the oscilloscope in the case of the critically damped circuit with \(R=7.47\,\mathrm{k}\Omega\) (red curve). The time interval required for the voltage to drop from the maximal value to the resolution of the oscilloscope was measured at \((28.0\pm 0.4)\,\mathrm{\mu s}\), which is the interval between two vertical red lines. The blue line is the switching voltage signal.
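The predicted resistances follow from the same machinery. The sketch below is an illustration based on eq. (23) and the measured \(L\) and \(C\); the energy level \(8\cdot 10^{-5}E_{0}\) is taken as the center of the interval quoted above:

```python
import numpy as np
from scipy.optimize import brentq

L, C = 14.8e-3, 1.06e-9          # measured circuit values

R_crit = 2.0 * np.sqrt(L / C)    # eq. (30): approx. 7.47 kOhm

def log_f(z):
    """Natural log of the left-hand side of eq. (23) as a function of zeta."""
    r = z / np.sqrt(1.0 - z**2)
    return -2.0 * r * (np.pi/2 + np.arctan(r))

# zeta_1 such that E(tau_A1)/E0 = 8e-5, then the corresponding resistance:
zeta1 = brentq(lambda z: log_f(z) - np.log(8e-5), 1e-6, 1.0 - 1e-12)
R1 = 2.0 * zeta1 * np.sqrt(L / C)
print(R_crit, zeta1, R1)         # approx. 7.47 kOhm, 0.872, 6.52 kOhm
```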
Note that since the experimental sensitivity to a change in resistance is of the order of \(\approx 7\%\), setting the value of the resistance to \(6.5\,\mathrm{k}\Omega\) in fact covers a range of \(\zeta\) which will include both \(\zeta_{1}\) and \(\zeta_{opt}\), which differs from it by a few percent. We first set the value of the resistance to the predicted value of the optimal resistance of \(6.5\,\mathrm{k}\Omega\). In Fig. 9 we show the readout from the oscilloscope for this value of the resistance. The time required for the voltage to drop from its maximum value to the resolution of the oscilloscope was \((22.4\pm 0.4)\,\mu\)s. Compared to the value of the time interval for the critically damped system, this is a decrease of \((20\pm 3)\%\), in accordance with the findings given in Table 1. Next, we systematically changed the resistance in steps of \(250\,\Omega\), from \(4\,\mathrm{k}\Omega\) to \(8\,\mathrm{k}\Omega\). In each subsequent change, we ensured that the maximum voltage of the signal was always set to \(800\) mV. We confirmed that, within the experimental resolution, the resistance for which the voltage drops to zero the soonest is equal to \(6.5\,\mathrm{k}\Omega\), as predicted by (31). Decreasing the value of the resistance below the optimal value, the overshoot of the voltage signal below the reference voltage slowly becomes noticeable (see supplementary material, section SVII). For resistances above the critical value, the system becomes overdamped and the time for the signal to drop to the reference level increases, as compared to the critical case. ## V Conclusion The results presented here show that critical damping, contrary to popular belief, is never the best choice if we want the damped oscillator to reach an equilibrium state as soon as possible. For most initial conditions, a unique damping coefficient in the underdamped regime can be determined for which the oscillator reaches the equilibrium state in the fastest manner possible. An exception to this occurs for a specific choice of initial conditions, when an overdamped oscillator becomes the optimal choice to reach the equilibrium state the fastest. These findings can be significant for understanding all systems described by the damped harmonic oscillator model, such as shock absorbers in cars [13], RLC and other oscillating circuits, measuring devices [14], etc. Recently, the damped oscillator model was used to characterize the convergence of algorithms in machine learning, and a connection between the differential equations associated with the algorithms and the damped oscillator model was established [15]. Although the algorithms converge to solutions with some finite precision, critical damping is considered desirable in terms of convergence speed [15]. We envision that our results may be significant in that context as well. ## VI Acknowledgments The authors thank the editors for useful comments and suggestions that significantly improved the clarity and quality of this work. This work was supported by the QuantiXLie Center of Excellence, a project co-financed by the Croatian Government and European Union through the European Regional Development Fund, the Competitiveness and Cohesion Operational Programme (Grant No. KK.01.1.1.01.0004). ## VII Author contributions K.L. and N.P. share first authorship. K.L. came up with the idea for the paper and did the work on the theoretical side. N.P. did the work on the experimental side. K.L. and N.P. wrote the paper. D.J.
participated in the discussions about theoretical issues and in the writing of some parts of the paper. The authors have no conflicts to disclose.

Figure 9: The readout on the oscilloscope in the case of the underdamped circuit with \(R=6.5\,\mathrm{k}\Omega\) (red curve). The time interval required for the voltage to drop from the maximal value to the resolution of the oscilloscope was measured to be \((22.4\pm 0.4)\,\mu\)s.

## Appendix A Lambert function In this section we discuss in more detail the solutions of equation (18). The solutions to this equation, \(\tau_{1}\) and \(\tau_{2}\), are moments in time for which the envelope \(A(t)\) of the underdamped oscillator equals the displacement \(x_{c}(t)\) of the oscillator in the critical regime. First, we note that (18) can be rewritten as \[\left(\frac{\omega}{\omega_{0}}+\omega\tau\right)e^{-(\omega_{0}-\gamma)\tau}=1, \tag{A1}\] after which the substitution \[u\equiv-\left(\omega_{0}-\gamma\right)\tau+\frac{\gamma-\omega_{0}}{\omega_{0}} \tag{A2}\] allows us to express (A1) in the form called the Lambert equation [8] \[ue^{u}=y \tag{A3}\] with the shorthand notation \[y\equiv\frac{\gamma-\omega_{0}}{\omega e^{\frac{\omega_{0}-\gamma}{\omega_{0}}}}. \tag{A4}\] We note here that \(-1/e<y<0\), since \(0<\gamma<\omega_{0}\) in the underdamped regime. In this case, the Lambert equation (A3) has two solutions for the unknown \(u\) [8]: \[u_{1}=W_{0}(y),\;u_{2}=W_{-1}(y), \tag{A5}\] where \(W_{0}\) and \(W_{-1}\) are two branches of the Lambert W function. We can write this compactly as \(u_{1,2}=W_{0,-1}(y)\). By inserting the solutions \(u_{1}\) and \(u_{2}\) back into the substitution (A2), we arrive at the two solutions for the crossing times \(\tau_{1}\) and \(\tau_{2}\), as written in (19): \[\tau_{1,2}=-\frac{1}{\omega_{0}-\gamma}W_{0,-1}\left(-\frac{\omega_{0}-\gamma}{\omega e^{\frac{\omega_{0}-\gamma}{\omega_{0}}}}\right)-\frac{1}{\omega_{0}}. \tag{A6}\] ## Appendix B Energy ratio (9) as a function of \(\gamma\) at different time instants \(t\) In Fig. 12 we show the base 10 logarithm of the energy ratio (9) as a function of \(\gamma\in[0,1.5\omega_{0}]\) for five time instants. We see that as time increases, new local minima are formed, but the global minimum persists and its position approaches the critical damping value \(\gamma=\omega_{0}\). The black dot on each of the curves indicates the energy for the \(\gamma\) for which the system is at the equilibrium position for the first time at that instant. We see that the subset of the damping coefficients, for which the energy is lower than the energy for this \(\gamma\), decreases in range as time increases. Thus, the difference between \(\gamma_{1}\) and \(\gamma_{opt}\) (from subsection III.4) decreases as we optimize towards lower and lower energies. Behavior for \(t>15\omega_{0}^{-1}\) is qualitatively the same.
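As a closing illustration of Appendix B, the short sketch below scans the energy ratio (9) over \(\gamma\) at a fixed instant and locates its global minimum numerically; the units are illustrative, and the instant \(t=6.17\omega_{0}^{-1}\) corresponds to the example of Fig. 4(b) in the main text:

```python
import numpy as np

w0, x0 = 1.0, 1.0   # illustrative units; x0 > 0, v0 = 0

def energy_ratio(t, gamma):
    """E(t)/E0 of eq. (9) on the underdamped solution (15); vectorized in gamma."""
    w = np.sqrt(w0**2 - gamma**2)
    A, phi = x0*w0/w, np.arctan(-gamma/w)
    x = A*np.exp(-gamma*t)*np.cos(w*t + phi)
    v = A*np.exp(-gamma*t)*(-gamma*np.cos(w*t + phi) - w*np.sin(w*t + phi))
    return (v**2 + w0**2*x**2) / (w0**2*x0**2)

gammas = np.linspace(0.01, 0.999, 5000)
ratios = energy_ratio(6.17, gammas)
print(gammas[np.argmin(ratios)])   # approx. 0.886, the global minimum of Fig. 4(b)
```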
2308.04113
Pentagon chain with spin orbit interactions: exact many-body ground states in the interacting case
Based on a positive semidefinite operator technique, exact ground states are deduced for the non-integrable conducting polymers possessing a pentagon type of unit cell. The study is done in the presence of many-body spin-orbit interaction (SOI), local and nearest neighbor Coulomb repulsion (NNCR), and external $E$ electric and $B$ magnetic fields, such that the effects of $B$ on both the orbital and spin degrees of freedom are considered. The presence of SOI, NNCR, and the presented external field configurations in exact conducting-polymer ground states is a novelty, so the development of the technique that makes the treatment of such strongly correlated cases possible is presented in detail. The deduced ground states show a broad spectrum of physical characteristics, ranging from charge density waves and metal-insulator transitions to interesting external field driven effects, such as the possibility of modifying a static charge distribution by a static external magnetic field.
Zsolt Gulacsi
2023-08-08T08:00:11Z
http://arxiv.org/abs/2308.04113v1
# Pentagon chain with spin orbit interactions: exact many-body ground states in the interacting case ###### Abstract Based on a positive semidefinite operator technique, exact ground states are deduced for the non-integrable conducting polymers possessing a pentagon type of unit cell. The study is done in the presence of many-body spin-orbit interaction (SOI), local and nearest neighbor Coulomb repulsion (NNCR), and external \(E\) electric and \(B\) magnetic fields, such that the effects of \(B\) on both the orbital and spin degrees of freedom are considered. The presence of SOI, NNCR, and the presented external field configurations in exact conducting-polymer ground states is a novelty, so the development of the technique that makes the treatment of such strongly correlated cases possible is presented in detail. The deduced ground states show a broad spectrum of physical characteristics, ranging from charge density waves and metal-insulator transitions to interesting external field driven effects, such as the possibility of modifying a static charge distribution by a static external magnetic field. ## I Introduction Given their multifunctional characteristics, such as simple synthesis, environmental stability, beneficial electronic, mechanical and optical properties, low cost and weight, or biocompatibility, conducting polymers are widely used across a broad spectrum of applications in advanced technology: electrochemistry applications, electrochemical sensing, energy storage, supercapacitors, batteries, sensors, fuel or solar cells, drug delivery in the organism, etc. [1]. If many-body spin-orbit interaction (SOI) is present in these systems, their applicability is enlarged and new properties emerge. For example, applications in spintronics become possible [2]; the rigid mathematical conditions leading to flat bands can be relaxed [3], allowing the emergence of different ordered phases under small perturbations; the way is paved for spin orbitronics in plastic materials; and even the inverse spin Hall effect appears, opening the doors for topological behavior [2]. It must be underlined that even if the SOI coupling (\(\lambda\)) is sometimes small in organic materials, it can be substantially enhanced (and even continuously tuned) by external fields [3], by pulsed ferromagnetic resonance [2], or by the introduction of intrachain atoms that considerably enhance the spin-orbit coupling (e.g. platinum) [4]. It should also be noted that the importance of SOI is stressed as well by the fact that, even when the spin-orbit coupling is small, i.e. \(\lambda<<1\), its effect is major, because it breaks the spin-projection double degeneracy of each band [5]. Furthermore, these systems are usually interacting, hence the leading term of the Coulomb interaction in many-body systems, the on-site Coulomb repulsion \(U>0\), attains even high values in organic materials [6]. In these conditions the nearest neighbor Coulomb repulsion \(V>0\), satisfying \(V<U\), also influences the physical behavior. Hence, in the description of these systems, the two extreme characteristic parameters \((\lambda,U)\) satisfy the relation \(U>>\lambda\). Consequently, a perturbative treatment becomes questionable in both the low and the high coupling constant limits, enforcing a special treatment in order to obtain exact results of good descriptive quality. The "special treatment" is accentuated here because these systems are usually non-integrable, so a Bethe-ansatz type of treatment is inapplicable in such cases.
In these conditions, exact results for conducting polymers in the presence of SOI have not been known to date. The aim of this paper is to change this state of affairs and to present the first exact results for conducting polymers under the presented conditions (\(\lambda\), \(V\) and \(U\) present, with the condition \(U>>\lambda\)) in the many-body, strongly correlated case. I note that one-particle exact solutions in the presence of SOI have already been published for bosonic 1D situations [7], but many-body interacting fermionic exact solutions in the presence of SOI for conducting polymers are not known. A frequently studied representative of conducting polymers is the polyaminotriazole type of chain with a pentagon unit cell, shown in Fig. 1, which will also be analyzed here. The procedure we use is based on positive semidefinite operator properties; a detailed description can be found in, e.g., Ref. [8], or in review papers such as Ref. [9]. The procedure, which works for non-integrable systems as well, in principle proceeds as follows: first, the Hamiltonian of the system is transformed, in exact terms, into a positive semidefinite form \(\hat{H}=\sum_{\nu}\hat{P}_{\nu}+C\), where \(C\) is a scalar and \(\hat{P}_{\nu}\) are positive semidefinite operators. The transformation of the Hamiltonian connects the parameters of the Hamiltonian to the parameters of the \(\hat{P}_{\nu}\) operators via a nonlinear system of matching equations, which must be solved in order to complete the transformation process. The positive semidefinite operators by definition satisfy the relation \(\langle\Psi|\hat{P}_{\nu}|\Psi\rangle\geq 0\), hence their minimum possible eigenvalue is zero. This property underlines the advantages and the appeal of the positive semidefinite operator procedure: instead of trying to deduce a ground state with an arbitrary eigenvalue, we can concentrate on the deduction of a ground state whose eigenvalue has a well-defined position, which often represents a much more doable task. One might think that this job requires knowing, or guessing, the value of the scalar \(C\) a priori, but this is not true. \(C\) is connected through several equations to the parameters of the operators \(\hat{P}_{\nu}\) and to the finally deduced ground state expression (which is also connected to the parameters of \(\hat{P}_{\nu}\)), so \(C\) becomes known only at the end of the calculation. Consequently, \(\hat{H}^{\prime}=\hat{H}-C\) has the minimum possible eigenvalue zero, corresponding to the ground state eigenfunction \(|\Psi_{0}\rangle\). The latter is deduced from \((\sum_{\nu}\hat{P}_{\nu})|\Psi_{0}\rangle=0\), using advanced techniques; see e.g. Refs. [8; 9]. The procedure provides valuable results even in three dimensions [10] or in two-dimensional disordered systems [11]. Concerning the technique used, it must be underlined [9] that, contrary to the early stage of the method of positive semidefinite operators, where in the first step the ground state was constructed and the Hamiltonian adjusted to it (usually by extension terms in the Hamiltonian), the procedure used here starts from a given Hamiltonian and coherently deduces the ground state corresponding to it, without knowing it a priori and without using extension terms. Concerning pentagon chains, until now only nearest neighbor hoppings and the Hubbard interaction have been considered.
The novelty of the present paper is that, besides the Hamiltonian terms mentioned above, in deducing the exact ground states we also consider SOI, nearest-neighbor Coulomb repulsion, and the presence of external fields; the external magnetic field is taken into account both through its effect on the orbital motion of the carriers (via Peierls phase factors) and through its effect on the spin degrees of freedom (via the Zeeman term). In these conditions, the paper focuses on the development of the technique, i.e., on constructing a procedure which allows the treatment of the system in the presented circumstances. Hence one concentrates on the transformation of the Hamiltonian into positive semidefinite form, on the solution strategy for the matching equations, and on the construction process of the exact ground states, items that, for a coherent presentation, require considerable and non-negligible room. Hence, the presentation of the physical properties of the deduced ground states and phases is only sketched (I also note that this job needs extensive further supplementary study), and the detailed presentation of the physical properties of the deduced exact ground states is left to a future publication. Nevertheless, the deduced physical properties presented here underline a broad spectrum of interesting characteristics, such as emerging charge density wave phases, the possibility of modifying a static charge distribution by an external static magnetic field, switching possibilities between insulating and conducting phases driven by external magnetic fields, the creation of effective flat bands provided not by the bare band structure but by the two-particle local or non-local Coulomb interactions, peculiar ferromagnetic states, etc.

Figure 1: The pentagon unit cell. The numbers denote the value of \(n\), the in-cell numbering of the sites. The origin of the system of coordinates from which one analyses the cell is denoted by 0. The hopping matrix elements \(t,t_{c},t_{h},t_{f}\) are indicated on the bonds, \({\bf i}\) denotes the lattice site to which the cell corresponds, and \({\bf a}\) is the lattice constant.

The remaining part of the paper is structured as follows: Section II presents the studied system, the transformation of the Hamiltonian in positive semidefinite form, and the solutions of the matching equations in the presence and absence of external fields. Section III presents the deduction procedure of the exact ground states, and the obtained ground states themselves. Section IV presents the summary and conclusions, while the six Appendices attached at the end of the paper contain the mathematical details.
## II The system analyzed ### The case of the zero external magnetic field The Hamiltonian of the system is \(\hat{H}=\hat{H}_{K}+\hat{H}_{U}+\hat{H}_{V}\), where one has \[\hat{H}_{K} = \sum_{i}\sum_{\sigma,\sigma^{\prime}}[t^{\sigma,\sigma^{\prime}}(\hat{c}^{\dagger}_{i,1,\sigma}\hat{c}_{i,5,\sigma^{\prime}}+\hat{c}^{\dagger}_{i,2,\sigma}\hat{c}_{i,1,\sigma^{\prime}}+\hat{c}^{\dagger}_{i,4,\sigma}\hat{c}_{i,3,\sigma^{\prime}}+\hat{c}^{\dagger}_{i,5,\sigma}\hat{c}_{i,4,\sigma^{\prime}})+t^{\sigma,\sigma^{\prime}}_{h}\hat{c}^{\dagger}_{i,3,\sigma}\hat{c}_{i,2,\sigma^{\prime}}\] \[+ t^{\sigma,\sigma^{\prime}}_{f}\hat{c}^{\dagger}_{i,6,\sigma}\hat{c}_{i,5,\sigma^{\prime}}+t^{\sigma,\sigma^{\prime}}_{c}\hat{c}^{\dagger}_{i+a,7,\sigma}\hat{c}_{i,4,\sigma^{\prime}}+h.c.]+\hat{H}_{\epsilon},\quad\hat{H}_{\epsilon}=\sum_{i,n}\sum_{\sigma,\sigma^{\prime}}\epsilon_{n}^{\sigma,\sigma^{\prime}}\hat{c}^{\dagger}_{i,n,\sigma}\hat{c}_{i,n,\sigma^{\prime}},\] \[\hat{H}_{U} = \sum_{i}\sum_{n=1,2..6}U_{n}\hat{n}_{i,n,\uparrow}\hat{n}_{i,n,\downarrow},\quad\hat{H}_{V}=V\sum_{<i,j>}\sum_{<n,n^{\prime}>}\hat{n}_{i,n}\hat{n}_{j,n^{\prime}} \tag{1}\] Fig. 1 presents the unit cell of the system, containing the in-cell numbering of the sites (\(n\)) between 1 and 6. The hopping matrix elements and the on-site one-particle potentials carry \((\sigma,\sigma^{\prime})\) spin projection indices in order to allow for the presence of the many-body SOI. To better match the experimental situations, the antenna (describing the upper connected atoms), i.e. bond 5-6, the lower flank (bond 2-3), and the inter-cell connection (bond 4-7) are taken to allow different hopping matrix elements, permitting comparison with different experimental realizations of the pentagon chain. The \(\hat{H}_{\epsilon}\) term collects the on-site one-particle potential terms. The \(U_{n}>0\) coefficients represent the on-site Coulomb repulsion (Hubbard interaction), while \(V\geq 0\) represents the nearest-neighbor Coulomb repulsion.
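As a quick illustration of the kinetic part of Eq. (1), the following sketch builds the spin-diagonal intra-cell hopping matrix of a single pentagon cell and diagonalizes it. The hopping values are hypothetical placeholders (not fitted to any material), and the SOI spin-flip amplitudes \(t^{\uparrow,\downarrow}\), which would enter as additional spin-off-diagonal blocks, are omitted for brevity:

```python
import numpy as np

# Hypothetical hopping amplitudes (illustrative only):
t, t_h, t_f = 1.0, 0.8, 0.9

# Intra-cell bonds of eq. (1): (1,5), (2,1), (4,3), (5,4) with t,
# the lower flank (3,2) with t_h, and the antenna (6,5) with t_f.
bonds = [(1, 5, t), (2, 1, t), (4, 3, t), (5, 4, t), (3, 2, t_h), (6, 5, t_f)]

H = np.zeros((6, 6))
for m, n, hop in bonds:
    H[m - 1, n - 1] = hop
    H[n - 1, m - 1] = hop      # Hermitian conjugate terms (+ h.c.)

print(np.linalg.eigvalsh(H))   # bare in-cell spectrum of one isolated pentagon cell
```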
In order to transform \(\hat{H}\), in exact terms, into a positive semidefinite form, one introduces ten block operators \(\hat{B}_{i,z,\sigma}\), \(z=1,2,...5\), as follows: \[\hat{B}_{i,1,\uparrow}=a_{1,1}\hat{c}_{i,1,\uparrow}+a_{1,2}\hat{c}_{i,2,\uparrow}+a_{1,5}\hat{c}_{i,5,\uparrow}+b_{1,1}\hat{c}_{i,1,\downarrow},\] \[\hat{B}_{i,1,\downarrow}=a_{1,1}\hat{c}_{i,1,\downarrow}+a_{1,2}\hat{c}_{i,2,\downarrow}+a_{1,5}\hat{c}_{i,5,\downarrow}+b^{\prime}_{1,1}\hat{c}_{i,1,\uparrow},\] \[\hat{B}_{i,2,\uparrow}=a_{2,2}\hat{c}_{i,2,\uparrow}+a_{2,3}\hat{c}_{i,3,\uparrow}+a_{2,5}\hat{c}_{i,5,\uparrow},\] \[\hat{B}_{i,2,\downarrow}=a_{2,2}\hat{c}_{i,2,\downarrow}+a_{2,3}\hat{c}_{i,3,\downarrow}+a_{2,5}\hat{c}_{i,5,\downarrow},\] \[\hat{B}_{i,3,\uparrow}=a_{3,3}\hat{c}_{i,3,\uparrow}+a_{3,4}\hat{c}_{i,4,\uparrow}+a_{3,5}\hat{c}_{i,5,\uparrow}+b_{3,4}\hat{c}_{i,4,\downarrow},\] \[\hat{B}_{i,3,\downarrow}=a_{3,3}\hat{c}_{i,3,\downarrow}+a_{3,4}\hat{c}_{i,4,\downarrow}+a_{3,5}\hat{c}_{i,5,\downarrow}+b_{3,4}^{\prime}\hat{c}_{i,4,\uparrow},\] \[\hat{B}_{i,4,\uparrow}=a_{4,5}\hat{c}_{i,5,\uparrow}+a_{4,6}\hat{c}_{i,6,\uparrow},\] \[\hat{B}_{i,4,\downarrow}=a_{4,5}\hat{c}_{i,5,\downarrow}+a_{4,6}\hat{c}_{i,6,\downarrow},\] \[\hat{B}_{i,5,\uparrow}=a_{5,4}\hat{c}_{i,4,\uparrow}+a_{5,7}\hat{c}_{i+a,7,\uparrow}+b_{5,4}\hat{c}_{i,4,\downarrow}+b_{5,7}\hat{c}_{i+a,7,\downarrow},\] \[\hat{B}_{i,5,\downarrow}=a_{5,4}\hat{c}_{i,4,\downarrow}+a_{5,7}\hat{c}_{i+a,7,\downarrow}+b_{5,4}^{\prime}\hat{c}_{i,4,\uparrow}+b_{5,7}^{\prime}\hat{c}_{i+a,7,\uparrow}. \tag{2}\] Here \(i\) represents the lattice site, \(z\) denotes the block operator index (\(z=1,2,...5\)), so \((z,\sigma)\) takes 10 different values, while \(\sigma\) represents the spin projection. The coefficients \(a_{z,n}\) and \(b_{z,n}\), called block operator parameters, are indexed by the block operator index \(z\) and by \(n\), the in-cell position on which the operator following the coefficient acts. The coefficients \(a_{z,n}\) are present in terms whose spin projection is equal to the spin projection of the block operator, while the \(b_{z,n}\) coefficients denote terms whose spin projection is opposite to the spin projection of the block operator. In total one has 21 block operator parameters (13 \(a_{z,n}\) values and 8 \(b_{z,n}\) values), which at the moment are unknown, but can be determined from the transformation of the starting Hamiltonian (1) into the positive semidefinite Hamiltonian \[\hat{H}=\sum_{i,z,\sigma}\hat{B}_{i,z,\sigma}^{\dagger}\hat{B}_{i,z,\sigma}+\hat{H}_{U}+\hat{H}_{V}. \tag{3}\] At this step I must note that the presented expression of the block operators in (2) has been optimized for the analyzed problem. This optimization has in fact two main aspects, namely: i) Because of the translational symmetry, the block operator parameters are the same in each unit cell, i.e., they are \(i\) independent. ii) The opposite-spin components in the block operators are optimized in order to minimally describe the requirements raised by the presence of SOI, which in fact leads to spin-flip hoppings. Because of this, there are far fewer \(b_{z,n}\) coefficients than \(a_{z,n}\) coefficients in the block operators, which simplifies and helps the mathematical treatment. If (3) is to represent the same Hamiltonian as (1), there must be a relationship between the block operator parameters and the starting parameters of the Hamiltonian (1). These relations, called the matching equations, are presented in the following subsection.
### Matching equations in zero external magnetic field.

The matching equations are obtained by carrying out the product \(\hat{B}^{\dagger}_{i,z,\sigma}\hat{B}_{i,z,\sigma}\) together with the sum \(\sum_{i,z,\sigma}\) in (3), and equating each resulting term to the corresponding term in (1). E.g. the coefficient of the operator \(\hat{c}^{\dagger}_{i,1,\uparrow}\hat{c}_{i,5,\uparrow}\) in (3) is \(a^{*}_{1,1}a_{1,5}\), while in (1) it is \(t^{\uparrow,\uparrow}_{1,5}\); the relation is spin projection independent, from which the third line of (4) follows, etc. For the hoppings without spin-flip one obtains ten equations: \[0=t^{\uparrow,\uparrow}_{2,5}=t^{\downarrow,\downarrow}_{2,5}=a^{*}_{1,2}a_{1,5}+a^{*}_{2,2}a_{2,5},\] \[0=t^{\uparrow,\uparrow}_{3,5}=t^{\downarrow,\downarrow}_{3,5}=a^{*}_{3,3}a_{3,5}+a^{*}_{2,3}a_{2,5},\] \[t=t^{\uparrow,\uparrow}_{1,5}=t^{\downarrow,\downarrow}_{1,5}=a^{*}_{1,1}a_{1,5},\] \[t=t^{\uparrow,\uparrow}_{2,1}=t^{\downarrow,\downarrow}_{2,1}=a^{*}_{1,2}a_{1,1},\] \[t=t^{\uparrow,\uparrow}_{4,3}=t^{\downarrow,\downarrow}_{4,3}=a^{*}_{3,4}a_{3,3},\] \[t=t^{\uparrow,\uparrow}_{5,4}=t^{\downarrow,\downarrow}_{5,4}=a^{*}_{3,5}a_{3,4},\] \[t_{h}=t^{\uparrow,\uparrow}_{3,2}=t^{\downarrow,\downarrow}_{3,2}=a^{*}_{2,3}a_{2,2},\] \[t_{f}=t^{\uparrow,\uparrow}_{6,5}=t^{\downarrow,\downarrow}_{6,5}=a^{*}_{4,6}a_{4,5},\] \[t_{c}=t^{\uparrow,\uparrow}_{c}=t^{\uparrow,\uparrow}_{7,4}=a^{*}_{5,7}a_{5,4}+b^{\prime}{}_{5,7}^{*}b^{\prime}_{5,4},\] \[t_{c}=t^{\downarrow,\downarrow}_{c}=t^{\downarrow,\downarrow}_{7,4}=a^{*}_{5,7}a_{5,4}+b^{*}_{5,7}b_{5,4}. \tag{4}\]

The spin-flip hoppings also provide ten equations, as follows: \[t^{\uparrow,\downarrow}_{1,5}=b^{\prime}{}_{1,1}^{*}a_{1,5},\] \[t^{\downarrow,\uparrow}_{1,5}=\frac{t^{\uparrow,\downarrow}_{1,5}}{\alpha}=b^{*}_{1,1}a_{1,5},\] \[t^{\uparrow,\downarrow}_{2,1}=a^{*}_{1,2}b_{1,1},\] \[t^{\downarrow,\uparrow}_{2,1}=\frac{t^{\uparrow,\downarrow}_{2,1}}{\alpha}=a^{*}_{1,2}b^{\prime}{}_{1,1},\] \[t^{\uparrow,\downarrow}_{5,4}=a^{*}_{3,5}b_{3,4},\] \[t^{\downarrow,\uparrow}_{5,4}=\frac{t^{\uparrow,\downarrow}_{5,4}}{\alpha}=a^{*}_{3,5}b^{\prime}{}_{3,4},\] \[t^{\uparrow,\downarrow}_{4,3}=b^{\prime}{}_{3,4}^{*}a_{3,3},\] \[t^{\downarrow,\uparrow}_{4,3}=\frac{t^{\uparrow,\downarrow}_{4,3}}{\alpha}=b^{*}_{3,4}a_{3,3},\] \[t^{\uparrow,\downarrow}_{c}=t^{\uparrow,\downarrow}_{7,4}=b^{\prime}{}_{5,7}^{*}a_{5,4}+a^{*}_{5,7}b_{5,4},\] \[t_{c}^{\downarrow,\uparrow}=t_{7,4}^{\downarrow,\uparrow}=\frac{t_{7,4}^{\uparrow,\downarrow}}{\alpha_{c}}=b_{5,7}^{*}a_{5,4}+a_{5,7}^{*}b^{\prime}{}_{5,4}. \tag{5}\]

In Eqs. (5) the coefficients \(\alpha,\alpha_{c}\) take into account the difference between the hoppings \(t_{n,n^{\prime}}^{\uparrow,\downarrow}\) and \(t_{n,n^{\prime}}^{\downarrow,\uparrow}\) caused by the SOI. In the present case, taking into account only the Rashba term for conducting polymers [12], one has \(\alpha=\alpha_{c}=-1\).
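Anticipating the closed-form zero-field solution quoted in Eq. (9) below, relations of this type can be checked numerically. The following minimal sketch (with hypothetical parameter values respecting \(\epsilon_{2}>t_{h}>0\)) verifies the \(z=1,2\) entries of Eq. (4), plus one on-site relation appearing in Eq. (6) below:

```python
import numpy as np

# Hypothetical parameter values in the allowed domain eps2 > t_h > 0.
t, t_h, eps2 = 1.3, 0.7, 2.1
th1, th2 = 0.4, -1.1          # arbitrary phases theta_1, theta_2 of Eq. (9)

# z = 1, 2 block operator parameters, taken from the solution (9) below.
a11 = abs(t) / np.sqrt(eps2 - t_h) * np.exp(1j * th1)
a12 = a15 = np.sign(t) * np.sqrt(eps2 - t_h) * np.exp(1j * th1)
a22 = a23 = np.sqrt(t_h) * np.exp(1j * th2)
a25 = -(eps2 - t_h) / np.sqrt(t_h) * np.exp(1j * th2)

assert np.isclose(np.conj(a11) * a15, t)                        # t   = a11* a15
assert np.isclose(np.conj(a12) * a11, t)                        # t   = a12* a11
assert np.isclose(np.conj(a23) * a22, t_h)                      # t_h = a23* a22
assert np.isclose(np.conj(a12) * a15 + np.conj(a22) * a25, 0)   # t_{2,5} = 0
assert np.isclose(abs(a12)**2 + abs(a22)**2, eps2)              # eps_2 on-site
print("z = 1, 2 matching equations verified")
```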
The non-spin-flip on-site one-particle potentials provide eight matching equations, which are as follows: \[\epsilon_{4}=\epsilon_{6}^{\uparrow,\uparrow}=\epsilon_{6}^{\downarrow,\downarrow}=|a_{4,6}|^{2},\] \[\epsilon_{3}=\epsilon_{5}^{\uparrow,\uparrow}=\epsilon_{5}^{\downarrow,\downarrow}=|a_{4,5}|^{2}+|a_{1,5}|^{2}+|a_{2,5}|^{2}+|a_{3,5}|^{2},\] \[\epsilon_{2}=\epsilon_{2}^{\uparrow,\uparrow}=\epsilon_{2}^{\downarrow,\downarrow}=|a_{1,2}|^{2}+|a_{2,2}|^{2},\] \[\epsilon_{2}=\epsilon_{3}^{\uparrow,\uparrow}=\epsilon_{3}^{\downarrow,\downarrow}=|a_{2,3}|^{2}+|a_{3,3}|^{2},\] \[\epsilon_{1}=\epsilon_{1}^{\uparrow,\uparrow}=|a_{1,1}|^{2}+|a_{5,7}|^{2}+|b^{\prime}_{1,1}|^{2}+|b^{\prime}_{5,7}|^{2},\] \[\epsilon_{1}=\epsilon_{1}^{\downarrow,\downarrow}=|a_{1,1}|^{2}+|a_{5,7}|^{2}+|b_{1,1}|^{2}+|b_{5,7}|^{2},\] \[\epsilon_{1}=\epsilon_{4}^{\uparrow,\uparrow}=|a_{3,4}|^{2}+|a_{5,4}|^{2}+|b^{\prime}_{3,4}|^{2}+|b^{\prime}_{5,4}|^{2},\] \[\epsilon_{1}=\epsilon_{4}^{\downarrow,\downarrow}=|a_{3,4}|^{2}+|a_{5,4}|^{2}+|b_{3,4}|^{2}+|b_{5,4}|^{2}. \tag{6}\]

For the spin-flip on-site one-particle potentials one automatically has \(\epsilon_{6}^{\uparrow,\downarrow}=\epsilon_{6}^{\downarrow,\uparrow}=0,\ \epsilon_{5}^{\uparrow,\downarrow}=\epsilon_{5}^{\downarrow,\uparrow}=0,\ \epsilon_{3}^{\uparrow,\downarrow}=\epsilon_{3}^{\downarrow,\uparrow}=0,\ \epsilon_{2}^{\uparrow,\downarrow}=\epsilon_{2}^{\downarrow,\uparrow}=0,\) while \(\epsilon_{1}^{\downarrow,\uparrow}=(\epsilon_{1}^{\uparrow,\downarrow})^{*},\ \epsilon_{4}^{\downarrow,\uparrow}=(\epsilon_{4}^{\uparrow,\downarrow})^{*}\) is satisfied. The last two relations further provide two matching equations: \[\epsilon_{1}^{\uparrow,\downarrow}=a_{1,1}^{*}b_{1,1}+a_{5,7}^{*}b_{5,7}+b^{\prime}{}_{1,1}^{*}a_{1,1}+b^{\prime}{}_{5,7}^{*}a_{5,7},\] \[\epsilon_{4}^{\uparrow,\downarrow}=a_{3,4}^{*}b_{3,4}+a_{5,4}^{*}b_{5,4}+b^{\prime}{}_{3,4}^{*}a_{3,4}+b^{\prime}{}_{5,4}^{*}a_{5,4}. \tag{7}\]

The special treatment of \(\epsilon_{1}^{\uparrow,\downarrow},\epsilon_{4}^{\uparrow,\downarrow}\) is motivated by the fact that the block operators connected to the bond \(4-7\), i.e. to the sites \(1,4\), contain both spin projection components. This is done only to aid the mathematical treatment, and during the solution of the matching equations the left-hand sides of Eqs. (7) will be set to zero. Consequently, one has in total \(30\) matching equations. These are coupled, complex, non-linear algebraic equations, which must be solved for the block operator parameters in order to transform the Hamiltonian of Eq. (1) exactly into the Hamiltonian of Eq. (3). Having in mind the transformation to the Hamiltonian form of (3), one realizes that, based on the presented procedure, another transformation of the Hamiltonian (1) into positive semidefinite form is possible (see Appendix A), namely \[\hat{H}=\sum_{i,z,\sigma}\hat{B}_{i,z,\sigma}\hat{B}_{i,z,\sigma}^{\dagger}+\sum_{i}\sum_{n=1,2,...,6}U_{n}\hat{P}_{i,n}+\sum_{<l,k>}V\hat{R}_{<l,k>}+C, \tag{8}\] where \(C=q_{U}N-N_{c}\sum_{z,\sigma}y_{z,\sigma}-N_{c}\sum_{n=1,2,...,6}U_{n}-28VN_{c}\), \(N\) being the number of electrons, \(N_{c}\) the number of lattice sites, \(y_{z,\sigma}=\{\hat{B}_{i,z,\sigma},\hat{B}_{i,z,\sigma}^{\dagger}\}\), and \(q_{U}\) a constant which enters the matching equations.
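Since each block operator is linear in the fermionic operators, the anticommutators \(y_{z,\sigma}\) entering the constant \(C\) reduce to squared norms of the coefficient vectors. As a quick sketch, for \(\hat{B}=\sum_{m}v_{m}\hat{c}_{m}\) the relation \(\{\hat{c}_{m},\hat{c}^{\dagger}_{n}\}=\delta_{m,n}\) gives \[y_{z,\sigma}=\{\hat{B}_{i,z,\sigma},\hat{B}^{\dagger}_{i,z,\sigma}\}=\sum_{m}|v_{m}|^{2},\qquad\text{e.g.}\quad y_{1,\uparrow}=|a_{1,1}|^{2}+|a_{1,2}|^{2}+|a_{1,5}|^{2}+|b_{1,1}|^{2},\] which also makes explicit that \(y_{z,\sigma}\) is the same in each unit cell.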
In deducing the \(\hat{B}_{i,z,\sigma}\) operators entering (8), the procedure described in this section must be applied, the results being identical, but the Hamiltonian parameters must be changed according to (A9), namely \(t_{i,j}^{\sigma,\sigma}\rightarrow-t_{i,j}^{\sigma,\sigma}\), \(t_{i,j}^{\sigma,\sigma^{\prime}}\rightarrow-t_{i,j}^{\sigma,\sigma^{\prime}}\), \(\epsilon_{i}^{\sigma,\sigma}\to q_{U}-(\epsilon_{i}^{\sigma,\sigma}+U_{i}+2Vr_{i})\), \(\epsilon_{i}^{\sigma,\sigma^{\prime}}\rightarrow-\epsilon_{i}^{\sigma,\sigma^{\prime}}\), where \(\sigma\neq\sigma^{\prime}\) holds. Here, \(\hat{P}_{i,n}=\hat{n}_{i,n}^{\uparrow}\hat{n}_{i,n}^{\downarrow}-(\hat{n}_{i,n}^{\uparrow}+\hat{n}_{i,n}^{\downarrow})+1=(1-\hat{n}_{i,n}^{\uparrow})(1-\hat{n}_{i,n}^{\downarrow})\) represents a positive semidefinite operator, which attains its minimum possible eigenvalue, zero, when at least one electron is present at the site \((i,n)\) (i.e. lattice site \(i\) and in-cell position \(n\)). Furthermore, \(\hat{R}_{<l,k>}=\hat{n}_{l}\hat{n}_{k}-2(\hat{n}_{l}+\hat{n}_{k})+4=(\hat{n}_{l}-2)(\hat{n}_{k}-2)\) is a positive semidefinite operator, which attains its minimum possible eigenvalue, zero, when the \(l,k\) nearest-neighbor sites are both at least once occupied, but such that at least one of them is doubly occupied. At this step I must underline that a given starting Hamiltonian can be transformed into different positive semidefinite forms, each transformation placing the corresponding ground state in a different region of the parameter space [15].

### Solution of the matching equations in zero external magnetic field.

The detailed solution of the matching equations is presented in Appendix B. The final result reads as follows: \[a_{1,1}=\frac{|t|e^{i\theta_{1}}}{\sqrt{\epsilon_{2}-t_{h}}},\;a_{1,2}=a_{1,5}=sign(t)\sqrt{\epsilon_{2}-t_{h}}\;e^{i\theta_{1}},\] \[a_{2,5}=-\frac{\epsilon_{2}-t_{h}}{\sqrt{t_{h}}}e^{i\theta_{2}},\;a_{2,2}=a_{2,3}=\sqrt{t_{h}}\;e^{i\theta_{2}},\] \[a_{3,4}=\frac{|t|}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{3}},\;a_{3,3}=a_{3,5}=sign(t)\sqrt{\epsilon_{2}-t_{h}}\;e^{i\theta_{3}},\] \[a_{4,6}=\sqrt{\epsilon_{4}}\;e^{i\theta_{4}},\;a_{4,5}=\frac{t_{f}}{\sqrt{\epsilon_{4}}}\;e^{i\theta_{4}},\] \[a_{5,4}=\frac{\sqrt{\gamma_{1}}}{\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{(t_{c}^{\uparrow,\downarrow})^{2}}}}e^{i\theta_{5}},\;a_{5,7}=\frac{\sqrt{\gamma_{2}}}{\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{(t_{c}^{\uparrow,\downarrow})^{2}}}}e^{i\theta_{5}},\] \[b_{1,1} = sign(t)\frac{t_{2,1}^{\uparrow,\downarrow}}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{1}},\;b^{\prime}_{1,1}=-sign(t)\frac{t_{2,1}^{\uparrow,\downarrow}}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{1}},\] \[b_{3,4} = sign(t)\frac{t_{5,4}^{\uparrow,\downarrow}}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{3}},\;b^{\prime}_{3,4}=-sign(t)\frac{t_{5,4}^{\uparrow,\downarrow}}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{3}},\] \[b_{5,4} = \frac{\gamma_{1}\sqrt{\gamma_{2}}-t_{c}\sqrt{\gamma_{1}}}{t_{c}^{\uparrow,\downarrow}\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{(t_{c}^{\uparrow,\downarrow})^{2}}}}e^{i\theta_{5}},\;b^{\prime}_{5,4}=-\frac{\gamma_{1}\sqrt{\gamma_{2}}-t_{c}\sqrt{\gamma_{1}}}{t_{c}^{\uparrow,\downarrow}\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{(t_{c}^{\uparrow,\downarrow})^{2}}}}e^{i\theta_{5}},\] \[b_{5,7} = \frac{t_{c}\sqrt{\gamma_{2}}-\gamma_{2}\sqrt{\gamma_{1}}}{t_{c}^{\uparrow,\downarrow}\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{(t_{c}^{\uparrow,\downarrow})^{2}}}}e^{i\theta_{5}},\;b^{\prime}_{5,7}=-\frac{t_{c}\sqrt{\gamma_{2}}-\gamma_{2}\sqrt{\gamma_{1}}}{t_{c}^{\uparrow,\downarrow}\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{(t_{c}^{\uparrow,\downarrow})^{2}}}}e^{i\theta_{5}}, \tag{9}\] where \(\theta_{1},\theta_{2},...\theta_{5}\) are arbitrary phases. The deduced solution lies in the following parameter space domain: \[\epsilon_{1}>0,\;\epsilon_{2}>0,\;\epsilon_{3}>0,\;\epsilon_{4}>0,\;t_{h}>0,\;\epsilon_{2}-t_{h}>0,\] \[\gamma_{1}=\epsilon_{1}-\frac{t^{2}+(t_{2,1}^{\uparrow,\downarrow})^{2}}{\epsilon_{2}-t_{h}}>0,\;\gamma_{2}=\epsilon_{1}-\frac{t^{2}+(t_{5,4}^{\uparrow,\downarrow})^{2}}{\epsilon_{2}-t_{h}}>0,\] \[\epsilon_{3}=\frac{t_{f}^{2}}{\epsilon_{4}}+\frac{\epsilon_{2}^{2}-t_{h}^{2}}{t_{h}},\;\gamma_{1}\gamma_{2}=t_{c}^{2}+(t_{c}^{\uparrow,\downarrow})^{2}. \tag{10}\]

### The case of the non-zero external magnetic field

We also analyze the case when an external magnetic field acts on the system perpendicular to the plane containing the polymer, i.e. perpendicular to the unit cell. The effect of the magnetic field is taken into account via i) the Peierls phase factors attached to the hopping matrix elements, describing the action on the orbital motion, and ii) the Zeeman term, describing the effect on the spin degrees of freedom. To be explicit, for \(B\neq 0\), introducing the notation \(\hat{H}_{1}=\hat{H}_{K}-\hat{H}_{\epsilon}\), the \(\hat{H}_{1}\) part of the one-particle Hamiltonian must be carefully transcribed as follows: \[\hat{H}_{1} = \sum_{i}\sum_{\sigma,\sigma^{\prime}}[t_{1,5}^{\sigma,\sigma^{\prime}}e^{i\phi_{1,5}}\hat{c}_{i,1,\sigma}^{\dagger}\hat{c}_{i,5,\sigma^{\prime}}+t_{2,1}^{\sigma,\sigma^{\prime}}e^{i\phi_{2,1}}\hat{c}_{i,2,\sigma}^{\dagger}\hat{c}_{i,1,\sigma^{\prime}}+t_{4,3}^{\sigma,\sigma^{\prime}}e^{i\phi_{4,3}}\hat{c}_{i,4,\sigma}^{\dagger}\hat{c}_{i,3,\sigma^{\prime}}+t_{5,4}^{\sigma,\sigma^{\prime}}e^{i\phi_{5,4}}\hat{c}_{i,5,\sigma}^{\dagger}\hat{c}_{i,4,\sigma^{\prime}}\] \[+ t_{7,4}^{\sigma,\sigma^{\prime}}e^{i\phi_{7,4}}\hat{c}_{i+a,7,\sigma}^{\dagger}\hat{c}_{i,4,\sigma^{\prime}}+t_{3,2}^{\sigma,\sigma^{\prime}}e^{i\phi_{3,2}}\hat{c}_{i,3,\sigma}^{\dagger}\hat{c}_{i,2,\sigma^{\prime}}+t_{6,5}^{\sigma,\sigma^{\prime}}e^{i\phi_{6,5}}\hat{c}_{i,6,\sigma}^{\dagger}\hat{c}_{i,5,\sigma^{\prime}}+h.c.], \tag{11}\] hence the total Hamiltonian becomes \[\hat{H}=\hat{H}_{1}+\hat{H}_{\epsilon}+\hat{H}_{U}+\hat{H}_{V}+\hat{H}_{Z}. \tag{12}\]

For the Peierls phase factors one has \(\phi_{3,2}=\phi_{1}\), \(\phi_{4,3}=\phi_{2,1}=\phi_{2}\), \(\phi_{5,4}=\phi_{1,5}=\phi_{3}\), \(\phi_{6,5}=\phi_{7,4}=0\), and \(\phi=\phi_{1}+2\phi_{2}+2\phi_{3}=2\pi\Phi/\Phi_{0}\), where \(\Phi\) is the magnetic flux threading the unit cell, and \(\Phi_{0}=hc/e\) is the flux quantum. Separately for each phase, one has \(\phi_{\alpha}=2\pi BS_{\alpha}/\Phi_{0}\), where \(\alpha=1,2,3\); furthermore \(S_{1}=A(0,2,3)\), \(S_{2}=A(0,3,4)\), \(S_{3}=A(0,4,5)\), where \(A(i,j,k)\) represents the area of the triangle defined by the points \((i,j,k)\) in Fig. 1 presenting the unit cell. For the total unit-cell area one trivially has \(S=S_{1}+2S_{2}+2S_{3}\). The Zeeman term in Eq. (12) has the standard form \(\hat{H}_{Z}=-h\sum_{i,n}(\hat{n}_{i,n,\uparrow}-\hat{n}_{i,n,\downarrow})\). I note that \(\hat{H}_{Z}\) maintains the structure of \(\hat{H}\), since it only renormalizes the on-site one-particle potentials in \(\hat{H}_{\epsilon}\) by \(\epsilon_{i}^{\sigma,\sigma}\rightarrow\epsilon_{i}^{\sigma,\sigma}+\mu h\), where \(\mu=-1\) for \(\sigma=\uparrow\) and \(\mu=+1\) for \(\sigma=\downarrow\).
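The flux bookkeeping behind the Peierls phases can be illustrated with a short numerical sketch; the triangle areas below are hypothetical placeholders, not values taken from Fig. 1:

```python
import numpy as np

# Peierls-phase bookkeeping for the pentagon unit cell (placeholder areas).
S1, S2, S3 = 0.2, 0.35, 0.25        # areas A(0,2,3), A(0,3,4), A(0,4,5)
B_over_Phi0 = 0.6                   # magnetic field in units of Phi_0 / area

phi1, phi2, phi3 = (2 * np.pi * B_over_Phi0 * S for S in (S1, S2, S3))
S_cell = S1 + 2 * S2 + 2 * S3       # total cell area threaded by the flux
phi_total = phi1 + 2 * phi2 + 2 * phi3

# phi = phi_1 + 2 phi_2 + 2 phi_3 equals 2 pi Phi / Phi_0 with Phi = B * S.
assert np.isclose(phi_total, 2 * np.pi * B_over_Phi0 * S_cell)
```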
### Matching equations in non-zero external magnetic field.

In order to transform the Hamiltonian from (12) into positive semidefinite form, one uses the same block operators as before, given in (2). The transformation into positive semidefinite form of \(\hat{H}\) can be achieved under the condition of considering in Eq. (2) different \(a_{k,n}\) coefficients for \(\hat{B}_{i,k,\uparrow}\) (denoted by \(a_{k,n,u}\)) and \(\hat{B}_{i,k,\downarrow}\) (denoted by \(a_{k,n,d}\neq a_{k,n,u}\)), e.g. \[\hat{B}_{i,1,\uparrow}=a_{1,1,u}\hat{c}_{i,1,\uparrow}+a_{1,2,u}\hat{c}_{i,2,\uparrow}+a_{1,5,u}\hat{c}_{i,5,\uparrow}+b_{1,1}\hat{c}_{i,1,\downarrow},\] \[\hat{B}_{i,1,\downarrow}=a_{1,1,d}\hat{c}_{i,1,\downarrow}+a_{1,2,d}\hat{c}_{i,2,\downarrow}+a_{1,5,d}\hat{c}_{i,5,\downarrow}+b^{\prime}_{1,1}\hat{c}_{i,1,\uparrow},\,etc. \tag{13}\]

The transformation into the positive semidefinite form given in (3), or in (8) (with the conditions mentioned below (8)), requires the following matching equations. The first ten matching equations of (4) for hoppings without spin-flip now become 18 equations (18, since also in Eq. (4) \(t_{c}^{\uparrow,\uparrow}\) and \(t_{c}^{\downarrow,\downarrow}\) had separate equations), as follows: \[t_{1,5}^{\uparrow,\uparrow}e^{i\phi_{1,5}}=te^{i\phi_{3}}=a_{1,1,u}^{*}a_{1,5,u},\] \[t_{1,5}^{\downarrow,\downarrow}e^{i\phi_{1,5}}=te^{i\phi_{3}}=a_{1,1,d}^{*}a_{1,5,d},\] \[t_{2,1}^{\uparrow,\uparrow}e^{i\phi_{2,1}}=te^{i\phi_{2}}=a_{1,2,u}^{*}a_{1,1,u},\] \[t_{2,1}^{\downarrow,\downarrow}e^{i\phi_{2,1}}=te^{i\phi_{2}}=a_{1,2,d}^{*}a_{1,1,d},\] \[t_{4,3}^{\uparrow,\uparrow}e^{i\phi_{4,3}}=te^{i\phi_{2}}=a_{3,4,u}^{*}a_{3,3,u},\] \[t_{4,3}^{\downarrow,\downarrow}e^{i\phi_{4,3}}=te^{i\phi_{2}}=a_{3,4,d}^{*}a_{3,3,d},\] \[t_{5,4}^{\uparrow,\uparrow}e^{i\phi_{5,4}}=te^{i\phi_{3}}=a_{3,5,u}^{*}a_{3,4,u},\] \[t_{5,4}^{\downarrow,\downarrow}e^{i\phi_{5,4}}=te^{i\phi_{3}}=a_{3,5,d}^{*}a_{3,4,d},\] \[t_{3,2}^{\uparrow,\uparrow}e^{i\phi_{3,2}}=t_{h}e^{i\phi_{1}}=a_{2,3,u}^{*}a_{2,2,u},\] \[t_{3,2}^{\downarrow,\downarrow}e^{i\phi_{3,2}}=t_{h}e^{i\phi_{1}}=a_{2,3,d}^{*}a_{2,2,d},\] \[t_{6,5}^{\uparrow,\uparrow}e^{i\phi_{6,5}}=t_{f}=a_{4,6,u}^{*}a_{4,5,u},\] \[t_{6,5}^{\downarrow,\downarrow}e^{i\phi_{6,5}}=t_{f}=a_{4,6,d}^{*}a_{4,5,d},\] \[t_{7,4}^{\uparrow,\uparrow}e^{i\phi_{7,4}}=t_{c}=a_{5,7,u}^{*}a_{5,4,u}+{b^{\prime}}_{5,7}^{*}b_{5,4}^{\prime},\] \[t_{7,4}^{\downarrow,\downarrow}e^{i\phi_{7,4}}=t_{c}=a_{5,7,d}^{*}a_{5,4,d}+b_{5,7}^{*}b_{5,4},\] \[0=t_{2,5}^{\uparrow,\uparrow}=a_{1,2,u}^{*}a_{1,5,u}+a_{2,2,u}^{*}a_{2,5,u},\] \[0=t_{2,5}^{\downarrow,\downarrow}=a_{1,2,d}^{*}a_{1,5,d}+a_{2,2,d}^{*}a_{2,5,d},\] \[0=t_{3,5}^{\uparrow,\uparrow}=a_{3,3,u}^{*}a_{3,5,u}+a_{2,3,u}^{*}a_{2,5,u},\] \[0=t_{3,5}^{\downarrow,\downarrow}=a_{3,3,d}^{*}a_{3,5,d}+a_{2,3,d}^{*}a_{2,5,d}. \tag{14}\]
The second group of ten matching equations, relating the spin-flip contributions describing the SOI interaction and substituting (5), becomes \[t_{1,5}^{\uparrow,\downarrow}e^{i\phi_{1,5}}=-\lambda e^{i\phi_{3}}={b^{\prime}}_{1,1}^{*}a_{1,5,d},\] \[t_{1,5}^{\downarrow,\uparrow}e^{i\phi_{1,5}}=\frac{t_{1,5}^{\uparrow,\downarrow}e^{i\phi_{1,5}}}{\alpha}=\lambda e^{i\phi_{3}}=b_{1,1}^{*}a_{1,5,u},\] \[t_{2,1}^{\uparrow,\downarrow}e^{i\phi_{2,1}}=\lambda e^{i\phi_{2}}=a_{1,2,u}^{*}b_{1,1},\] \[t_{2,1}^{\downarrow,\uparrow}e^{i\phi_{2,1}}=\frac{t_{2,1}^{\uparrow,\downarrow}e^{i\phi_{2,1}}}{\alpha}=-\lambda e^{i\phi_{2}}=a_{1,2,d}^{*}b^{\prime}_{1,1},\] \[t_{5,4}^{\uparrow,\downarrow}e^{i\phi_{5,4}}=-\lambda e^{i\phi_{3}}=a_{3,5,u}^{*}b_{3,4},\] \[t_{5,4}^{\downarrow,\uparrow}e^{i\phi_{5,4}}=\frac{t_{5,4}^{\uparrow,\downarrow}e^{i\phi_{5,4}}}{\alpha}=\lambda e^{i\phi_{3}}=a_{3,5,d}^{*}b^{\prime}_{3,4},\] \[t_{4,3}^{\uparrow,\downarrow}e^{i\phi_{4,3}}=\lambda e^{i\phi_{2}}={b^{\prime}}_{3,4}^{*}a_{3,3,d},\] \[t_{4,3}^{\downarrow,\uparrow}e^{i\phi_{4,3}}=\frac{t_{4,3}^{\uparrow,\downarrow}e^{i\phi_{4,3}}}{\alpha}=-\lambda e^{i\phi_{2}}=b_{3,4}^{*}a_{3,3,u},\] \[t_{7,4}^{\uparrow,\downarrow}e^{i\phi_{7,4}}=\lambda_{c}={b^{\prime}}_{5,7}^{*}a_{5,4,d}+a_{5,7,u}^{*}b_{5,4},\] \[t_{7,4}^{\downarrow,\uparrow}e^{i\phi_{7,4}}=\frac{t_{7,4}^{\uparrow,\downarrow}e^{i\phi_{7,4}}}{\alpha_{c}}=-\lambda_{c}=b_{5,7}^{*}a_{5,4,u}+a_{5,7,d}^{*}b^{\prime}_{5,4}. \tag{15}\]

Here, taking into account symmetry considerations, we denoted the spin-flip hopping strengths by \(\lambda\) and \(\lambda_{c}\) according to the relations \(t_{1,5}^{\uparrow,\downarrow}e^{i\phi_{1,5}}=-\lambda e^{i\phi_{3}}\), \(t_{5,4}^{\uparrow,\downarrow}e^{i\phi_{5,4}}=-\lambda e^{i\phi_{3}}\), \(t_{2,1}^{\uparrow,\downarrow}e^{i\phi_{2,1}}=\lambda e^{i\phi_{2}}\), \(t_{4,3}^{\uparrow,\downarrow}e^{i\phi_{4,3}}=\lambda e^{i\phi_{2}}\), \(t_{7,4}^{\uparrow,\downarrow}e^{i\phi_{7,4}}=\lambda_{c}e^{i\phi_{7,4}}=\lambda_{c}.\) Finally, the last ten matching equations, provided by the on-site one-particle potentials, are now 14 equations, as follows: \[\epsilon_{6}^{\uparrow,\uparrow}=\epsilon_{6}-h=|a_{4,6,u}|^{2},\] \[\epsilon_{6}^{\downarrow,\downarrow}=\epsilon_{6}+h=|a_{4,6,d}|^{2},\] \[\epsilon_{5}^{\uparrow,\uparrow}=\epsilon_{5}-h=|a_{4,5,u}|^{2}+|a_{1,5,u}|^{2}+|a_{2,5,u}|^{2}+|a_{3,5,u}|^{2},\] \[\epsilon_{5}^{\downarrow,\downarrow}=\epsilon_{5}+h=|a_{4,5,d}|^{2}+|a_{1,5,d}|^{2}+|a_{2,5,d}|^{2}+|a_{3,5,d}|^{2},\] \[\epsilon_{2}^{\uparrow,\uparrow}=\epsilon_{2}-h=|a_{1,2,u}|^{2}+|a_{2,2,u}|^{2},\] \[\epsilon_{2}^{\downarrow,\downarrow}=\epsilon_{2}+h=|a_{1,2,d}|^{2}+|a_{2,2,d}|^{2},\] \[\epsilon_{3}^{\uparrow,\uparrow}=\epsilon_{3}-h=|a_{2,3,u}|^{2}+|a_{3,3,u}|^{2},\] \[\epsilon_{3}^{\downarrow,\downarrow}=\epsilon_{3}+h=|a_{2,3,d}|^{2}+|a_{3,3,d}|^{2},\] \[\epsilon_{1}^{\uparrow,\uparrow}=\epsilon_{1}-h=|a_{1,1,u}|^{2}+|a_{5,7,u}|^{2}+|b_{1,1}^{\prime}|^{2}+|b_{5,7}^{\prime}|^{2},\] \[\epsilon_{1}^{\downarrow,\downarrow}=\epsilon_{1}+h=|a_{1,1,d}|^{2}+|a_{5,7,d}|^{2}+|b_{1,1}|^{2}+|b_{5,7}|^{2},\] \[\epsilon_{4}^{\uparrow,\uparrow}=\epsilon_{4}-h=|a_{3,4,u}|^{2}+|a_{5,4,u}|^{2}+|b_{3,4}^{\prime}|^{2}+|b_{5,4}^{\prime}|^{2},\] \[\epsilon_{4}^{\downarrow,\downarrow}=\epsilon_{4}+h=|a_{3,4,d}|^{2}+|a_{5,4,d}|^{2}+|b_{3,4}|^{2}+|b_{5,4}|^{2},\] \[\epsilon_{1}^{\uparrow,\downarrow}=0=a_{1,1,u}^{*}b_{1,1}+a_{5,7,u}^{*}b_{5,7}+b_{1,1}^{\prime\,*}a_{1,1,d}+b_{5,7}^{\prime\,*}a_{5,7,d},\] \[\epsilon_{4}^{\uparrow,\downarrow}=0=a_{3,4,u}^{*}b_{3,4}+a_{5,4,u}^{*}b_{5,4}+b_{3,4}^{\prime\,*}a_{3,4,d}+b_{5,4}^{\prime\,*}a_{5,4,d}, \tag{16}\]
where one has only 14 equations since, in contrast to Eqs. (6,7), the two equalities in (7) and the last four equalities from (6) (six equalities in total) now provide only six equations in (16), i.e. they are no longer duplicated. Altogether, one has in the \({\bf B}\neq 0\) case 42 coupled, non-linear, complex algebraic matching equations. I only mention that ready-made numerical software able to solve such a system of equations is not available today.

### Solution of the matching equations in the presence of the external magnetic field

The matching equations at \({\bf B}\neq 0\) are solved in Appendix C. The solution reads as follows. In the case of the first eight block operators, the block operator parameters are given by the following relations. The coefficients of the \(z=1\) block operators \(\hat{B}_{i,z=1,\sigma}\) are \[a_{1,1,u} = [\frac{t^{2}(T_{h}+\epsilon_{3}-h)}{(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}}]^{1/2}e^{i\chi_{1,u}},\qquad a_{1,1,d}=[\frac{t^{2}(T_{h}+\epsilon_{3}+h)}{(\epsilon_{2}+h)(\epsilon_{3}+h)-t_{h}^{2}}]^{1/2}e^{i\chi_{1,d}},\] \[a_{1,5,u} = te^{i\phi_{3}}[\frac{(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}}{t^{2}(T_{h}+\epsilon_{3}-h)}]^{1/2}e^{i\chi_{1,u}},\quad a_{1,5,d}=te^{i\phi_{3}}[\frac{(\epsilon_{2}+h)(\epsilon_{3}+h)-t_{h}^{2}}{t^{2}(T_{h}+\epsilon_{3}+h)}]^{1/2}e^{i\chi_{1,d}},\] \[a_{1,2,u}=te^{-i\phi_{2}}[\frac{(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}}{t^{2}(T_{h}+\epsilon_{3}-h)}]^{1/2}e^{i\chi_{1,u}},\quad a_{1,2,d}=te^{-i\phi_{2}}[\frac{(\epsilon_{2}+h)(\epsilon_{3}+h)-t_{h}^{2}}{t^{2}(T_{h}+\epsilon_{3}+h)}]^{1/2}e^{i\chi_{1,d}},\] \[b_{1,1}=\frac{\lambda}{t}\,[\frac{t^{2}(T_{h}+\epsilon_{3}-h)}{(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}}]^{1/2}e^{i\chi_{1,u}},\quad b_{1,1}^{\prime}=-\frac{\lambda}{t}\,[\frac{t^{2}(T_{h}+\epsilon_{3}+h)}{(\epsilon_{2}+h)(\epsilon_{3}+h)-t_{h}^{2}}]^{1/2}e^{i\chi_{1,d}}, \tag{17}\] where \(T_{h}=t_{h}e^{-i\phi}\geq 0\) must be satisfied. The coefficients of the \(z=2\) block operators become \[a_{2,2,u}=[\frac{(\epsilon_{3}-h)T_{h}+t_{h}^{2}}{T_{h}+\epsilon_{3}-h}]^{1/2}e^{i\chi_{2,u}},\qquad a_{2,2,d}=[\frac{(\epsilon_{3}+h)T_{h}+t_{h}^{2}}{T_{h}+\epsilon_{3}+h}]^{1/2}e^{i\chi_{2,d}},\] \[a_{2,3,u}=t_{h}e^{-i\phi_{1}}[\frac{T_{h}+\epsilon_{3}-h}{(\epsilon_{3}-h)T_{h}+t_{h}^{2}}]^{1/2}e^{i\chi_{2,u}},\quad a_{2,3,d}=t_{h}e^{-i\phi_{1}}[\frac{T_{h}+\epsilon_{3}+h}{(\epsilon_{3}+h)T_{h}+t_{h}^{2}}]^{1/2}e^{i\chi_{2,d}},\] \[a_{2,5,u}=-e^{i(\phi_{2}+\phi_{3})}\frac{[(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}]}{\sqrt{T_{h}(\epsilon_{2}-h)+t_{h}^{2}}\sqrt{\epsilon_{3}-h+T_{h}}}\,e^{i\chi_{2,u}},\] \[a_{2,5,d}=-e^{i(\phi_{2}+\phi_{3})}\frac{[(\epsilon_{2}+h)(\epsilon_{3}+h)-t_{h}^{2}]}{\sqrt{T_{h}(\epsilon_{2}+h)+t_{h}^{2}}\sqrt{\epsilon_{3}+h+T_{h}}}\,e^{i\chi_{2,d}}. \tag{18}\]
The coefficients of the \(z=3\) block operators become \[a_{3,3,u}=te^{i\phi_{2}}[\frac{T_{h}[(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}]}{t^{2}[T_{h}(\epsilon_{2}-h)+t_{h}^{2}]}]^{1/2}e^{i\chi_{3,u}},\quad a_{3,3,d}=te^{i\phi_{2}}[\frac{T_{h}[(\epsilon_{2}+h)(\epsilon_{3}+h)-t_{h}^{2}]}{t^{2}[T_{h}(\epsilon_{2}+h)+t_{h}^{2}]}]^{1/2}e^{i\chi_{3,d}},\] \[a_{3,4,u}=[\frac{t^{2}[T_{h}(\epsilon_{2}-h)+t_{h}^{2}]}{T_{h}[(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}]}]^{1/2}e^{i\chi_{3,u}},\quad a_{3,4,d}=[\frac{t^{2}[T_{h}(\epsilon_{2}+h)+t_{h}^{2}]}{T_{h}[(\epsilon_{2}+h)(\epsilon_{3}+h)-t_{h}^{2}]}]^{1/2}e^{i\chi_{3,d}},\] \[a_{3,5,u}=te^{-i\phi_{3}}[\frac{T_{h}[(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}]}{t^{2}[T_{h}(\epsilon_{2}-h)+t_{h}^{2}]}]^{1/2}e^{i\chi_{3,u}},\quad a_{3,5,d}=te^{-i\phi_{3}}[\frac{T_{h}[(\epsilon_{2}+h)(\epsilon_{3}+h)-t_{h}^{2}]}{t^{2}[T_{h}(\epsilon_{2}+h)+t_{h}^{2}]}]^{1/2}e^{i\chi_{3,d}},\] \[b_{3,4}=-\frac{\lambda}{t}[\frac{t^{2}[T_{h}(\epsilon_{2}-h)+t_{h}^{2}]}{T_{h}[(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}]}]^{1/2}e^{i\chi_{3,u}},\quad b_{3,4}^{\prime}=\frac{\lambda}{t}[\frac{t^{2}[T_{h}(\epsilon_{2}+h)+t_{h}^{2}]}{T_{h}[(\epsilon_{2}+h)(\epsilon_{3}+h)-t_{h}^{2}]}]^{1/2}e^{i\chi_{3,d}}. \tag{19}\]

The coefficients of the \(z=4\) block operators are the following ones: \[a_{4,5,u}=\frac{t_{f}}{\sqrt{\epsilon_{6}-h}}e^{i\chi_{4,u}},\quad a_{4,5,d}=\frac{t_{f}}{\sqrt{\epsilon_{6}+h}}e^{i\chi_{4,d}},\] \[a_{4,6,u}=\sqrt{\epsilon_{6}-h}e^{i\chi_{4,u}},\quad a_{4,6,d}=\sqrt{\epsilon_{6}+h}e^{i\chi_{4,d}}. \tag{20}\]

The calculation of the block operator coefficients connected to the block operators \(\hat{B}_{i,5,\sigma}\), i.e. \(z=5\), is much trickier. First we should introduce the following notations: \[I_{1,u}=\epsilon_{1}-h-|a_{1,1,u}|^{2}-|b_{1,1}^{\prime}|^{2},\quad I_{1,d}=\epsilon_{1}+h-|a_{1,1,d}|^{2}-|b_{1,1}|^{2},\] \[I_{2,u}=\epsilon_{4}-h-|a_{3,4,u}|^{2}-|b_{3,4}^{\prime}|^{2},\quad I_{2,d}=\epsilon_{4}+h-|a_{3,4,d}|^{2}-|b_{3,4}|^{2},\] \[V_{1}=-a_{1,1,u}^{*}b_{1,1}-b_{1,1}^{\prime*}a_{1,1,d},\quad V_{2}=-a_{3,4,u}^{*}b_{3,4}-b_{3,4}^{\prime*}a_{3,4,d}. \tag{21}\]

Given (17)-(20), all quantities in (21) have known, real values. Now, based on (21), one defines the expressions \[W_{1}=\frac{t_{c}-zI_{1,u}}{V_{2}-z\lambda_{c}},\quad W_{2}=\frac{zt_{c}-I_{2,u}}{z\lambda_{c}-V_{2}},\] \[W_{3}=\frac{t_{c}-yI_{1,d}}{V_{2}+y\lambda_{c}},\quad W_{4}=\frac{I_{2,d}-yt_{c}}{V_{2}+y\lambda_{c}},\quad y=\frac{z\lambda_{c}-V_{2}}{\lambda_{c}+zV_{1}}. \tag{22}\]

Based on (22), the following relations, providing the requested remaining block operator parameters, hold: \[b^{\prime}_{5,7}=W_{1}a_{5,4,d},\quad b^{\prime}_{5,4}=W_{2}a_{5,4,d},\quad b^{*}_{5,7}=W_{3}a^{*}_{5,4,u},\quad b^{*}_{5,4}=W_{4}a^{*}_{5,4,u},\] \[a^{*}_{5,7,u}=\frac{a^{*}_{5,4,u}}{z},\quad a_{5,7,d}=\frac{a_{5,4,d}}{y}. \tag{23}\]

The remaining two block operator parameters are given by \[a_{5,4,u}=[\frac{V_{2}W_{1}-yV_{1}W_{2}}{W_{4}W_{1}-W_{3}W_{2}(y/z)}]^{1/2}e^{i\chi_{5,u}},\quad a_{5,4,d}=[\frac{V_{2}W_{3}-zV_{1}W_{4}}{W_{2}W_{3}-W_{1}W_{4}(z/y)}]^{1/2}e^{i\chi_{5,d}}. \tag{24}\]

In the expressions provided in Eqs. (17)-(24), the phases \(\chi_{k,u},\chi_{k,d}\), \(k=1,2,...5\), are arbitrary, and \(z\) is an arbitrary real parameter restricted by the conditions \(|a_{5,4,u}|^{2},|a_{5,4,d}|^{2}>0\). The parameter space region where the solution is valid is fixed by the relations \(I_{\alpha,\nu}\geq 0\), where \(\alpha=1,2\), \(\nu=u,d\).
These relations provide lower limits for the on-site one-particle potentials \(\epsilon_{1}\) and \(\epsilon_{4}\). Furthermore, one has \(T_{h}\geq 0\), and two remaining matching equations from (16), namely \(\epsilon_{5}+\mu(\nu)h=|a_{4,5,\nu}|^{2}+|a_{1,5,\nu}|^{2}+|a_{2,5,\nu}|^{2}+|a_{3,5,\nu}|^{2}\), where \(\mu(u)=-1\) and \(\mu(d)=+1\), which fix the value of \(\epsilon_{5}\).

## III Exact ground states

Once the matching equations are solved, the exact transformation of the Hamiltonian into positive semidefinite form has been performed. Based on this positive semidefinite form of the Hamiltonian, one can now deduce the exact ground states. This will be done along two lines: first we deduce the exact ground state corresponding to the transformed Hamiltonian (3), and in a second step the exact ground state corresponding to the transformed Hamiltonian from (8).

### Ground states in the low density case

One concentrates now on the transformed Hamiltonian \(\hat{H}\) from (3), used with periodic boundary conditions. The procedure is based on the deduction of a local operator \(\hat{X}_{i}^{\dagger}\) that satisfies the property \[\{\hat{B}_{j,z,\sigma},\hat{X}_{i}^{\dagger}\}=0 \tag{25}\] for all \(j,i\) lattice sites, all \(z=1,2,..5\), and all \(\sigma\). The logic of this choice is that, based on (25), for \(|\Psi\rangle=\prod_{i}\hat{X}_{i}^{\dagger}|0\rangle\), where \(|0\rangle\) is the vacuum state with no electrons present, the property \[(\sum_{i,z,\sigma}\hat{B}_{i,z,\sigma}^{\dagger}\hat{B}_{i,z,\sigma})|\Psi\rangle=0 \tag{26}\] holds, since each \(\hat{B}_{i,z,\sigma}\) can be anticommuted through all the \(\hat{X}_{i}^{\dagger}\) operators until it acts on the vacuum, which it annihilates. Consequently, if \(|\Psi\rangle\) is also placed in the kernel of \(\hat{H}_{U}\) (i.e. the Hilbert subspace spanned by all \(|\chi\rangle\) for which \(\hat{H}_{U}|\chi\rangle=0\)) and in the kernel of \(\hat{H}_{V}\), one obtains the ground state. In deducing \(\hat{X}_{i}^{\dagger}\), one takes into consideration a most general domain defined on two unit cells, as presented in Fig. 2.
The operator is defined as \[\hat{X}_{i}^{\dagger} = x_{1,\uparrow}^{*}\hat{c}_{i,1,\uparrow}^{\dagger}+x_{2,\uparrow}^{*}\hat{c}_{i,2,\uparrow}^{\dagger}+x_{3,\uparrow}^{*}\hat{c}_{i,3,\uparrow}^{\dagger}+x_{4,\uparrow}^{*}\hat{c}_{i,4,\uparrow}^{\dagger}+x_{5,\uparrow}^{*}\hat{c}_{i,5,\uparrow}^{\dagger}+x_{6,\uparrow}^{*}\hat{c}_{i,6,\uparrow}^{\dagger}+x_{4^{\prime},\uparrow}^{*}\hat{c}_{i+1,1,\uparrow}^{\dagger}\] \[+ x_{2^{\prime},\uparrow}^{*}\hat{c}_{i+1,2,\uparrow}^{\dagger}+x_{3^{\prime},\uparrow}^{*}\hat{c}_{i+1,3,\uparrow}^{\dagger}+x_{5^{\prime},\uparrow}^{*}\hat{c}_{i+1,5,\uparrow}^{\dagger}+x_{6^{\prime},\uparrow}^{*}\hat{c}_{i+1,6,\uparrow}^{\dagger}+x_{7^{\prime},\uparrow}^{*}\hat{c}_{i+1,4,\uparrow}^{\dagger}+y_{1,\downarrow}^{*}\hat{c}_{i,1,\downarrow}^{\dagger}\] \[+ y_{2,\downarrow}^{*}\hat{c}_{i,2,\downarrow}^{\dagger}+y_{3,\downarrow}^{*}\hat{c}_{i,3,\downarrow}^{\dagger}+y_{4,\downarrow}^{*}\hat{c}_{i,4,\downarrow}^{\dagger}+y_{5,\downarrow}^{*}\hat{c}_{i,5,\downarrow}^{\dagger}+y_{6,\downarrow}^{*}\hat{c}_{i,6,\downarrow}^{\dagger}+y_{4^{\prime},\downarrow}^{*}\hat{c}_{i+1,1,\downarrow}^{\dagger}+y_{2^{\prime},\downarrow}^{*}\hat{c}_{i+1,2,\downarrow}^{\dagger}\] \[+ y_{3^{\prime},\downarrow}^{*}\hat{c}_{i+1,3,\downarrow}^{\dagger}+y_{5^{\prime},\downarrow}^{*}\hat{c}_{i+1,5,\downarrow}^{\dagger}+y_{6^{\prime},\downarrow}^{*}\hat{c}_{i+1,6,\downarrow}^{\dagger}+y_{7^{\prime},\downarrow}^{*}\hat{c}_{i+1,4,\downarrow}^{\dagger}. \tag{27}\]

Now the anticommutators (25) are presented, calculated in order with \(\hat{B}_{i,1,\uparrow},\hat{B}_{i,1,\downarrow},\hat{B}_{i,2,\uparrow},\hat{B}_{i,2,\downarrow},...,\hat{B}_{i,5,\uparrow},\hat{B}_{i,5,\downarrow},\hat{B}_{i+1,1,\uparrow},\hat{B}_{i+1,1,\downarrow},\hat{B}_{i+1,2,\uparrow},\hat{B}_{i+1,2,\downarrow},...,\hat{B}_{i+1,5,\uparrow},\hat{B}_{i+1,5,\downarrow},\) \(\hat{B}_{i-1,5,\uparrow},\hat{B}_{i-1,5,\downarrow}\), in total 22 equations: \[x_{2,\uparrow}^{*}a_{1,2,u}+x_{5,\uparrow}^{*}a_{1,5,u}+x_{1,\uparrow}^{*}a_{1,1,u}+y_{1,\downarrow}^{*}b_{1,1}=0,\] \[y_{1,\downarrow}^{*}a_{1,1,d}+y_{5,\downarrow}^{*}a_{1,5,d}+y_{2,\downarrow}^{*}a_{1,2,d}+x_{1,\uparrow}^{*}b_{1,1}^{\prime}=0,\] \[x_{2,\uparrow}^{*}a_{2,2,u}+x_{3,\uparrow}^{*}a_{2,3,u}+x_{5,\uparrow}^{*}a_{2,5,u}=0,\] \[y_{3,\downarrow}^{*}a_{2,3,d}+y_{5,\downarrow}^{*}a_{2,5,d}+y_{2,\downarrow}^{*}a_{2,2,d}=0,\] \[x_{3,\uparrow}^{*}a_{3,3,u}+x_{4,\uparrow}^{*}a_{3,4,u}+x_{5,\uparrow}^{*}a_{3,5,u}+y_{4,\downarrow}^{*}b_{3,4}=0,\] \[y_{3,\downarrow}^{*}a_{3,3,d}+y_{4,\downarrow}^{*}a_{3,4,d}+y_{5,\downarrow}^{*}a_{3,5,d}+x_{4,\uparrow}^{*}b_{3,4}^{\prime}=0,\] \[x_{5,\uparrow}^{*}a_{4,5,u}+x_{6,\uparrow}^{*}a_{4,6,u}=0,\] \[y_{5,\downarrow}^{*}a_{4,5,d}+y_{6,\downarrow}^{*}a_{4,6,d}=0,\] \[x_{4,\uparrow}^{*}a_{5,4,u}+x_{4^{\prime},\uparrow}^{*}a_{5,7,u}+y_{4,\downarrow}^{*}b_{5,4}+y_{4^{\prime},\downarrow}^{*}b_{5,7}=0,\] \[x_{4,\uparrow}^{*}b_{5,4}^{\prime}+x_{4^{\prime},\uparrow}^{*}b_{5,7}^{\prime}+y_{4,\downarrow}^{*}a_{5,4,d}+y_{4^{\prime},\downarrow}^{*}a_{5,7,d}=0,\] \[x_{4^{\prime},\uparrow}^{*}a_{1,1,u}+x_{2^{\prime},\uparrow}^{*}a_{1,2,u}+x_{5^{\prime},\uparrow}^{*}a_{1,5,u}+y_{4^{\prime},\downarrow}^{*}b_{1,1}=0,\] \[x_{4^{\prime},\uparrow}^{*}b_{1,1}^{\prime}+y_{4^{\prime},\downarrow}^{*}a_{1,1,d}+y_{2^{\prime},\downarrow}^{*}a_{1,2,d}+y_{5^{\prime},\downarrow}^{*}a_{1,5,d}=0,\] \[x_{2^{\prime},\uparrow}^{*}a_{2,2,u}+x_{3^{\prime},\uparrow}^{*}a_{2,3,u}+x_{5^{\prime},\uparrow}^{*}a_{2,5,u}=0,\] \[y_{2^{\prime},\downarrow}^{*}a_{2,2,d}+y_{3^{\prime},\downarrow}^{*}a_{2,3,d}+y_{5^{\prime},\downarrow}^{*}a_{2,5,d}=0,\]
\[x_{3^{\prime},\uparrow}^{*}a_{3,3,u}+x_{5^{\prime},\uparrow}^{*}a_{3,5,u}+x_{7^{\prime},\uparrow}^{*}a_{3,4,u}+y_{7^{\prime},\downarrow}^{*}b_{3,4}=0,\] \[y_{3^{\prime},\downarrow}^{*}a_{3,3,d}+y_{5^{\prime},\downarrow}^{*}a_{3,5,d}+x_{7^{\prime},\uparrow}^{*}b_{3,4}^{\prime}+y_{7^{\prime},\downarrow}^{*}a_{3,4,d}=0,\] \[x_{5^{\prime},\uparrow}^{*}a_{4,5,u}+x_{6^{\prime},\uparrow}^{*}a_{4,6,u}=0,\] \[y_{5^{\prime},\downarrow}^{*}a_{4,5,d}+y_{6^{\prime},\downarrow}^{*}a_{4,6,d}=0,\] \[x_{7^{\prime},\uparrow}^{*}a_{5,4,u}+y_{7^{\prime},\downarrow}^{*}b_{5,4}=0,\] \[x_{7^{\prime},\uparrow}^{*}b_{5,4}^{\prime}+y_{7^{\prime},\downarrow}^{*}a_{5,4,d}=0,\] \[x_{1,\uparrow}^{*}a_{5,7,u}+y_{1,\downarrow}^{*}b_{5,7}=0,\] \[x_{1,\uparrow}^{*}b_{5,7}^{\prime}+y_{1,\downarrow}^{*}a_{5,7,d}=0. \tag{28}\]

In the \({\bf B}=0\) case, where \(a_{i,j,u}=a_{i,j,d}=a_{i,j}\) and \(h=\phi=0\) hold, because of \(b_{5,4}^{\prime}=-b_{5,4}\) and \(b_{5,7}^{\prime}=-b_{5,7}\) (see (9)), the last four equations of (28) give \(x_{1,\uparrow}=y_{1,\downarrow}=x_{7^{\prime},\uparrow}=y_{7^{\prime},\downarrow}=0\), which is not automatically satisfied at \({\bf B}\neq 0\), i.e. when \(a_{i,j,u}\neq a_{i,j,d}\). That is why (28) must be solved separately in the presence and in the absence of the external magnetic field.

#### Solution for \(\hat{X}_{i}^{\dagger}\) at \({\bf B}=0\)

The solutions of (28), deduced in Appendix D, can be given as follows: \[x_{6,\uparrow}^{*}=\frac{t_{f}}{\epsilon_{4}}x_{2,\uparrow}^{*},\ x_{6^{\prime},\uparrow}^{*}=\frac{t_{f}}{\epsilon_{4}}x_{3^{\prime},\uparrow}^{*},\ x_{5,\uparrow}^{*}=-x_{2,\uparrow}^{*},\ x_{5^{\prime},\uparrow}^{*}=-x_{3^{\prime},\uparrow}^{*},\ x_{3,\uparrow}^{*}=\frac{-\epsilon_{2}}{t_{h}}x_{2,\uparrow}^{*},\ x_{1,\uparrow}^{*}=0,\] \[x_{2^{\prime},\uparrow}^{*}=\frac{-\epsilon_{2}}{t_{h}}x_{3^{\prime},\uparrow}^{*},\ x_{4,\uparrow}^{*}=\frac{\epsilon_{2}^{2}-t_{h}^{2}}{t_{h}(t^{2}+\lambda^{2})}(tx_{2,\uparrow}^{*}-\lambda y_{2,\downarrow}^{*}),\ x_{4^{\prime},\uparrow}^{*}=\frac{\epsilon_{2}^{2}-t_{h}^{2}}{t_{h}(t^{2}+\lambda^{2})}(tx_{3^{\prime},\uparrow}^{*}-\lambda y_{3^{\prime},\downarrow}^{*}),\] \[x_{7^{\prime},\uparrow}^{*}=0,\ y_{1,\downarrow}^{*}=0,\ y_{7^{\prime},\downarrow}^{*}=0,\ y_{6,\downarrow}^{*}=\frac{t_{f}}{\epsilon_{4}}y_{2,\downarrow}^{*},\ y_{6^{\prime},\downarrow}^{*}=\frac{t_{f}}{\epsilon_{4}}y_{3^{\prime},\downarrow}^{*},\ y_{5,\downarrow}^{*}=-y_{2,\downarrow}^{*},\ y_{5^{\prime},\downarrow}^{*}=-y_{3^{\prime},\downarrow}^{*},\] \[y_{3,\downarrow}^{*}=-\frac{\epsilon_{2}}{t_{h}}y_{2,\downarrow}^{*},\ y_{2^{\prime},\downarrow}^{*}=-\frac{\epsilon_{2}}{t_{h}}y_{3^{\prime},\downarrow}^{*},\ y_{4,\downarrow}^{*}=\frac{\epsilon_{2}^{2}-t_{h}^{2}}{t_{h}(t^{2}+\lambda^{2})}(\lambda x_{2,\uparrow}^{*}+ty_{2,\downarrow}^{*}),\] \[y_{4^{\prime},\downarrow}^{*}=\frac{\epsilon_{2}^{2}-t_{h}^{2}}{t_{h}(t^{2}+\lambda^{2})}(\lambda x_{3^{\prime},\uparrow}^{*}+ty_{3^{\prime},\downarrow}^{*}), \tag{29}\] where the notation \(t^{\uparrow,\downarrow}=\lambda\) has been introduced, \(\lambda\) representing the SOI coupling constant. From (29) it is seen that all coefficients of \(\hat{X}_{i}^{\dagger}\) are expressed in terms of \(x_{2,\uparrow}^{*},x_{3^{\prime},\uparrow}^{*},y_{2,\downarrow}^{*},y_{3^{\prime},\downarrow}^{*}\).
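Equivalently, (28) is a homogeneous linear system for the coefficients of \(\hat{X}_{i}^{\dagger}\) and can be attacked numerically as a null-space problem. In the following minimal sketch, the matrix would in practice be filled with the block operator parameters; a random placeholder of the right shape only illustrates the call and the generic kernel dimension, which matches the number of free parameters found below:

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Columns spanning the kernel of M, obtained from an SVD."""
    _, s, vh = np.linalg.svd(M)
    rank = np.sum(s > tol)
    return vh[rank:].conj().T

# (28) gives 22 equations for the 24 unknown coefficients of X_i^dagger;
# a generic 22 x 24 matrix has a 2-dimensional kernel.
M = np.random.default_rng(1).normal(size=(22, 24))
print(null_space(M).shape)   # -> (24, 2)
```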
These can be deduced based on the following relations: \[x_{2,\uparrow}^{*}=\frac{-(\alpha\beta+\gamma\delta)x_{3^{\prime},\uparrow}^{*}+(\gamma\beta-\alpha\delta)y_{3^{\prime},\downarrow}^{*}}{\alpha^{2}+\gamma^{2}},\ y_{2,\downarrow}^{*}=\frac{-(\gamma\beta-\alpha\delta)x_{3^{\prime},\uparrow}^{*}-(\alpha\beta+\gamma\delta)y_{3^{\prime},\downarrow}^{*}}{\alpha^{2}+\gamma^{2}}, \tag{30}\] where \(\alpha=ta_{5,4}+\lambda b_{5,4},\ \beta=ta_{5,7}+\lambda b_{5,7},\ \gamma=tb_{5,4}-\lambda a_{5,4},\ \delta=tb_{5,7}-\lambda a_{5,7},\) and \(x_{3^{\prime},\uparrow}^{*}=p\), \(y_{3^{\prime},\downarrow}^{*}=q\) are arbitrary parameters.

The physical meaning of these two free parameters \(p,q\) can be given as follows. Given two sets of values for these parameters, \(p_{1},q_{1}\) and \(p_{2},q_{2}\), one can construct two linearly independent solutions \(\hat{X}_{i,1}^{\dagger}\) and \(\hat{X}_{i,2}^{\dagger}\), which provide two linearly independent Hilbert space vectors \(|\Psi_{1}^{0}\rangle=\hat{X}_{i,1}^{\dagger}|0\rangle\) and \(|\Psi_{2}^{0}\rangle=\hat{X}_{i,2}^{\dagger}|0\rangle\), where \(|0\rangle\) is the vacuum state with no fermions present. These can be orthonormalized based on the Gram-Schmidt procedure \[|\Psi_{1}\rangle=\frac{|\Psi_{1}^{0}\rangle}{\sqrt{\langle\Psi_{1}^{0}|\Psi_{1}^{0}\rangle}},\quad|v\rangle=\langle\Psi_{1}|\Psi_{2}^{0}\rangle\,|\Psi_{1}\rangle,\] \[|\Psi_{2}\rangle=\frac{|\Psi_{2}^{0}\rangle-|v\rangle}{\sqrt{(\langle\Psi_{2}^{0}|-\langle v|)(|\Psi_{2}^{0}\rangle-|v\rangle)}}, \tag{31}\] obtaining \(\langle\Psi_{1}|\Psi_{2}\rangle=0\) together with \(\langle\Psi_{1}|\Psi_{1}\rangle=\langle\Psi_{2}|\Psi_{2}\rangle=1\). Note that two free parameters are needed for this, since a single parameter would mathematically only fix the normalization.

#### Solution for \(\hat{X}_{i}^{\dagger}\) at \({\bf B}\neq 0\)

The solutions presented here, deduced in Appendix E, hold when \[a_{5,7,d}a_{5,7,u}\neq b_{5,7}b_{5,7}^{\prime},\quad a_{5,4,d}a_{5,4,u}\neq b_{5,4}b_{5,4}^{\prime}, \tag{32}\] in which case the last four equations of (28) again give \(x_{1,\uparrow}=x_{7^{\prime},\uparrow}=y_{1,\downarrow}=y_{7^{\prime},\downarrow}=0\).
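Returning for a moment to the orthonormalization step (31), a minimal numerical sketch of that construction (with generic complex arrays standing in for the two ground-state vectors) reads:

```python
import numpy as np

rng = np.random.default_rng(0)
psi1_0 = rng.normal(size=4) + 1j * rng.normal(size=4)   # stands for |Psi_1^0>
psi2_0 = rng.normal(size=4) + 1j * rng.normal(size=4)   # stands for |Psi_2^0>

psi1 = psi1_0 / np.linalg.norm(psi1_0)                  # normalize |Psi_1^0>
v = np.vdot(psi1, psi2_0) * psi1                        # component along |Psi_1>
psi2 = (psi2_0 - v) / np.linalg.norm(psi2_0 - v)

assert np.isclose(np.vdot(psi1, psi2), 0)               # orthogonality
assert np.isclose(np.linalg.norm(psi2), 1)              # normalization
```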
The deduced coefficients of the operator \(\hat{X}_{i}^{\dagger}\) are as follows. The first 16 coefficients are given by \[x_{6,\uparrow}^{*}=\frac{t_{f}}{\epsilon_{6}-h}e^{-i(\phi_{2}+\phi_{3})}x_{2,\uparrow}^{*},\;y_{6,\downarrow}^{*}=\frac{t_{f}}{\epsilon_{6}+h}e^{-i(\phi_{2}+\phi_{3})}y_{2,\downarrow}^{*},\;x_{6^{\prime},\uparrow}^{*}=\frac{t_{f}}{\epsilon_{6}-h}e^{i(\phi_{2}+\phi_{3})}x_{3^{\prime},\uparrow}^{*},\] \[y_{6^{\prime},\downarrow}^{*}=\frac{t_{f}}{\epsilon_{6}+h}e^{i(\phi_{2}+\phi_{3})}y_{3^{\prime},\downarrow}^{*},\;x_{5,\uparrow}^{*}=-e^{-i(\phi_{2}+\phi_{3})}x_{2,\uparrow}^{*},\;y_{5,\downarrow}^{*}=-e^{-i(\phi_{2}+\phi_{3})}y_{2,\downarrow}^{*},\;x_{5^{\prime},\uparrow}^{*}=-e^{i(\phi_{2}+\phi_{3})}x_{3^{\prime},\uparrow}^{*},\] \[y_{5^{\prime},\downarrow}^{*}=-e^{i(\phi_{2}+\phi_{3})}y_{3^{\prime},\downarrow}^{*},\;x_{3,\uparrow}^{*}=-e^{-i(2\phi_{2}+2\phi_{3})}\frac{\epsilon_{2}-h}{T_{h}}x_{2,\uparrow}^{*},\;y_{3,\downarrow}^{*}=-e^{-i(2\phi_{2}+2\phi_{3})}\frac{\epsilon_{2}+h}{T_{h}}y_{2,\downarrow}^{*},\] \[x_{2^{\prime},\uparrow}^{*}=-e^{i(2\phi_{2}+2\phi_{3})}\frac{(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}+T_{h}(T_{h}+\epsilon_{3}-h)}{T_{h}(\epsilon_{3}-h)+t_{h}^{2}}x_{3^{\prime},\uparrow}^{*},\] \[y_{2^{\prime},\downarrow}^{*}=-e^{i(2\phi_{2}+2\phi_{3})}\frac{(\epsilon_{2}+h)(\epsilon_{3}+h)-t_{h}^{2}+T_{h}(T_{h}+\epsilon_{3}+h)}{T_{h}(\epsilon_{3}+h)+t_{h}^{2}}y_{3^{\prime},\downarrow}^{*},\] \[y_{4,\downarrow}^{*}=ax_{2,\uparrow}^{*}+by_{2,\downarrow}^{*},\;x_{4,\uparrow}^{*}=cx_{2,\uparrow}^{*}+dy_{2,\downarrow}^{*},\;y_{4^{\prime},\downarrow}^{*}=a^{\prime}x_{3^{\prime},\uparrow}^{*}+b^{\prime}y_{3^{\prime},\downarrow}^{*},\;x_{4^{\prime},\uparrow}^{*}=c^{\prime}x_{3^{\prime},\uparrow}^{*}+d^{\prime}y_{3^{\prime},\downarrow}^{*}, \tag{33}\] where one has for the presented prefactors \[a=-\lambda e^{-i(\phi_{2}+2\phi_{3})}J_{-},\;b=te^{-i(\phi_{2}+2\phi_{3})}J_{+},\;c=te^{-i(\phi_{2}+2\phi_{3})}J_{-},\;d=\lambda e^{-i(\phi_{2}+2\phi_{3})}J_{+},\] \[a^{\prime}=\lambda e^{i(\phi_{2}+2\phi_{3})}J_{-},\;b^{\prime}=te^{i(\phi_{2}+2\phi_{3})}J_{+},\;c^{\prime}=te^{i(\phi_{2}+2\phi_{3})}J_{-},\;d^{\prime}=-\lambda e^{i(\phi_{2}+2\phi_{3})}J_{+},\] \[J_{+}=\frac{(T_{h}+\epsilon_{2}+h)[(\epsilon_{2}+h)^{2}-t_{h}^{2}]}{(\lambda^{2}+t^{2})[T_{h}(\epsilon_{2}+h)+t_{h}^{2}]},\;J_{-}=\frac{(T_{h}+\epsilon_{2}-h)[(\epsilon_{2}-h)^{2}-t_{h}^{2}]}{(\lambda^{2}+t^{2})[T_{h}(\epsilon_{2}-h)+t_{h}^{2}]}. \tag{34}\]

One notes that in (34) the equality \(\epsilon_{2}=\epsilon_{3}\) has been used, in accordance with the symmetry properties of the system. All presented coefficients are expressed in terms of the parameters \(x_{2,\uparrow}^{*},x_{3^{\prime},\uparrow}^{*},y_{2,\downarrow}^{*},y_{3^{\prime},\downarrow}^{*}\).
These are determined via \[x_{2,\uparrow}^{*}=\frac{Y^{\prime}\beta^{\prime}-X^{\prime}\delta^{\prime}}{\gamma^{\prime}\beta^{\prime}-\alpha^{\prime}\delta^{\prime}},\;y_{2,\downarrow}^{*}=\frac{X^{\prime}\gamma^{\prime}-Y^{\prime}\alpha^{\prime}}{\gamma^{\prime}\beta^{\prime}-\alpha^{\prime}\delta^{\prime}},\;x_{3^{\prime},\uparrow}^{*}=p^{\prime},\;y_{3^{\prime},\downarrow}^{*}=q^{\prime},\] \[\alpha^{\prime}=ca_{5,4,u}+ab_{5,4},\;\beta^{\prime}=da_{5,4,u}+bb_{5,4},\;\gamma^{\prime}=cb_{5,4}^{\prime}+aa_{5,4,d},\;\delta^{\prime}=db_{5,4}^{\prime}+ba_{5,4,d},\] \[X^{\prime}=-p^{\prime}(a_{5,7,u}c^{\prime}+b_{5,7}a^{\prime})-q^{\prime}(a_{5,7,u}d^{\prime}+b_{5,7}b^{\prime}),\] \[Y^{\prime}=-p^{\prime}(a_{5,7,d}a^{\prime}+b_{5,7}^{\prime}c^{\prime})-q^{\prime}(a_{5,7,d}b^{\prime}+b_{5,7}^{\prime}d^{\prime}), \tag{35}\] where \(p^{\prime},q^{\prime}\) are arbitrary parameters. I note that the denominator present on the first line of (35) is nonzero, since \(\gamma^{\prime}\beta^{\prime}-\alpha^{\prime}\delta^{\prime}=e^{-2i(\phi_{2}+2\phi_{3})}J_{+}J_{-}(\lambda^{2}+t^{2})(b_{5,4}b_{5,4}^{\prime}-a_{5,4,d}a_{5,4,u})\neq 0\), see Eq. (32). Concerning \(p^{\prime},q^{\prime}\), see the observation below (30) on the \(p,q\) parameters.

#### Other specific solutions for \(\hat{X}_{i}^{\dagger}\)

In Appendix F we present two more specific solutions for \(\hat{X}_{i}^{\dagger}\): one appearing i) when the requirement (32) is not satisfied, and one ii) when the domain on which \(\hat{X}_{i}^{\dagger}\) is defined differs from the domain presented in Fig. 2. These solutions require supplementary interdependences between the parameters of the Hamiltonian (e.g. (32) taken as an equality). Such interdependences do not substantially diminish the emergence possibilities of the phases described by the \(\hat{X}_{i}^{\dagger}\) operators because, e.g., the SOI coupling constants \(\lambda,\lambda_{c}\) can be continuously tuned by external electric fields [3].

#### The deduced exact ground states

Once the \(\hat{X}_{i}^{\dagger}\) operators have been obtained, we now deduce the exact ground states for the Hamiltonian presented in (3). For this reason let us consider the wave vector \[|\Phi_{1}\rangle=\prod_{i}^{\prime}\hat{X}_{i}^{\dagger}|0\rangle=\ldots\hat{X}_{i}^{\dagger}\hat{X}_{i+2}^{\dagger}\hat{X}_{i+4}^{\dagger}\ldots|0\rangle, \tag{36}\] where \(\prod_{i}^{\prime}\) represents a product over every second lattice site, and \(\hat{X}_{i}^{\dagger}\) is taken from (29,30), deduced at \({\bf B}=0\). Note that \(\hat{X}_{i}^{\dagger}\) places one electron on the domain of two nearest-neighbor pentagons, excepting their left and right end sites (sites 1 and 7' in Fig. 2). Inside the block covered by \(\hat{X}_{i}^{\dagger}\) (sites 2,3,4,5,6,4',2',3',5',6' in Fig. 2), two electrons are never present (i.e. double occupancy is absent), hence \(\hat{H}_{U}|\Phi_{1}\rangle=0\) holds. Furthermore, inside this block no two nearest-neighbor sites are both at least once occupied, so \(\hat{H}_{V}|\Phi_{1}\rangle=0\) also holds. Between two nearest-neighbor \(\hat{X}_{i}^{\dagger}\) and \(\hat{X}_{i+2}^{\dagger}\) operators two empty sites are present (the horizontal bonds starting to the right from site 7' and to the left from site 1 in Fig. 2), so on these sites \(\hat{H}_{U}|\Phi_{1}\rangle=0\) and \(\hat{H}_{V}|\Phi_{1}\rangle=0\) are also satisfied; consequently, \[(\hat{H}_{U}+\hat{H}_{V})|\Phi_{1}\rangle=0 \tag{37}\] is satisfied.
But \(\hat{X}_{i}^{\dagger}\) was deduced from the condition (26), so a relation of the (37) type is satisfied also for the first term of the Hamiltonian (3); as a consequence, \(\hat{H}|\Phi_{1}\rangle=0\) holds. But since \(\hat{H}\) is built up only from positive semidefinite terms, zero is the smallest possible eigenvalue of \(\hat{H}\). As a consequence, \(|\Phi_{1}\rangle=|\Phi_{g,1}\rangle\) is the ground state of \(\hat{H}\) at \(N=N_{c}/2\) particles in the system. The system has in total \(6N_{c}\) sites, so the maximum number of electrons in the system is \(N_{max}=12N_{c}\); consequently, \(N_{c}/2\) carriers represent a particle concentration \(N/N_{max}=1/24\) (quarter-filled lowest band). This concentration lies below system half filling (\(6N_{c}\) electrons), which is why this case is called the "low density" ground state. Physically this state represents a charge density wave, since on every second horizontal bond (e.g. the bond from 7' to \({\bf i}+2{\bf a}\) in Fig. 2) carriers are missing. This state is insulating. For the solution to occur, the two-particle interaction terms only need \(U_{n}>0\), \(V>0\), but are otherwise arbitrary.

One more problem must be treated, namely the interplay between the \(|\Phi_{1}\rangle\) mentioned above and the \(|\Psi_{\alpha}\rangle\), \(\alpha=1,2\), presented in (31). To understand this, we must turn back to (9), which provides the solution of the matching equations used here, together with the conditions (10), the latter fixing the parameter space region where the solution is valid. Checking the conditions (10) for their meaning in the band structure [see Ref. [3]], one sees that they provide a lowest flat band which is doubly degenerate. Consequently, in \(|\Phi_{g,1}\rangle\) one can use at each site \(i\) either \(\hat{X}_{i,1}^{\dagger}\) or \(\hat{X}_{i,2}^{\dagger}\), but not both, maintaining the quarter filling of the lowest band. I must also underline that a ground state solution of the type (36) holds as well below the system filling \(c_{0}=N/N_{max}=1/24\). In this case \[|\Phi_{g,1}(c<c_{0})\rangle=\prod_{i\in D}\hat{X}_{i}^{\dagger}|0\rangle, \tag{38}\] where \(D\) is a manifold of lattice sites containing \(N<c_{0}N_{max}\) sites, \(D=i_{1},i_{2},...i_{N}\), such that the sites contained in the domain \(D\) are at least at a distance \(2{\bf a}\) from each other. In this case the charge density wave nature of the ground state is lost, and the ground state is built up from isolated clusters. The insulating nature of the system is maintained.

Now let us consider the \({\bf B}\neq 0\) case. In this situation one works with the \(\hat{X}_{i}^{\dagger}\) deduced in (33-35), and we consider here the \(\hat{H}_{Z}=0\) case only, so only the effect of the external magnetic field on the orbital motion of the electrons is considered, described by the Peierls phase factors. In this case, at \(c_{0}=1/24\) filling the ground state expression remains that given in (36), and at \(c<c_{0}\) filling, that given in (38). However, the solution of the matching equations is satisfied only for fixed \(\phi\) values (see Ref. [3]). Hence, when the magnetic field is switched on, the charge density wave phase disappears, and it reappears at a fixed external magnetic field value. This is an interesting example of how a static magnetic field can influence a static electric charge distribution.
Furthermore, if (36) and (38) are not allowed solutions at \(\phi\neq 0\), the lowest band will be dispersive, and the system becomes conducting. This provides a nice example demonstrating that a static magnetic field is able to turn an insulating phase into a conducting phase. The presented examples may have broad application possibilities in leading technologies. When the conditions presented in Appendix F.1 are satisfied, the expressions of the ground state wave vectors from (36) and (38) remain valid only at \(V=0\), because in this case two singly occupied nearest-neighbor sites can be present along the horizontal bonds connecting the unit cells. Since in this case the sites 1 and 7' in Fig. 2 can also be occupied, the charge density wave nature of the ground state is lost as well. When the conditions presented in Appendix F.2 hold, the \(\hat{X}_{i}^{\dagger}\) operator acts only on the four middle sites of each unit cell (2,3,5,6 in Fig. 2). Now the ground state expressions (36) and (38) remain valid also at \(V\neq 0\), and could contain even \(N_{c}\) operators, so in this case the ground state can be written up to a carrier concentration \(N/N_{max}=1/12\). Each unit cell will be ferromagnetic, but the cells remain uncorrelated, so the ground state is paramagnetic. A possible exception from this rule is provided by the situation when the lowest two bands touch each other: this situation provides a ferromagnetic state based on a mechanism similar to that described in Ref. [16].

### Ground states in the high density case.

The Hamiltonian analyzed now is that presented in (8), used with periodic boundary conditions. The ground state construction begins with \[|\Phi_{1}^{\prime}\rangle=\prod_{i=1}^{N_{c}}\prod_{z=1}^{5}\prod_{\sigma=\uparrow,\downarrow}\hat{B}_{i,z,\sigma}^{\dagger}|0\rangle. \tag{39}\]

This strategy is based on the fact that \(\hat{B}_{i,z,\sigma}^{\dagger}\hat{B}_{i,z,\sigma}^{\dagger}=0\) (each \(\hat{B}_{i,z,\sigma}^{\dagger}\) is linear in creation operators, and \((\sum_{m}v_{m}\hat{c}_{m}^{\dagger})^{2}=\frac{1}{2}\sum_{m,n}v_{m}v_{n}\{\hat{c}_{m}^{\dagger},\hat{c}_{n}^{\dagger}\}=0\)), hence the first term from (8) satisfies \[(\sum_{i,z,\sigma}\hat{B}_{i,z,\sigma}\hat{B}_{i,z,\sigma}^{\dagger})|\Phi_{1}^{\prime}\rangle=0. \tag{40}\]

The problem to be solved next is to introduce \(|\Phi_{1}^{\prime}\rangle\) into the kernel of the remaining positive semidefinite operators from the Hamiltonian, i.e. \(\sum_{i}\sum_{n=1,2,...,6}U_{n}\hat{P}_{i,n}\) and \(\sum_{<l,k>}V\hat{R}_{<l,k>}\). The first of these two requires at least one electron on each site. The study of (39) shows that \(|\Phi_{1}^{\prime}\rangle\) already introduces \(10N_{c}\) electrons in the system, and with periodic boundary conditions this can leave one empty site per unit cell. So, in order not to have empty sites in the system, we must multiply the operators present in \(|\Phi_{1}^{\prime}\rangle\) by a term \[\hat{B}^{\dagger}=\prod_{i}c^{\dagger}_{i,n(i),\sigma(i)}(1-\hat{n}_{i,n(i),-\sigma(i)}), \tag{41}\] which introduces in each unit cell one electron with an arbitrary spin \(\sigma(i)\) on an arbitrary empty site \(n(i)\). The \(\hat{B}^{\dagger}\) operator from (41) presents another novelty in comparison to its previous applications (see e.g. Refs. [8; 9; 10]). Namely, together with the operators present in (39), it retains only configurations in which every site contains at least one electron and, in addition, two singly occupied sites are never in nearest-neighbor positions.
Hence, the wave vector \[|\Phi_{1}\rangle=(\prod_{i=1}^{N_{c}}\prod_{z=1}^{5}\prod_{\sigma=\uparrow,\downarrow}\hat{B}^{\dagger}_{i,z,\sigma})\hat{B}^{\dagger}|0\rangle \tag{42}\] has at least one electron on each site; consequently, \[(\sum_{i}\sum_{n=1,2,...,6}U_{n}\hat{P}_{i,n})|\Phi_{1}\rangle=0 \tag{43}\] holds. But besides this, one also has \[(\sum_{<l,k>}V\hat{R}_{<l,k>})|\Phi_{1}\rangle=0, \tag{44}\] because \(|\Phi_{1}\rangle\) not only contains sites occupied at least once, but in addition contains no singly occupied sites in nearest-neighbor positions. Since \(|\Phi_{1}\rangle\) also satisfies (40), one has \((\hat{H}-C)|\Phi_{1}\rangle=0\); hence \(|\Phi_{1}\rangle\) is the ground state of the system, i.e. \(|\Phi_{g,1}\rangle=|\Phi_{1}\rangle\), and the ground state energy is \(E_{g}=C\). Since now \(N/N_{max}>5/6\) lies above system half filling (\(N/N_{max}=1/2\)), the ground state presented here is called the high density ground state. The \(\hat{B}_{i,z,\sigma}\) operators present in (42) are those defined in (2); the matching equations remain, at \({\bf B}=0\), the relations (4)-(7) and, at \({\bf B}\neq 0\), the equations (13)-(16), but with the substitutions presented in (A9). Consequently, the matching equations now contain all coupling constants, including those present in the SOI \((\lambda,\lambda_{c})\), in \(\hat{H}_{U}\), and in \(\hat{H}_{V}\). At \({\bf B}=0\), using the block operator coefficients of \(\hat{B}_{i,z,\sigma}\) deduced from (4)-(7) under the conditions (10), one finds again an effective flat band, created not by the bare band structure but by the \(U,V\) interaction terms. The ground state \(|\Phi_{g,1}\rangle\) presented in (42) is nonmagnetic and corresponds to \(c_{1}=N/N_{max}=11/12\) filling (upper band half filled). Turning on the external magnetic field, the mathematical expression of the deduced ground state remains the expression (42), but now the block operator parameters must be taken from the solution of the matching equations presented in (17)-(24). The ground state can be obtained also in the region \(c_{1}<c\leq 1\). For this, in the case of the carrier concentration \(c=(N_{1}+N^{\prime})/N_{max}\), where \(N_{1}=c_{1}N_{max}\) and \(1\leq N^{\prime}\leq N_{c}\), a supplementary operator \(\hat{B}_{c}^{\dagger}=\prod_{j=1}^{N^{\prime}}\hat{c}_{k_{j},\sigma_{j}}^{\dagger}\) must be added to the operators present in (42), where \(k_{j},\sigma_{j}\) are arbitrary momenta and spin projections. The ground state becomes in this case of the form \[|\Phi_{g,1}(c>c_{1})\rangle=(\prod_{i=1}^{N_{c}}\prod_{z=1}^{5}\prod_{\sigma=\uparrow,\downarrow}\hat{B}_{i,z,\sigma}^{\dagger})\hat{B}^{\dagger}\hat{B}_{c}^{\dagger}|0\rangle, \tag{45}\] which is a nonmagnetic and itinerant ground state.

## IV Summary and Conclusions

The paper analyzes a pentagon-chain type of conducting polymer in the presence of spin-orbit interactions (SOI; \(\lambda,\lambda_{c}\)), on-site (\(U\)) and nearest-neighbor (\(V\)) Coulomb repulsions, and external fields \({\bf E},{\bf B}\). Both fields are applied perpendicular to the plane containing the chain. \({\bf E}\) continuously tunes the SOI interaction strength, while the action of \({\bf B}\) is taken into account both at the level of the orbital motion of the electrons (Peierls phase factors) and on the spin degrees of freedom (Zeeman term). Usually \(\lambda\ll U\) holds.
However, even if the SOI strength is small, it has essential effects on the physical properties of the system, since it breaks the spin-projection double degeneracy of each band. Under these conditions, the perturbative description of the system in both the small and the large coupling constant limits is questionable; hence exact methods are used in this study. On the other hand, this task has its difficulties because the system is non-integrable, so special techniques must be applied in order to follow this route. Along this line one uses the method based on positive semidefinite operator properties, which provides the first exact ground states for conducting polymers in the presence of SOI. Furthermore, for the first time this exact procedure is used in the case of conducting polymers in the presence of both the \(U\) and the \(V\) Coulomb interactions. Moreover, within this technique, the combined action of \({\bf B}\) on the spin and orbital degrees of freedom is also considered for the first time. The paper therefore concentrates on the development of the technique in the presented conditions: the transformation of the Hamiltonian into positive semidefinite form, the solution technique of the matching equations, and the construction strategy of the exact ground states. The study of the physical properties of the deduced ground states is only sketched, since this task requires considerable future work; hence the detailed presentation of the physical properties of all deduced phases is left for a future publication. Even so, the presented results show a broad spectrum of interesting physical characteristics, such as emerging charge density wave phases, the possibility of modifying a static charge distribution by a static external magnetic field, transitions between insulating and conducting phases driven by external magnetic fields, the emergence of effective flat bands produced by the local or non-local two-particle Coulomb interaction terms, peculiar ferromagnetic states, etc.

## Appendix A Another transformation in positive semidefinite form

We wrote the system Hamiltonian from Eq. (1) in the form \(\hat{H}=\hat{H}_{K}+\hat{H}_{U}+\hat{H}_{V}\). First we concentrate on the transformation of the \(\hat{H}_{U}\) Hubbard term. It can be observed that \[\hat{H}_{U} = \sum_{i}\sum_{n=1,2,...,6}U_{n}\hat{n}_{i,n}^{\uparrow}\hat{n}_{i,n}^{\downarrow}=\sum_{i}\sum_{n=1,2,...,6}U_{n}[\hat{n}_{i,n}^{\uparrow}\hat{n}_{i,n}^{\downarrow}-(\hat{n}_{i,n}^{\uparrow}+\hat{n}_{i,n}^{\downarrow})+1]\] \[+ \sum_{i}\sum_{n=1,2,...,6}U_{n}(\hat{n}_{i,n}^{\uparrow}+\hat{n}_{i,n}^{\downarrow})-\sum_{i}\sum_{n=1,2,...,6}U_{n}=\sum_{i}\sum_{n=1,2,...,6}U_{n}\hat{P}_{i,n}\] \[+ \sum_{i}\sum_{n=1,2,...,6}U_{n}(\hat{n}_{i,n}^{\uparrow}+\hat{n}_{i,n}^{\downarrow})-N_{c}\sum_{n=1,2,...,6}U_{n}, \tag{A1}\] where \(N_{c}\) represents the number of lattice sites (cells), while \(\hat{P}_{i,n}=[\hat{n}_{i,n}^{\uparrow}\hat{n}_{i,n}^{\downarrow}-(\hat{n}_{i,n}^{\uparrow}+\hat{n}_{i,n}^{\downarrow})+1]\) is a positive semidefinite operator, which attains its minimum possible eigenvalue, zero, when at least one electron is present on the \((i,n)\) site. After this step one transforms the \(\hat{H}_{V}\) term. For this, we introduce the site notation \((l,k)\), which covers all sites, not only the lattice sites.
In these conditions, denoting the nearest neighbor \(l,k\) sites (which are taken into consideration only once) by \(<l,k>\), one has \[\hat{H}_{V} = V\sum_{<k,l>}\hat{n}_{l}\hat{n}_{k}=V\sum_{<l,k>}[\hat{n}_{l} \hat{n}_{k}-2(\hat{n}_{l}+\hat{n}_{k})+4]+2V\sum_{<l,k>}(\hat{n}_{l}+\hat{n}_ {k}) \tag{11}\] \[- 4\sum_{<l,k>}V=\sum_{<l,k>}V\hat{R}_{<l,k>}+\sum_{i}\sum_{n=1,2,.. 6}2Vr_{n}\hat{n}_{i,n}-28N_{c}V\] where we have used the fact that 7 bonds are present in an unit cell, so \(\sum_{<l,k>}=7\sum_{i}\) where i runs over the lattice sites, furthermore \(\sum_{<l,k>}(\hat{n}_{l}+\hat{n}_{k})=\sum_{i}\sum_{n=1,2,..6}r_{n}\hat{n}_{i,n}\), where the coefficients \(r_{1}=3,r_{2}=2,r_{3}=2,r_{4}=3,r_{5}=3,r_{6}=1\) show how many time a given site in an unit cell appears in the \(\sum_{<l,k>}\) sum. Here the \(\hat{R}_{<l,k>}=[\hat{n}_{l}\hat{n}_{k}-2(\hat{n}_{l}+\hat{n}_{k})+4]\) is a positive semidefinite operator which provides its minimum possible eigenvalue zero, when the \(l,k\) nearest neighbor sites are at least once occupied, but such that at least one of them is doubly occupied. Now we introduce in the Hamiltonian \(\hat{H}\) from (1) the \(\hat{H}_{U}\) term from (11) and the \(\hat{H}_{V}\) term from (12) obtaining \[\hat{H} = \hat{H}_{K}+(\sum_{i}\sum_{n=1,2,..6}U_{n}\hat{P}_{i,n}+\sum_{i} \sum_{n=1,2,..6}U_{n}(\hat{n}_{i,n}^{\uparrow}+\hat{n}_{i,n}^{\downarrow})-N_{ c}\sum_{n=1,2,..6}U_{n}) \tag{13}\] \[+ (\sum_{<l,k>}V\hat{R}_{<l,k>}+\sum_{i}\sum_{n=1,2,..6}2Vr_{n}\hat {n}_{i,n}-28N_{c}V)\] This Hamiltonian from Eq.(13) will be written in the positive semidefinite form \[\hat{H}=\sum_{i,z,\sigma}\hat{B}^{\prime}_{i,z,\sigma}\hat{B}^{\prime\dagger}_ {i,z,\sigma}+\sum_{i}\sum_{n=1,2,..6}U_{n}\hat{P}_{i,n}+\sum_{<l,k>}V\hat{R}_ {<l,k>}+C \tag{14}\] where \(C\) is a constant, and \(\hat{B}^{\prime}_{i,z,\sigma}\) has exactly the form presented in Eq.(2) with new coefficients. Denoting the anticommutators \(\{\hat{B}^{\prime}_{i,z,\sigma},\hat{B}^{\prime\dagger}_{i,z,\sigma}\}=y_{z,\sigma}\) (note that \(y_{z,\sigma}\) is the same in each cell placed at the lattice site i), Eq.(14) becomes of the form \[\hat{H}=-\sum_{i,z,\sigma}\hat{B}^{\prime\dagger}_{i,z,\sigma}\hat{B}^{\prime }_{i,z,\sigma}+N_{c}\sum_{z,\sigma}y_{z,\sigma}+\sum_{i}\sum_{n=1,2,..6}U\hat{ P}_{i,n}+\sum_{<l,k>}V\hat{R}_{<l,k>}+C \tag{15}\] Now since the equations (13) and (15) contain the same Hamiltonian in the left side, equating the right sides on finds \[\hat{H}_{K}+\sum_{i,\sigma}(U_{n}+2Vr_{n})\hat{n}^{\sigma}_{i,n} -q_{U}\sum_{i,\sigma}\hat{n}^{\sigma}_{i,n}+q_{U}N-N_{c}\sum_{n=1,2,..6}U_{n}- 28N_{c}V=\] \[-\sum_{i,z,\sigma}\hat{B}^{\prime\dagger}_{i,z,\sigma}\hat{B}^{ \prime}_{i,z,\sigma}+N_{c}\sum_{z,\sigma}y_{z,\sigma}+C \tag{16}\] where \(N\) represents the number of electrons, and \(q_{U}\) is a constant (note that we added to the left side \(-q_{U}N+q_{U}N=0\)). 
Now we deduce the block operators \(\hat{B}^{\prime}_{i,z,\sigma}\) based on the transformation \[\sum_{i,z,\sigma}\hat{B}^{\prime\dagger}_{i,z,\sigma}\hat{B}^{\prime}_{i,z, \sigma}=-[\hat{H}_{K}+\sum_{i,n,\sigma}(U_{n}+2Vr_{n})\hat{n}^{\sigma}_{i,n}-q _{U}\sum_{i,\sigma}\hat{n}^{\sigma}_{i,n}] \tag{17}\] which taken into account in Eq.(16) provides for the constant \(C\) the value \[C=q_{U}N-N_{c}\sum_{z,\sigma}y_{z,\sigma}-N_{c}\sum_{n=1,2,..6}U_{n}-28VN_{c} \tag{18}\] In conclusions, based on (10) the starting Hamiltonian from (1) can be transformed in the positive semidefinite form (10), where the constant \(C\) is given in (11), and the block operator coefficient must be deduced based on (12). But one realizes that the block operators \(\hat{B}^{\prime}_{i,z,\sigma}\) are exactly the block operators \(\hat{B}_{i,z,\sigma}\) from (2) whose matching equations are presented in Eqs.(4-7) with the condition that at the level of the Hamiltonian parameters the following changes are made \[t^{\sigma,\sigma}_{i,j}\rightarrow-t^{\sigma,\sigma}_{i,j},\quad t^{\sigma, \sigma^{\prime}}_{i,j}\rightarrow-t^{\sigma,\sigma^{\prime}}_{i,j},\quad \epsilon^{\sigma,\sigma}_{i}\to q_{U}-(\epsilon^{\sigma,\sigma}_{i }+U_{i}+2Vr_{i}),\quad\epsilon^{\sigma,\sigma^{\prime}}\rightarrow-\epsilon^{ \sigma,\sigma^{\prime}} \tag{13}\] where \(\sigma^{\prime}\neq\sigma\) holds. Hence with the changes from Eq.(13) the Hamiltonian from (1) becomes of the form \[\hat{H}=\sum_{i,z,\sigma}\hat{B}_{i,z,\sigma}\hat{B}^{\dagger}_{i,z,\sigma}+ \sum_{i}\sum_{n=1,2,..6}U_{n}\hat{P}_{i,n}+\sum_{<l,k>}V\hat{R}_{<l,k>}+C, \tag{14}\] where the constant \(C\) is given in (11), and the matching equations for the \(\hat{B}_{i,z,\sigma}\) block operators defined in (2) remain does presented in Eqs.(4-7) at \({\bf B}=0\) and Eqs.(13- 16) at \({\bf B}\neq 0\) conditioned by the interchanges described in (13). ## Appendix B Solution of the matching equations at \(B=0\) #### b.0.1 The first ten equations. One notes that during this solution one uses \(t=t^{*}\), and concentrates on Eqs.(4). Since a solution of the matching equations is presented here for the first time in the paper, more details are mentioned during the presentation. From the rows 3-8 of (4), six \(a_{i,j}\) coefficients can be expressed in function of other \(a_{i^{\prime},j^{\prime}}\) coefficients, reducing in this manner the number of unknown variables by six: \[a_{1,5}=\frac{t}{a^{*}_{1,1}},\;a^{*}_{1,2}=\frac{t}{a_{1,1}},\; a_{3,3}=\frac{t}{a^{*}_{3,4}},\;a^{*}_{3,5}=\frac{t}{a_{3,4}},\;a^{*}_{2,3}= \frac{t_{h}}{a_{2,2}},\;a_{4,5}=\frac{t_{f}}{a^{*}_{4,6}},\] \[a_{1,2}=a_{1,5},\quad a_{3,3}=a_{3,5}. \tag{15}\] Using these relations, the remaining four equalities of (4) provide \[t_{h}>0. \tag{16}\] Indeed \[a_{2,5}=-\frac{t^{2}}{a_{2,2}^{*}|a_{1,1}|^{2}},\] \[t_{h}=\frac{|a_{2,2}|^{2}|a_{1,1}|^{2}}{|a_{3,4}|^{2}},\;=>\;|a_{3, 4}|^{2}=\frac{|a_{2,2}|^{2}|a_{1,1}|^{2}}{t_{h}},\] \[t_{c}=a_{5,7}^{*}a_{5,4}+b_{5,7}^{*}b_{5,4},\] \[b_{5,7}^{*}b_{5,4}={b^{\prime}}_{5,7}^{*}b_{5,4}^{\prime}, \tag{10}\] The here obtained first two relations are such obtained, that one keeps the first equality of (4), and one divides the first two equalities of (4). #### b.2.2 The second ten equations. In what will follows on concentrates to Eqs.(5). The procedure used here is as follows: From consecutive two equations of (5) we form pairs. From the first component of the pair we express the \(b\) coefficient. 
After this step by dividing the two components of the pair, one connects \(b^{\prime}\) to \(b\). In this manner, the following ten equations are obtained: \[{b^{\prime}}_{1,1}^{*}=\frac{t_{1,5}^{\uparrow,\downarrow}}{a_{1,5}},\] \[{b^{\prime}}_{1,1}^{*}=\alpha b_{1,1}^{*}\] \[b_{1,1}=\frac{t_{2,1}^{\uparrow,\downarrow}}{a_{1,2}^{*}},\] \[b_{1,1}=\alpha b_{1,1}^{\prime}\] \[b_{3,4}=\frac{t_{5,4}^{\uparrow,\downarrow}}{a_{3,5}^{*}},\] \[b_{3,4}=\alpha b_{3,4}^{\prime}\] \[{b^{\prime}}_{3,4}^{*}=\frac{t_{4,3}^{\uparrow,\downarrow}}{a_{ 3,3}^{*}},\] \[{b^{\prime}}_{3,4}^{*}=\alpha b_{3,4}^{*}\] \[t_{c}^{\uparrow,\downarrow}={b^{\prime}}_{5,7}^{*}a_{5,4}+a_{5,7}^{*}b_{5,4},\] \[\alpha_{c}=\frac{{b^{\prime}}_{5,7}^{*}a_{5,4}+a_{5,7}^{*}b_{5,4} }{b_{5,7}^{*}a_{5,4}+a_{5,7}^{*}b_{5,4}}, \tag{11}\] Now we divide the second and fourth line, and similarly the sixth and eight line respectively, obtaining \[|b_{1,1}^{\prime}|^{2}=|b_{1,1}|^{2},\quad|b_{3,4}^{\prime}|^{2}=|b_{3,4}|^{2 },\;=>\;|\alpha|^{2}=1 \tag{12}\] Introducing now the phase factors \(\phi_{\alpha},\phi_{1},\phi_{2}\), based on (B5), the following relations are obtained: \[b^{\prime}_{1,1}=e^{i\phi_{1}}b_{1,1},\quad b^{\prime}_{3,4}=e^{i \phi_{2}}b_{3,4},\quad\alpha=e^{i\phi_{\alpha}}, \tag{100}\] But, from the fourth and sixth line of (B4) one has \(1=e^{i\phi_{\alpha}}e^{i\phi_{1}}\), and \(1=e^{i\phi_{\alpha}}e^{i\phi_{2}}\) respectively, so \[e^{i\phi_{1}}=e^{-i\phi_{\alpha}},\quad e^{i\phi_{2}}=e^{-i\phi_ {\alpha}}, \tag{101}\] consequently, from (100) one obtains \[b^{\prime}_{1,1}=e^{-i\phi_{\alpha}}b_{1,1},\quad b^{\prime}_{3,4 }=e^{-i\phi_{\alpha}}b_{3,4}. \tag{102}\] I further note that from (B4), we further remain with the following relations \[b_{1,1}=\frac{t^{\uparrow,\downarrow}_{2,1}}{a^{*}_{1,2}}=\frac {t^{\uparrow,\downarrow^{*}}_{1,5}}{a^{*}_{1,5}}e^{i\phi_{\alpha}},\] \[b_{3,4}=\frac{t^{\uparrow,\downarrow}_{5,4}}{a^{*}_{3,5}}=\frac {t^{\uparrow,\downarrow^{*}}_{4,3}}{a^{*}_{3,3}}e^{i\phi_{\alpha}},\] \[t^{\uparrow,\downarrow}_{c}=b^{\prime*}_{5,7}a_{5,4}+a^{*}_{5,7 }b_{5,4},\] \[\alpha_{c}=\frac{b^{\prime*}_{5,7}a_{5,4}+a^{*}_{5,7}b_{5,4}}{b^{ *}_{5,7}a_{5,4}+a^{*}_{5,7}b^{\prime}_{5,4}}. \tag{103}\] Based on the second line of (8) we further have: \[t^{\uparrow,\downarrow}_{2,1}=t^{\uparrow,\downarrow^{*}}_{1, 5}e^{i\phi_{\alpha}},\quad t^{\uparrow,\downarrow}_{5,4}=t^{\uparrow, \downarrow^{*}}_{4,3}e^{i\phi_{\alpha}}, \tag{104}\] and what still remains to be analyzed from the second group of ten equations can be summarized as follows \[b_{1,1}=\frac{t^{\uparrow,\downarrow}_{2,1}}{a^{*}_{1,2}},\] \[b_{3,4}=\frac{t^{\uparrow,\downarrow}_{5,4}}{a^{*}_{3,5}},\] \[t^{\uparrow,\downarrow}_{c}=b^{\prime*}_{5,7}a_{5,4}+a^{*}_{5,7 }b_{5,4},\] \[\alpha_{c}=\frac{b^{\prime*}_{5,7}a_{5,4}+a^{*}_{5,7}b_{5,4}}{b^{ *}_{5,7}a_{5,4}+a^{*}_{5,7}b^{\prime}_{5,4}}, \tag{105}\] The ten equations provided by the on-site one particle potentials. _3.a) The relations relating the b coefficients._ Taking into account that \(\epsilon_{1}^{\uparrow,\uparrow}=\epsilon_{1}^{\downarrow,\downarrow}\), and \(\epsilon_{4}^{\uparrow,\uparrow}=\epsilon_{4}^{\downarrow,\downarrow}\), based on the 5-6 and 7-8 equalities of (6), one finds \[|b_{1,1}^{\prime}|^{2}+|b_{5,7}^{\prime}|^{2}=|b_{1,1}|^{2}+|b_{5,7 }|^{2},\] \[|b_{3,4}^{\prime}|^{2}+|b_{5,4}^{\prime}|^{2}=|b_{3,4}|^{2}+|b_{5, 4}|^{2}, \tag{101}\] so taking into account (100), we find \[|b_{5,7}^{\prime}|^{2}=|b_{5,7}|^{2},\quad|b_{5,4}^{\prime}|^{2}=|b_{5,4}|^{2}. 
\tag{102}\] Now as in the case of (101), we introduce \[b_{5,4}^{\prime}=e^{i\phi_{3}}b_{5,4},\quad b_{5,7}^{\prime}=e^{i\phi_{4}}b_{5,7}. \tag{103}\] Introducing these results in the last equality of (102), one finds \[b_{5,7}^{*}b_{5,4}=e^{-i\phi_{4}}b_{5,7}^{*}e^{i\phi_{3}}b_{5,4},\;=>\;e^{i \phi_{3}}=e^{i\phi_{4}}. \tag{104}\] Introducing these results in the last equality of (100), one obtains \[\alpha_{c}=\frac{e^{-i\phi_{3}}b_{5,7}^{*}a_{5,4}+a_{5,7}^{*}b_{5,4}}{b_{5,7}^ {*}a_{5,4}+a_{5,7}^{*}b_{5,4}e^{i\phi_{3}}},\;=>\;\alpha_{c}=e^{-i\phi_{3}}\;= >\;\alpha_{c}=e^{i\phi_{\alpha c}}=e^{-i\phi_{3}}. \tag{105}\] After this step, using \(\epsilon_{i}^{\sigma,-\sigma}=0\), \(i=1,4\), the first two equalities of (7) provide information relating the \(b_{5,4}\) and \(b_{5,7}\) coefficients as follows \[(a_{5,7}^{*}b_{5,7}+b_{5,7}^{*}a_{5,7}e^{i\phi_{\alpha c}})=-(a_{ 1,1}^{*}b_{1,1}+b_{1,1}^{*}a_{1,1}e^{i\phi_{\alpha}}),\] \[(a_{5,4}^{*}b_{5,4}+b_{5,4}^{*}a_{5,4}e^{i\phi_{\alpha c}})=-(a_{ 3,4}^{*}b_{3,4}+b_{3,4}^{*}a_{3,4}e^{i\phi_{\alpha}}). \tag{106}\] The obtained equalities (106) have an interesting structure, E.g. the first equality and its complex conjugate provide \[(a_{5,7}^{*}b_{5,7}+b_{5,7}^{*}a_{5,7}e^{i\phi_{\alpha c}})=-(a_ {1,1}^{*}b_{1,1}+b_{1,1}^{*}a_{1,1}e^{i\phi_{\alpha}}),\] \[(a_{5,7}b_{5,7}^{*}+b_{5,7}a_{5,7}^{*}e^{-i\phi_{\alpha c}})=-(a_ {1,1}b_{1,1}^{*}+b_{1,1}a_{1,1}^{*}e^{-i\phi_{\alpha}})\;=>\] \[e^{-i\phi_{\alpha c}}(a_{5,7}b_{5,7}^{*}e^{i\phi_{\alpha c}}+b_{ 5,7}a_{5,7}^{*})=-e^{-i\phi_{\alpha}}(a_{1,1}b_{1,1}^{*}e^{i\phi_{\alpha}}+b_ {1,1}a_{1,1}^{*}). \tag{107}\] In this case based on the first row of (101), the third row will satisfy \[e^{-i\phi_{\alpha_{c}}}=e^{-i\phi_{\alpha}},\,=>\,\,e^{i\phi_{\alpha_{c}}}=e^{i \phi_{\alpha}}. \tag{102}\] Similar result is obtained from the second equality of (100), hence (100) can be written as \[(a_{5,7}^{*}b_{5,7}+b_{5,7}^{*}a_{5,7}e^{i\phi_{\alpha}}) =-(a_{1,1}^{*}b_{1,1}+b_{1,1}^{*}a_{1,1}e^{i\phi_{\alpha}}),\] \[(a_{5,4}^{*}b_{5,4}+b_{5,4}^{*}a_{5,4}e^{i\phi_{\alpha}}) =-(a_{3,4}^{*}b_{3,4}+b_{3,4}^{*}a_{3,4}e^{i\phi_{\alpha}}). \tag{103}\] _3.b) The relations relating the a coefficients_ Now one analyses the equalities 3-4 of (6) in which we use as well (100) obtaining \[\epsilon_{2} = \frac{t^{2}}{|a_{1,1}|^{2}}+|a_{2,2}|^{2},\] \[\epsilon_{2} = \frac{t_{h}^{2}}{|a_{2,2}|^{2}}+\frac{t^{2}}{|a_{3,4}|^{2}},\,=> \,\epsilon_{2}=\frac{t_{h}^{2}}{|a_{2,2}|^{2}}+\frac{t^{2}t_{h}}{|a_{1,1}|^{2} |a_{2,2}|^{2}} \tag{104}\] where in the last step one uses the second line of (101). From here, introducing the notations\(X=\frac{t^{2}}{|a_{1,1}|^{2}}\), and \(Y=|a_{2,2}|^{2}\), the following system of linear equations is found \[\epsilon_{2} = X+Y,\] \[Y = \frac{t_{h}^{2}}{\epsilon_{2}}+\frac{t_{h}X}{\epsilon_{2}}, \tag{105}\] from where \[X=\frac{t^{2}}{|a_{1,1}|^{2}}=\epsilon_{2}-t_{h},\,=>\,\,a_{1,1} =\frac{|t|e^{i\theta_{1}}}{\sqrt{\epsilon_{2}-t_{h}}},\,\epsilon_{2}>t_{h}>0,\] \[Y=|a_{2,2}|^{2}=t_{h},\,=>\,\,a_{2,2}=\sqrt{t_{h}}e^{i\theta_{2}}. \tag{106}\] Here, \(\theta_{1},\theta_{2}\) are arbitrary phases. Now, from the first two equations of (101), one finds \[a_{2,5}=-\frac{\epsilon_{2}-t_{h}}{\sqrt{t_{h}}}e^{i\theta_{2}},\quad a_{3,4} =\frac{|t|}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{3}} \tag{107}\] where again, \(\theta_{3}\) is an arbitrary phase. One underlines, that after all these steps, from (105) only the third equation remains non-analyzed. 
Now, using the first line of (6), from (100) one finds \[a_{1,2}=a_{1,5}=sign(t)\sqrt{\epsilon_{2}-t_{h}}\,e^{i\theta_{1 }},a_{3,3}=a_{3,5}=sign(t)\sqrt{\epsilon_{2}-t_{h}}\,e^{i\theta_{3}},\] \[a_{2,2}=a_{2,3}=\sqrt{t_{h}}\,e^{i\theta_{2}},\,a_{4,6}=\sqrt{ \epsilon_{4}}\,e^{i\theta_{4}},\,a_{4,5}=\frac{t_{f}}{\sqrt{\epsilon_{4}}}\,e ^{i\theta_{4}}, \tag{108}\] where \(\theta_{4}\) is an arbitrary phase. 3.c) The results relating the b coefficients Using (100), the first two equalities of (102) provide \[b_{1,1}=sign(t)\frac{t_{2,1}^{\uparrow,\downarrow}}{\sqrt{\epsilon_{2}-t_{h}}}e^{ i\theta_{1}},\,b_{1,1}^{\prime}=e^{-i\phi_{\alpha}}b_{1,1},\,b_{3,4}=sign(t) \frac{t_{5,4}^{\uparrow,\downarrow}}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{3} },\,b_{3,4}^{\prime}=e^{-i\phi_{\alpha}}b_{3,4}. \tag{103}\] One further notes, that from (102), only one equality remains to be analyzed, namely the third one. _3.d) The remaining equations_ After these steps we underline, that till now only seven equations have not been used, namely, the following ones \[t_{c}=a_{5,7}^{*}a_{5,4}+b_{5,7}^{*}b_{5,4},\] \[t_{c}^{\uparrow,\downarrow}=b_{5,7}^{*}a_{5,4}e^{i\phi_{\alpha} }+a_{5,7}^{*}b_{5,4},\] \[(a_{5,7}^{*}b_{5,7}+b_{5,7}^{*}a_{5,7}e^{i\phi_{\alpha}})=-(a_{1,1}^{*}b_{1,1}+b_{1,1}^{*}a_{1,1}e^{i\phi_{\alpha}}),\] \[(a_{5,4}^{*}b_{5,4}+b_{5,4}^{*}a_{5,4}e^{i\phi_{\alpha}})=-(a_{3,4}^{*}b_{3,4}+b_{3,4}^{*}a_{3,4}e^{i\phi_{\alpha}}),\] \[\epsilon_{3}=\epsilon_{5}^{\uparrow,\uparrow}=\epsilon_{5}^{ \downarrow,\downarrow}=|a_{4,5}|^{2}+|a_{1,5}|^{2}+|a_{2,5}|^{2}+|a_{3,5}|^{2},\] \[\epsilon_{1}=\epsilon_{1}^{\downarrow,\downarrow}=|a_{1,1}|^{2}+| a_{5,7}|^{2}+|b_{1,1}|^{2}+|b_{5,7}|^{2},\] \[\epsilon_{1}=\epsilon_{4}^{\downarrow,\downarrow}=|a_{3,4}|^{2}+| a_{5,4}|^{2}+|b_{3,4}|^{2}+|b_{5,4}|^{2}. \tag{104}\] Introducing here the deduced results one finds the following system of equations \[t_{c}=a_{5,7}^{*}a_{5,4}+b_{5,7}^{*}b_{5,4},\] \[t_{c}^{\uparrow,\downarrow}=b_{5,7}^{*}a_{5,4}e^{i\phi_{\alpha} }+a_{5,7}^{*}b_{5,4},\] \[(a_{5,7}^{*}b_{5,7}+b_{5,7}^{*}a_{5,7}e^{i\phi_{\alpha}})=-\frac {t}{\epsilon_{2}-t_{h}}(t_{2,1}^{\uparrow,\downarrow}+{t_{2,1}^{\uparrow, \downarrow}}^{*}e^{i\phi_{\alpha}}),\] \[(a_{5,4}^{*}b_{5,4}+b_{5,4}^{*}a_{5,4}e^{i\phi_{\alpha}})=-\frac {t}{\epsilon_{2}-t_{h}}(t_{5,4}^{\uparrow,\downarrow}+{t_{5,4}^{\uparrow, \downarrow}}^{*}e^{i\phi_{\alpha}}),\] \[\epsilon_{3}=\frac{t_{f}^{2}}{\epsilon_{4}}+\frac{\epsilon_{2}^{ 2}-t_{h}^{2}}{t_{h}},\] \[\epsilon_{1}-\frac{t^{2}+{t_{2,1}^{\uparrow,\downarrow}}^{2}}{ \epsilon_{2}-t_{h}}=|a_{5,7}|^{2}+|b_{5,7}|^{2},\] \[\epsilon_{1}-\frac{t^{2}+{t_{5,4}^{\uparrow,\downarrow}}^{2}}{ \epsilon_{2}-t_{h}}=|a_{5,4}|^{2}+|b_{5,4}|^{2}. \tag{105}\] In our case \(t^{\uparrow,\downarrow}\) are real parameters, and \(e^{i\phi_{\alpha}}=-1\) holds. Hence the third and fourth equations of (B) provide \[a^{*}_{5,7}b_{5,7}=a_{5,7}b^{*}_{5,7},\] \[a^{*}_{5,4}b_{5,4}=a_{5,4}b^{*}_{5,4}. \tag{100}\] But this means that \(a_{5,7},b_{5,7}\) have equal phases, and similarly, \(a_{5,4},b_{5,4}\) have equal phases. I.e. \[a_{5,7}=|a_{5,7}|e^{i\chi},\quad b_{5,7}=|b_{5,7}|e^{i\chi},\] \[a_{5,4}=|a_{5,4}|e^{i\eta},\quad b_{5,4}=|b_{5,4}|e^{i\eta}, \tag{101}\] Introducing now (B) in the first equality of (B), one finds \[t_{c}=|a_{5,7}|e^{-i\chi}|a_{5,4}|e^{i\eta}+|b_{5,7}|e^{-i\chi}|b_ {5,4}|e^{i\eta},\] \[\chi=\eta\pm 2n\pi,n\in N. 
\tag{102}\] Now one introduces two notations, namely \(\gamma_{1}\) and \(\gamma_{2}\) defined as \[\gamma_{1}=\epsilon_{1}-\frac{t^{2}+t_{2,1}^{\uparrow,2}}{ \epsilon_{2}-t_{h}},\] \[\gamma_{2}=\epsilon_{1}-\frac{t^{2}+t_{5,4}^{\uparrow,2}}{ \epsilon_{2}-t_{h}} \tag{103}\] In this manner, the first, and last two equations from (B) build up a closed system of equations \[t_{c}=|a_{5,7}||a_{5,4}|+|b_{5,7}||b_{5,4}|,\] \[t_{c}^{\uparrow,\downarrow}=|a_{5,7}||b_{5,4}|-|b_{5,7}||a_{5,4}|,\] \[\gamma_{1}=|a_{5,7}|^{2}+|b_{5,7}|^{2},\] \[\gamma_{2}=|a_{5,4}|^{2}+|b_{5,4}|^{2}. \tag{104}\] In what will follows, we must solve the system of equations (B) In order to do this job, we must observe, that (B) not contains independent equations because it uses a relation in between the Hamiltonian parameters. Indeed, taking the square of the first two equations and adding them, one finds \[|a_{5,7}|^{2}|a_{5,4}|^{2}+|b_{5,7}|^{2}|b_{5,4}|^{2}+|a_{5,7}|^{2 }|b_{5,4}|^{2}+|b_{5,7}|^{2}|a_{5,4}|^{2}=t_{c}^{2}+t_{c}^{\uparrow,\downarrow 2 },\rightarrow\] \[|a_{5,7}|^{2}(|a_{5,4}|^{2}+|b_{5,4}|^{2})+|b_{5,7}|^{2}|(|a_{5,4}|^{2}+|b_{5,4}|^{2})=t_{c}^{2}+t_{c}^{\uparrow,\downarrow 2},\rightarrow\] \[\gamma_{1}\gamma_{2}=t_{c}^{2}+t_{c}^{\uparrow,\downarrow 2}. \tag{105}\] From the second equation of (B33) \(|b_{5,4}|=\frac{t_{c}^{\uparrow\downarrow}+|b_{5,7}||a_{5,4}|}{|a_{5,7}|}\), and this introduced in the first equality, allows to obtain the expression of \(|b_{5,7}|\) in function of the \(|a|\) coefficients. Using also the \(\gamma_{2}\) expression, i.e. the fourth equation of (B33), one obtains \[|b_{5,7}|=\frac{t_{c}|a_{5,7}|-\gamma_{2}|a_{5,4}|}{t_{c}^{\uparrow\downarrow}}.\] (B35) Similarly, from the second equation of (B33) one has \(|b_{5,7}|=\frac{-t_{c}^{\uparrow\downarrow}+|b_{5,4}||a_{5,7}|}{|a_{5,4}|}\). This introduced in the first equation one can express \(|b_{5,7}|\) in function of \(|a|\) parameters, and using the \(\gamma_{1}\) expression from the third equation one finds \[|b_{5,4}|=\frac{\gamma_{1}|a_{5,7}|-t_{c}|a_{5,4}|}{t_{c}^{\uparrow\downarrow}}.\] (B36) In this moment the last two equations from (B33) must be analyzed. Introducing here all previously deduced results, and introducing the notations \(x=|a_{5,4}|\), \(y=|a_{5,7}|\), the following system of equations are obtained \[\gamma_{1} =x^{2}+\bigg{(}\frac{\gamma_{1}y-t_{c}x}{t_{c}^{\uparrow, \downarrow}}\bigg{)}^{2},\] \[\gamma_{2} =y^{2}+\bigg{(}\frac{t_{c}y-\gamma_{2}x}{t_{c}^{\uparrow, \downarrow}}\bigg{)}^{2}.\] (B37) Now it can be observed that the here obtained two equations, via \(y\to\sqrt{\gamma_{2}/\gamma_{1}}\:x\), \(x\to\sqrt{\gamma_{1}/\gamma_{2}}\:y\), transform in themselves, hence \[|a_{5,7}|=y=\sqrt{\gamma_{2}/\gamma_{1}}\:x,\:x=|a_{5,4}|\] (B38) Using (B38)), the relation (B36) can be written as \[\gamma_{1}=x^{2}+\frac{x^{2}}{t_{c}^{\uparrow,\downarrow}}^{2}[\gamma_{1}\sqrt {\frac{\gamma_{2}}{\gamma_{1}}}-t_{c}]^{2}.\] (B39) Based on (B39) and using (B34), one finds \[|a_{5,4}|=x=\frac{\sqrt{\gamma_{1}}}{\sqrt{1+\frac{(\sqrt{\gamma_ {1}\gamma_{2}}-t_{c})^{2}}{t_{c}^{\uparrow,\downarrow}}^{2}}},\quad|a_{5,7}|= y=\frac{\sqrt{\gamma_{2}}}{\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{t_ {c}^{\uparrow,\downarrow}}^{2}}},\] \[\gamma_{1}\gamma_{2}=t_{c}^{2}+t_{c}^{\uparrow,\downarrow}{}^{2}.\] (B40) ### Summary of the obtained results #### b.1.1 The deduced block operator parameters. 
The analyzed matching equations provided the following results: \[a_{1,1} =\frac{|t|e^{i\theta_{1}}}{\sqrt{\epsilon_{2}-t_{h}}},\,a_{1,2}=a_{ 1,5}=sign(t)\sqrt{\epsilon_{2}-t_{h}}\;e^{i\theta_{1}},\] \[a_{2,5} =-\frac{\epsilon_{2}-t_{h}}{\sqrt{t_{h}}}e^{i\theta_{2}},\;a_{2,2 }=a_{2,3}=\sqrt{t_{h}}\;e^{i\theta_{2}},\] \[a_{3,4} =\frac{|t|}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{3}},\;a_{3,3}=a _{3,5}=sign(t)\sqrt{\epsilon_{2}-t_{h}}\;e^{i\theta_{3}},\] \[a_{4,6} =\sqrt{\epsilon_{4}}\;e^{i\theta_{4}},\;a_{4,5}=\frac{t_{f}}{ \sqrt{\epsilon_{4}}}\;e^{i\theta_{4}},\] \[a_{5,4} =\frac{\sqrt{\gamma_{1}}}{\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma _{2}}-t_{c})^{2}}{t_{c}^{\uparrow,\downarrow^{2}}}}}e^{i\theta_{5}},\;a_{5,7} =\frac{\sqrt{\gamma_{2}}}{\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{ 2}}{t_{c}^{\uparrow,\downarrow^{2}}}}}e^{i\theta_{5}},\] \[b_{1,1} =sign(t)\frac{t_{2,1}^{\uparrow,\downarrow}}{\sqrt{\epsilon_{2}- t_{h}}}e^{i\theta_{1}},\;b_{1,1}^{\prime}=-sign(t)\frac{t_{2,1}^{\uparrow, \downarrow}}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{1}},\] \[b_{3,4} =sign(t)\frac{t_{5,4}^{\uparrow,\downarrow}}{\sqrt{\epsilon_{2}- t_{h}}}e^{i\theta_{3}},\;b_{3,4}^{\prime}=-sign(t)\frac{t_{5,4}^{\uparrow, \downarrow}}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{3}},\] \[b_{5,4} =\frac{\gamma_{1}\sqrt{\gamma_{2}}-t_{c}\sqrt{\gamma_{1}}}{t_{c} ^{\uparrow,\downarrow}\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{ t_{c}^{\uparrow,\downarrow^{2}}}}}e^{i\theta_{5}},\;b_{5,4}^{\prime}=-\frac{ \gamma_{1}\sqrt{\gamma_{2}}-t_{c}\sqrt{\gamma_{1}}}{t_{c}^{\uparrow,\downarrow }\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{t_{c}^{\uparrow, \downarrow^{2}}}}}e^{i\theta_{5}},\] \[b_{5,7} =\frac{t_{c}\sqrt{\gamma_{2}}-\gamma_{2}\sqrt{\gamma_{1}}}{t_{c} ^{\uparrow,\downarrow}\sqrt{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{ t_{c}^{\uparrow,\downarrow^{2}}}}}e^{i\theta_{5}},\;b_{5,7}^{\prime}=-\frac{t_{c}\sqrt{ \gamma_{2}}-\gamma_{2}\sqrt{\gamma_{1}}}{t_{c}^{\uparrow,\downarrow}\sqrt{1+ \frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{t_{c}^{\uparrow,\downarrow^{2}} }}}e^{i\theta_{5}}. \tag{101}\] The deduced solution is placed in the following parameter domain \[\epsilon_{1}>0,\;\epsilon_{2}>0,\;\epsilon_{3}>0,\;\epsilon_{4}>0, \;t_{h}>0,\;\epsilon_{2}-t_{h}>0,\] \[\gamma_{1}=\epsilon_{1}-\frac{t^{2}+t_{2,1}^{\uparrow,\downarrow^ {2}}}{\epsilon_{2}-t_{h}}>0,\;\gamma_{2}=\epsilon_{1}-\frac{t^{2}+t_{5,4}^{ \uparrow,\downarrow^{2}}}{\epsilon_{2}-t_{h}}>0,\] \[\epsilon_{3}=\frac{t_{f}^{2}}{\epsilon_{4}}+\frac{\epsilon_{2}^{ 2}-t_{h}^{2}}{t_{h}},\;\gamma_{1}\gamma_{2}=t_{c}^{2}+t_{c}^{\uparrow, \downarrow^{2}}. \tag{102}\] I note that from symmetry considerations one has \(t_{2,1}^{\uparrow,\downarrow}=t_{5,4}^{\uparrow,\downarrow}=t^{\uparrow, \downarrow}\), hence \(\gamma_{1}=\gamma_{2}=\gamma\) holds \[\gamma_{1}=\gamma_{2}=\gamma. \tag{103}\] The check of the solutions of the matching equations Using this matching equation solution, one exemplifies how the obtained results can be checked. First we check the obtained results for the \(z=5\) block. One starts with the equation written for the \(t_{c}\) hopping element. 
\[t_{c}=a_{5,7}^{*}a_{5,4}+b_{5,7}^{*}b_{5,4}=\frac{\sqrt{\gamma_{1} \gamma_{2}}}{1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{t_{c}^{\uparrow, \downarrow}}}+\frac{(\gamma_{1}\sqrt{\gamma_{2}}-t_{c}\sqrt{\gamma_{1}})(t_{ c}\sqrt{\gamma_{2}}-\gamma_{2}\sqrt{\gamma_{1}})}{t_{c}^{\uparrow, \downarrow}}=\] \[\frac{t_{c}^{\uparrow,\downarrow}}{t_{c}^{\uparrow,\downarrow}} \sqrt{\gamma_{1}\gamma_{2}}+(2t_{c}\gamma_{1}\gamma_{2}-t_{c}^{2}\sqrt{\gamma _{1}\gamma_{2}}-\gamma_{1}\gamma_{2}\sqrt{\gamma_{1}\gamma_{2}})}{t_{c}^{ \uparrow,\downarrow}}=\] \[\frac{(\gamma_{1}\gamma_{2}-t_{c}^{2})\sqrt{\gamma_{1}\gamma_{2} }+(2t_{c}\gamma_{1}\gamma_{2}-t_{c}^{2}\sqrt{\gamma_{1}\gamma_{2}}-\gamma_{1} \gamma_{2}\sqrt{\gamma_{1}\gamma_{2}})}{(\gamma_{1}\gamma_{2}-t_{c}^{2})+( \sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}=\] \[\frac{t_{c}(2\gamma_{1}\gamma_{2}-2t_{c}\sqrt{\gamma_{1}\gamma_{ 2}})}{(2\gamma_{1}\gamma_{2}-2t_{c}\sqrt{\gamma_{1}\gamma_{2}})}=t_{c} \tag{101}\] After this step the equation of \(t_{c}^{\uparrow,\downarrow}\) follows: \[t_{c}^{\uparrow,\downarrow}=b_{5,7}^{\prime*}a_{5,4}+a_{5,7}^{*} b_{5,4}=-\frac{(t_{c}\sqrt{\gamma_{2}}-\gamma_{2}\sqrt{\gamma_{1}})\sqrt{ \gamma_{1}}}{t_{c}^{\uparrow,\downarrow}(1+\frac{(\sqrt{\gamma_{1}\gamma_{2}} -t_{c}\sqrt{\gamma_{1}})\sqrt{\gamma_{2}}}{t_{c}^{\uparrow,\downarrow}})}+ \frac{(\gamma_{1}\sqrt{\gamma_{2}}-t_{c}\sqrt{\gamma_{1}})\sqrt{\gamma_{2}}}{ t_{c}^{\uparrow,\downarrow}(1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{t_{c}^{ \uparrow,\downarrow}})}=\] \[t_{c}^{\uparrow,\downarrow}\frac{(-t_{c}\sqrt{\gamma_{1}\gamma_ {2}}+\gamma_{1}\gamma_{2})+(\gamma_{1}\gamma_{2}-t_{c}\sqrt{\gamma_{1}\gamma_ {2}})}{t_{c}^{\uparrow,\downarrow}}=t_{c}^{\uparrow,\downarrow}\frac{2\gamma_{ 1}\gamma_{2}-2t_{c}\sqrt{\gamma_{1}\gamma_{2}}}{(\gamma_{1}\gamma_{2}-t_{c}^{ 2})+(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}=\] \[t_{c}^{\uparrow,\downarrow}\frac{2\gamma_{1}\gamma_{2}-2t_{c} \sqrt{\gamma_{1}\gamma_{2}}}{2\gamma_{1}\gamma_{2}-2t_{c}\sqrt{\gamma_{1} \gamma_{2}}}=t_{c}^{\uparrow,\downarrow}. \tag{102}\] After this step one checks the two equations obtained for \(\epsilon_{1}\), (see also (101)) \[\gamma=\gamma_{1}=|a_{5,7}|^{2}+|b_{5,7}|^{2}=\frac{\gamma_{2}}{ 1+\frac{(\sqrt{\gamma_{1}\gamma_{2}}-t_{c})^{2}}{t_{c}^{\uparrow,\downarrow} }+\frac{(t_{c}\sqrt{\gamma_{2}}-\gamma_{2}\sqrt{\gamma_{1}})^{2}}{t_{c}^{ \uparrow,\downarrow}}=}\] \[\gamma_{2}\frac{t_{c}^{\uparrow,\downarrow}{}^{2}+(t_{c}^{2}+ \gamma_{2}\gamma_{1}-2t_{c}\sqrt{\gamma_{2}\gamma_{1}})}{t_{c}^{\uparrow, \downarrow}{}^{2}+(\sqrt{\gamma_{2}\gamma_{1}}-t_{c})^{2}}=\gamma_{2}=\gamma. \tag{103}\] Similar result is obtained from the second equality written for \(\epsilon_{1}\) (see the last equality of (100)) And finally, the \(\epsilon_{1}^{\uparrow,\downarrow}\) equation (see (7)) \[\epsilon_{1}^{\uparrow,\downarrow}=a_{1,1}^{*}b_{1,1}+a_{5,7}^{*} b_{5,7}+b_{1,1}^{\prime*}a_{1,1}+b_{5,7}^{\prime*}a_{5,7}=\] \[|a_{1,1}||b_{1,1}|-|a_{1,1}||b_{1,1}|+|a_{5,7}||b_{5,7}|-|a_{5,7} ||b_{5,7}|=0. \tag{104}\] The other matching conditions are trivially satisfied. 
Final results relating the block operator parameters If one uses all the deduced results, including the symmetry considerations, the block operator parameters can be expressed in final form as follows: \[a_{1,1}=\frac{|t|e^{i\theta_{1}}}{\sqrt{\epsilon_{2}-t_{h}}},\,a_{1,2}=a_{1,5}=sign(t)\sqrt{\epsilon_{2}-t_{h}}\,e^{i\theta_{1}},\] \[a_{2,5}=-\frac{\epsilon_{2}-t_{h}}{\sqrt{t_{h}}}e^{i\theta_{2}}, \,a_{2,2}=a_{2,3}=\sqrt{t_{h}}\,e^{i\theta_{2}},\] \[a_{3,4}=\frac{|t|}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{3}},\,a_ {3,3}=a_{3,5}=sign(t)\sqrt{\epsilon_{2}-t_{h}}\,e^{i\theta_{3}},\] \[a_{4,6}=\sqrt{\epsilon_{4}}\,e^{i\theta_{4}},\,a_{4,5}=\frac{t_{ f}}{\sqrt{\epsilon_{4}}}\,e^{i\theta_{4}},\] \[a_{5,4}=a_{5,7}=\sqrt{\frac{\gamma+t_{c}}{2}}e^{i\theta_{5}},\] \[b_{1,1}=sign(t)\frac{t^{\uparrow,\downarrow}}{\sqrt{\epsilon_{2 }-t_{h}}}e^{i\theta_{1}},\,b^{\prime}_{1,1}=-sign(t)\frac{t^{\uparrow, \downarrow}}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{1}},\] \[b_{3,4}=sign(t)\frac{t^{\uparrow,\downarrow}}{\sqrt{\epsilon_{2 }-t_{h}}}e^{i\theta_{3}},\,b^{\prime}_{3,4}=-sign(t)\frac{t^{\uparrow, \downarrow}}{\sqrt{\epsilon_{2}-t_{h}}}e^{i\theta_{3}},\] \[b_{5,4}=\frac{\gamma-t_{c}}{t^{\uparrow,\downarrow}_{c}}\sqrt{ \frac{\gamma+t_{c}}{2}}e^{i\theta_{5}},\,b^{\prime}_{5,4}=-\frac{\gamma-t_{c} }{t^{\uparrow,\downarrow}_{c}}\sqrt{\frac{\gamma+t_{c}}{2}}e^{i\theta_{5}},\] \[b_{5,7}=\frac{t_{c}-\gamma}{t^{\uparrow,\downarrow}_{c}}\sqrt{ \frac{\gamma+t_{c}}{2}}e^{i\theta_{5}},\,b^{\prime}_{5,7}=-\frac{t_{c}-\gamma} {t^{\uparrow,\downarrow}_{c}}\sqrt{\frac{\gamma+t_{c}}{2}}e^{i\theta_{5}}. \tag{100}\] The conditions under which the deduced solution hold (i.e. the parameter space domain where the solution is valid) can be given as follows \[\epsilon_{1}>0,\,\epsilon_{4}>0,\,t_{h}>0,\,\epsilon_{2}-t_{h}>0,\] \[\gamma=\epsilon_{1}-\frac{t^{2}+t^{\uparrow,\downarrow}}{\epsilon _{2}-t_{h}},\gamma+t_{c}>0,\] \[\epsilon_{3}=\frac{t_{f}^{2}}{\epsilon_{4}}+\frac{\epsilon_{2}^{ 2}-t_{h}^{2}}{t_{h}},\,\gamma^{2}=t_{c}^{2}+t_{c}^{\uparrow,\downarrow}{}^{2}. \tag{101}\] ## Appendix C Solution of the matching equations at \(B\neq 0\) We start by (15) from where all b coefficients can be written in terms of the a coefficients \[{b^{\prime}}_{1,1}^{*}=-\frac{\lambda e^{i\phi_{3}}}{a_{1,5,d}}, \,b^{*}_{1,1}=\frac{\lambda e^{i\phi_{3}}}{a_{1,5,u}},\,b^{*}_{1,1}=\frac{ \lambda e^{i\phi_{2}}}{a^{*}_{1,2,u}},\,b^{\prime}_{1,1}=-\frac{\lambda e^{i \phi_{2}}}{a^{*}_{1,2,d}},\,b_{3,4}=-\frac{\lambda e^{i\phi_{3}}}{a^{*}_{3,5, u}}, \tag{102}\] \[{b^{\prime}}_{3,4}=\frac{\lambda e^{i\phi_{3}}}{a^{*}_{3,5,d}}, \,{b^{\prime}}_{3,4}^{*}=\frac{\lambda e^{i\phi_{2}}}{a_{3,3,d}},\,b^{*}_{3,4 }=-\frac{\lambda e^{i\phi_{2}}}{a_{3,3,u}},\,{b^{\prime}}_{5,7}^{*}=\frac{ \lambda_{c}-a^{*}_{5,7,u}b_{5,4}}{a_{5,4,d}},\,b^{\prime}_{5,4}=\frac{-\lambda _{c}-a_{5,4,u}b^{*}_{5,7}}{a^{*}_{5,7,d}},\] so we can concentrate (at least up to the study of the last two bond operator parameters from (2)) to the \(a_{i,j,\alpha}\) parameters. 
Now, we write the first 12 equations from (101) in the following form \[a_{1,5,u}=\frac{te^{i\phi_{3}}}{a_{1,1,u}^{*}},\;a_{1,2,u}^{*}= \frac{te^{i\phi_{2}}}{a_{1,1,u}^{*}},\;a_{1,5,d}=\frac{te^{i\phi_{3}}}{a_{1,1,d} ^{*}},\;a_{1,2,d}^{*}=\frac{te^{i\phi_{2}}}{a_{1,1,d}^{*}},\] \[a_{3,3,u}=\frac{te^{i\phi_{2}}}{a_{3,4,u}^{*}},\;a_{3,5,u}^{*}= \frac{te^{i\phi_{3}}}{a_{3,4,u}},\;a_{3,3,d}=\frac{te^{i\phi_{2}}}{a_{3,4,d}^{* }},\;a_{3,5,d}^{*}=\frac{te^{i\phi_{3}}}{a_{3,4,d}^{*}},\] \[a_{2,3,u}^{*}=\frac{t_{h}e^{i\phi_{1}}}{a_{2,2,u}},\;a_{2,3,d}^{ *}=\frac{t_{h}e^{i\phi_{1}}}{a_{2,2,d}},\;a_{4,5,u}=\frac{t_{f}}{a_{4,6,u}^{*} },\;a_{4,5,d}=\frac{t_{f}}{a_{4,6,d}^{*}}. \tag{102}\] From the last two equations of (100) and the first two equations of (16) we automatically obtains \[a_{4,5,u}=\frac{t_{f}e^{i\chi_{4,u}}}{\sqrt{\epsilon_{6}-h}},\;a_{4,5,d}= \frac{t_{f}e^{i\chi_{4,d}}}{\sqrt{\epsilon_{6}+h}},\;a_{4,6,u}=e^{i\chi_{4,u}} \sqrt{\epsilon_{6}-h},\;a_{4,6,d}=e^{i\chi_{4,d}}\sqrt{\epsilon_{6}+h}, \tag{103}\] where \(\chi_{4,\nu}\), \(\nu=u,d\) are arbitrary phases. So the \(z=4\) block operator parameters have been deduced. One what will follows one concentrates on the \(z=1,2,3\) block operators. We present in details the \(\nu=u\) case, and underline that the deduction of the parameters in the \(\nu=d\) case is done perfectly similarly. One starts with the second and fourth equality from the bottom of (14) obtaining \[a_{2,5,u}=-\frac{a_{1,2,u}^{*}a_{1,5,u}}{a_{2,2,u}^{*}}=-\frac{1 }{a_{2,2,u}^{*}}\frac{t^{2}e^{i(\phi_{2}+\phi_{3})}}{|a_{1,1,u}|^{2}},\] \[a_{2,5,u}^{*}=-\frac{a_{3,5,u}^{*}a_{3,3,u}}{a_{2,3,u}}=-\frac{1 }{a_{2,3,u}}\frac{t^{2}e^{i(\phi_{2}+\phi_{3})}}{|a_{3,4,u}|^{2}}=-\frac{a_{2, 2,u}^{*}}{t_{h}e^{-i\phi_{1}}}\frac{t^{2}e^{i(\phi_{2}+\phi_{3})}}{|a_{3,4,u} |^{2}},\] \[|a_{2,5,u}|^{2}=\frac{t^{4}e^{i(\phi_{1}+2\phi_{2}+2\phi_{3})}}{t _{h}|a_{1,1,u}|^{2}|a_{3,4,u}|^{2}}=\frac{t^{4}}{T_{h}|a_{1,1,u}|^{2}|a_{3,4,u }|^{2}},\,T_{h}=\frac{|a_{2,2,u}|^{2}|a_{1,1,u}|^{2}}{|a_{3,4,u}|^{2}}. \tag{104}\] where \(T_{h}=t_{h}e^{-i\phi},\phi=\phi_{1}+2\phi_{2}+2\phi_{3}>0\) must hold. Here, in the first two rows we have used (103) as well, while in the last row, we multiplied the first two rows. I also mention, that from the second equality of the first row from (104) one has \(|a_{2,5,u}|^{2}=t^{4}/(|a_{2,2,u}|^{2}|a_{1,1,u}|^{4})\), from where the last equality of (104) follows. Now we use the 5th and 7th equation from (16), obtaining (in the last step, the last equality of (104) is used) \[\epsilon_{2}-h=|a_{1,2,u}|^{2}+|a_{2,2,u}|^{2}=\frac{t^{2}}{|a_{1,1,u}|^{2}}+|a_{2,2,u}|^{2},\] \[\epsilon_{3}-h=|a_{2,3,u}|^{2}+|a_{3,3,u}|^{2}=\frac{t_{h}^{2}}{ |a_{2,2,u}|^{2}}+\frac{t^{2}}{|a_{3,4,u}|^{2}}=\frac{t_{h}^{2}}{|a_{2,2,u}|^{ 2}}+\frac{t^{2}T_{h}}{|a_{2,2,u}|^{2}|a_{1,1,u}|^{2}}. 
\tag{105}\] From (C5) now \(a_{1,1,u},a_{2,2,u}\) can be expressed \[a_{1,1,u}=\sqrt{\frac{t^{2}(\epsilon_{3}-h+T_{h})}{(\epsilon_{2}-h)(\epsilon_{3}-h )-t_{h}^{2})}}e^{i\chi_{1,u}},\quad a_{2,2,u}=\sqrt{\frac{T_{h}(\epsilon_{2}-h )+t_{h}^{2}}{\epsilon_{3}-h+T_{h}}}e^{i\chi_{2,u}}.\] (C6) Based on the last two equations of (C4) now \(a_{3,4,u}\) (from the \(T_{h}\) expression), then \(a_{2,5,u}\) (from the first line of (C4)) can be expressed \[a_{3,4,u}=\sqrt{\frac{t^{2}[T_{h}(\epsilon_{2}-h)+t_{h}^{2}]}{T_{h}[(\epsilon_ {2}-h)(\epsilon_{3}-h)-t_{h}^{2}]}}e^{i\chi_{3,u}},\;a_{2,5,u}=-e^{i(\phi_{2}+ \phi_{3})}\frac{e^{i\chi_{2,u}[(\epsilon_{2}-h)(\epsilon_{3}-h)-t_{h}^{2}]}}{ \sqrt{T_{h}(\epsilon_{2}-h)+t_{h}^{2}}\sqrt{\epsilon_{3}-h+T_{h}}}.\] (C7) At this moment \(a_{1,2,u},a_{1,5,u},a_{2,3,u},a_{3,4,u},a_{3,5,u}\) can be deduced from (C2), while \(b_{1,1},b_{3,4}\) are directly obtained from (C1). Starting from the 6th and 8th equalities of (16),and the first and third equality from the bottom of (14) a similar deduction provides the coefficients with index d, for z=1,2,3. In the case of the block eoperator coefficients connected to the bond 4-7, namely the z=5 block operators, the procedure is more complicated, and different. The used equations in this case are the last six equalities from (16), the last two equalities of (15) and the two equations containing \(t_{c}\) from (14). These equations can be written as \[I_{1,u}=|a_{5,7,u}|^{2}+|b^{\prime}_{5,7}|^{2},\quad I_{1,d}=|a_ {5,7,d}|^{2}+|b_{5,7}|^{2},\] \[I_{2,u}=|a_{5,4,u}|^{2}+|b^{\prime}_{5,4}|^{2},\quad I_{2,d}=|a_ {5,4,d}|^{2}+|b_{5,4}|^{2},\] \[V_{1}=a^{*}_{5,7,u}b_{5,7}+b^{\prime*}_{5,7}a_{5,7,d},\quad V_{2 }=a^{*}_{5,4,u}b_{5,4}+b^{\prime*}_{5,4}a_{5,4,d},\] \[\lambda_{c}=b^{\prime*}_{5,7}a_{5,4,d}+a^{*}_{5,7,u}b_{5,4},\quad -\lambda_{c}=b^{*}_{5,7}a_{5,4,u}+a^{*}_{5,7,d}b^{\prime}_{5,4},\] \[t_{c}=a^{*}_{5,7,u}a_{5,4,u}+b^{\prime*}_{5,7}b^{\prime}_{5,4}, \quad t_{c}=a^{*}_{5,7,d}a_{5,4,d}+b^{*}_{5,7}b_{5,4},\] (C8) where \(I_{1,u},I_{2,u},I_{1,d},I_{2,d},V_{1},V_{2}\) have been defined in (21), and based on the results presented above (i.e. (C1)- (C7)), are known quantities. I exemplify the deduction procedure leading to the \(W_{1}\) expression from (22) and the first equality from (23), namely \(b^{\prime}_{5,7}=W_{1}a_{5,4,d}\). One starts from the third and fourth row of (C8) which provide \[b_{5,4}=\frac{V_{2}-b^{\prime*}_{5,4}a_{5,4,u}}{a^{*}_{5,4,u}}= \frac{\lambda_{c}-b^{\prime*}_{5,7}a_{5,4,d}}{a^{*}_{5,7,u}}\] \[b_{5,7}=\frac{V_{1}-b^{\prime*}_{5,7}a_{5,7,d}}{a^{*}_{5,7,u}}= \frac{-\lambda_{c}-b^{\prime*}_{5,4}a_{5,7,d}}{a^{*}_{5,4,u}}\] (C9) from where one finds \[\frac{V_{2}}{a^{*}_{5,4,u}}-b^{\prime*}_{5,4}\frac{a_{5,4,d}}{a^{*}_{5,4,u}}= \frac{\lambda_{c}}{a^{*}_{5,7,u}}-b^{\prime*}_{5,7}\frac{a_{5,4,d}}{a^{*}_{5,7,u}},\] \[\frac{V_{1}}{a_{5,7,u}^{*}}-b_{5,7}^{\prime*}\frac{a_{5,7,d}}{a_{5,7,u}^{*}}=- \frac{\lambda_{c}}{a_{5,4,u}^{*}}-b_{5,4}^{\prime*}\frac{a_{5,7,d}}{a_{5,4,u}^{*}}, \tag{105}\] which gives \[b_{5,4}^{\prime*}\frac{1}{a_{5,4,u}^{*}}-b_{5,7}^{\prime*}\frac{1 }{a_{5,7,u}^{*}}=\frac{V_{2}}{a_{5,4,u}^{*}a_{5,4,d}}-\frac{\lambda_{c}}{a_{5, 7,u}^{*}a_{5,4,d}},\] \[b_{5,4}^{\prime*}\frac{1}{a_{5,4,u}^{*}}-b_{5,7}^{\prime*}\frac{1 }{a_{5,7,u}^{*}}=-\frac{V_{1}}{a_{5,7,u}^{*}a_{5,7,d}}-\frac{\lambda_{c}}{a_{5,4,u}^{*}a_{5,7,d}}. 
\tag{106}\] Since both left sides are the same, the two right sides must be equal, so one has \[V_{2}+V_{1}yz=\lambda_{c}(z-y),\quad z=\frac{a_{5,4,u}^{*}}{a_{5,7,u}^{*}},\, y=\frac{a_{5,4,d}}{a_{5,7,d}}. \tag{107}\] Then one uses further only the first equality of (106) which provides the first row below \[b_{5,4}^{\prime*} =b_{5,7}^{\prime*}\frac{a_{5,4,u}^{*}}{a_{5,7,u}^{*}}+\frac{V_{2} }{a_{5,4,d}}-\frac{\lambda_{c}}{a_{5,4,d}}\frac{a_{5,4,u}^{*}}{a_{5,7,u}^{*}}\] \[b_{5,4}^{\prime*} =\frac{t_{c}-a_{5,7,u}a_{5,4,u}^{*}}{b_{5,7}^{\prime}}=\frac{t_{ c}-z|a_{5,7,u}|^{2}}{b_{5,7}^{\prime}} \tag{108}\] where the second line is taken from the first \(t_{c}\) relation of the last row of Eqs.(102). From the equality of the two terms from (108), one finds \[t_{c}-z|a_{5,7,u}|^{2}=|b_{5,7}^{\prime}|^{2}z+\frac{b_{5,7}^{\prime}}{a_{5,4, d}}(V_{2}-z\lambda_{c}) \tag{109}\] At first view it seems that (109) provides a quadratic equation for \(b_{5,7}^{\prime}\), but is not so, since from the first line of (102) one has \(|a_{5,7,u}|^{2}=I_{1,u}-|b_{5,7}^{\prime}|^{2}\), hence \(|b_{5,7}^{\prime}|^{2}\) cancels out from (109), and we find \[b_{5,7}^{\prime}=W_{1}a_{5,4,d},\quad W_{1}=\frac{t_{c}-zI_{1,u}}{V_{2}-z \lambda_{c}}. \tag{110}\] The other relations from (22,23)can be deduced in similar manner. Indeed, expressing not \(b_{5,4},b_{5,7}\) but \(b_{5,4}^{\prime},b_{5,7}^{\prime}\) from (103), and using again the first \(t_{c}\) expression from the last row of (102), we find \(b_{5,4}^{\prime}=W_{2}a_{5,4,d}\), and similarly, using the second \(t_{c}\) expression from the last row of (102) we obtain \(b_{5,7}^{*}=W_{3}a_{5,4,u}^{*}\) and \(b_{5,4}^{*}=W_{4}a_{5,4,u}^{*}\). Now I explain how (24) is obtained, by exemplifying the deduction procedure of the first equality of (24): Deducing \(b_{5,7}^{\prime}=W_{1}a_{5,4,d}\) and \(b_{5,4}^{\prime}=W_{2}a_{5,4,d}\), one expresses \(b_{5,4}^{\prime}/b_{5,7}^{\prime}=W_{2}/W_{1}\). Now it must be taken into account that from the third line of (102) we also have a such ratio, namely \(b_{5,4}^{\prime*}/b_{5,7}^{\prime*}=(a_{5,7,d}/a_{5,4,d})(V_{2}-a_{5,4,u}^{*} b_{5,4})/(V_{1}-a_{5,7,u}^{*}b_{5,7})\). Equating these two ratios one obtain \[(\frac{W_{2}}{W_{1}})^{*}=\frac{V_{2}-a_{5,4,u}^{*}b_{5,4}}{V_{1}-a_{5,7,u}^{*}b_{5,7}}\frac{a_{5,7,d}}{a_{5,4,d}})=\frac{1}{y}\frac{V_{2}-|a_{5,4,u}|^ {2}(b_{5,4}/a_{5,4,u})}{V_{1}-a_{5,7,u}^{*}a_{5,4,u}(b_{5,7}/a_{5,4,u})}\] \[=\frac{1}{y}\frac{V_{2}-|a_{5,4,u}|^{2}W_{4}^{*}}{V_{1}-(1/z)|a_{5, 4,u}|^{2}W_{3}^{*}}, \tag{101}\] from where, taking into account that \(V_{1},V_{2}\) are real, one finds \[\frac{W_{2}}{W_{1}}=\frac{1}{y^{*}}\frac{V_{2}-|a_{5,4,u}|^{2}W_{4}}{[V_{1}-(1 /z^{*})|a_{5,4,u}|^{2}W_{3}]}, \tag{102}\] Taking into account that \(y,z\) have the same phase factor, one obtains from (102) the first equality of (24). By starting from the \(b_{5,4}/b_{5,7}\) ratio, the second equality of (24) can be similarly deduced. ## Appendix D Deduction of \(\hat{X}_{i}^{\dagger}\) at \(\mathbf{B}=0\) Here we solve the system of equations Eq.(28). The last four equations, as explained, provide \(x_{1,\uparrow}^{*}=x_{7^{\prime},\uparrow}^{*}=y_{1,\downarrow}^{*}=y_{7^{ \prime},\downarrow}^{*}=0\), hence one remains with 20 unknown parameters and 18 equations. 
One starts with the equations containing two terms, namely, the 7th,8th equalities and 17th,18th equalities which provide \[x_{6,\uparrow}^{*}=-\frac{t_{f}}{\epsilon_{4}}x_{5,\uparrow}^{*},\;x_{6^{ \prime},\uparrow}^{*}=-\frac{t_{f}}{\epsilon_{4}}x_{5^{\prime},\uparrow}^{*}, \;y_{6,\downarrow}^{*}=-\frac{t_{f}}{\epsilon_{4}}y_{5,\downarrow}^{*},\;y_{6^ {\prime},\uparrow}^{*}=-\frac{t_{f}}{\epsilon_{4}}y_{5^{\prime},\uparrow}^{*}. \tag{103}\] Now from the first, second and 15th,16th equalities one finds \[x_{5,\uparrow}^{*}=-x_{2,\uparrow}^{*},\;x_{5^{\prime},\uparrow}^{*}=-x_{3^{ \prime},\uparrow}^{*},\;y_{5,\downarrow}^{*}=-y_{2,\downarrow}^{*},\;y_{5^{ \prime},\uparrow}^{*}=-y_{3^{\prime},\downarrow}^{*}. \tag{104}\] After this step, introducing the notation \(A=(\epsilon_{2}^{2}-t_{h}^{2})/[t_{h}(t^{2}+\lambda^{2})]\), from the fifth and sixth equations we obtain \[x_{4,\uparrow}^{*}=A(tx_{2,\uparrow}^{*}-\lambda y_{2,\downarrow}^{*}),\;y_{4,\downarrow}^{*}=A(\lambda x_{2,\uparrow}^{*}+ty_{2,\downarrow}^{*}), \tag{105}\] and similarly, from the 11th and 12th equations one finds \[x_{4^{\prime},\uparrow}^{*}=A(tx_{3^{\prime},\uparrow}^{*}-\lambda y_{3^{ \prime},\downarrow}^{*}),\;y_{4^{\prime},\downarrow}^{*}=A(\lambda x_{3^{ \prime},\uparrow}^{*}+ty_{3^{\prime},\downarrow}^{*}). \tag{106}\] At this moment the relations presented in (29) have been obtained. The remaining equations are the 9th and 10th equalities, which become of the form \[x^{*}_{4,\uparrow}a_{5,4}+x^{*}_{4^{\prime},\uparrow}a_{5,7}+y^{* }_{4,\downarrow}b_{5,4}+y^{*}_{4^{\prime},\downarrow}b_{5,7}=0,\] \[-x^{*}_{4,\uparrow}b_{5,4}-x^{*}_{4^{\prime},\uparrow}b_{5,7}+y^ {*}_{4,\downarrow}a_{5,4}+y^{*}_{4^{\prime},\downarrow}a_{5,7}=0, \tag{101}\] where introducing (100,101), one obtains \[\alpha x^{*}_{2,\uparrow}+\gamma y^{*}_{2,\downarrow}=X=-\beta x ^{*}_{3^{\prime},\uparrow}-\delta y^{*}_{3^{\prime},\downarrow},\] \[-\gamma x^{*}_{2,\uparrow}+\alpha y^{*}_{2,\downarrow}=Y=\delta x ^{*}_{3^{\prime},\uparrow}-\beta y^{*}_{3^{\prime},\downarrow}, \tag{102}\] where \(\alpha,\beta,\gamma,\delta\) are defined below (30). From (102) one finds \[x^{*}_{2,\uparrow}=\frac{\alpha X-\gamma Y}{\alpha^{2}+\gamma^{2}},\quad y^{ *}_{2,\downarrow}=\frac{\gamma X+\alpha Y}{\alpha^{2}+\gamma^{2}}. \tag{103}\] Eq.(103) is presented in (30). ## Appendix E Deduction of \(\hat{X}_{i}^{\dagger}\) at \(\mathbf{B}\neq 0\) We solve now the system of equations Eq.(28) at \(\mathbf{B}\neq 0\) in conditions in which Eq.(32) is satisfied. From the 7th,8th equalities and 17th,18th equalities of (28) on finds \[x^{*}_{6,\uparrow}=-\frac{a_{4,5,u}}{a_{4,6,u}}x^{*}_{5,\uparrow},\;y^{*}_{6, \downarrow}=-\frac{a_{4,5,d}}{a_{4,6,d}}y^{*}_{5,\downarrow},\;x^{*}_{6^{ \prime},\uparrow}=-\frac{a_{4,5,u}}{a_{4,6,u}}x^{*}_{5^{\prime},\uparrow},\;y ^{*}_{6^{\prime},\downarrow}=-\frac{a_{4,5,d}}{a_{4,6,d}}y^{*}_{5^{\prime}, \downarrow}, \tag{104}\] while from the first, second and 15th,16th equalities one obtains \[x^{*}_{5,\uparrow}=-\frac{a_{1,2,u}}{a_{1,5,u}}x^{*}_{2,\uparrow},\;y^{*}_{5, \downarrow}=-\frac{a_{1,2,d}}{a_{1,5,d}}y^{*}_{2,\downarrow},\;x^{*}_{5^{ \prime},\uparrow}=-\frac{a_{3,3,u}}{a_{3,5,u}}x^{*}_{3^{\prime},\uparrow},\;y ^{*}_{5^{\prime},\downarrow}=-\frac{a_{3,3,d}}{a_{3,5,d}}y^{*}_{5^{\prime}, \downarrow}. 
\tag{105}\] Using these results in the third and fourth equations, and in the 13th and 14th equations of (28) one finds \[x^{*}_{3,\uparrow}=A_{3,\uparrow}x^{*}_{2,\uparrow},\;y^{*}_{3, \downarrow}=B_{3,\downarrow}y^{*}_{2,\downarrow},\;x^{*}_{2^{\prime},\uparrow }=A_{2^{\prime},\uparrow}x^{*}_{3^{\prime},\uparrow},\;y^{*}_{2^{\prime}, \downarrow}=B_{2^{\prime},\downarrow}y^{*}_{3^{\prime},\downarrow},\] \[A_{3,\uparrow}=\frac{1}{a_{2,3,u}}(\frac{a_{1,2,u}a_{2,5,u}}{a_{ 1,5,u}}-a_{2,2,u})=-e^{i\phi_{1}}\frac{\epsilon_{2}-h}{t_{h}}, \tag{106}\] \[B_{3,\downarrow}=\frac{1}{a_{2,3,d}}(\frac{a_{1,2,d}a_{2,5,d}}{a _{1,5,d}}-a_{2,2,d})=-e^{i\phi_{1}}\frac{\epsilon_{2}+h}{t_{h}},\] \[A_{2^{\prime},\uparrow}=\frac{1}{a_{2,2,u}}(\frac{a_{3,3,u}a_{2,5, u}}{a_{3,5,u}}-a_{2,3,u})=-e^{i(2\phi_{2}+2\phi_{3})}\frac{(\epsilon_{2}-h)( \epsilon_{3}-h)-t_{h}^{2}+T_{h}(T_{h}+\epsilon_{3}-h)}{T_{h}(\epsilon_{3}-h )+t_{h}^{2}},\] \[B_{2^{\prime},\downarrow}=\frac{1}{a_{2,2,d}}(\frac{a_{3,3,d}a_{ 2,5,d}}{a_{3,5,d}}-a_{2,3,d})=-e^{i(2\phi_{2}+2\phi_{3})}\frac{(\epsilon_{2}+h )(\epsilon_{3}+h)-t_{h}^{2}+T_{h}(T_{h}+\epsilon_{3}+h)}{T_{h}(\epsilon_{3}+h )+t_{h}^{2}}.\] Using the obtained results in the fifth and sixth equations, and in the 11th and 12th equations of (28) one obtains \[x^{*}_{4,\uparrow}=\frac{\bar{Y}b_{3,4}y^{*}_{2,\downarrow}-\bar{X }a_{3,4,d}x^{*}_{2,\uparrow}}{b_{3,4}b^{\prime}_{3,4}-a_{3,4,d}a_{3,4,u}},\;y^{* }_{4,\downarrow}=\frac{\bar{X}b^{\prime}_{3,4}x^{*}_{2,\uparrow}-\bar{Y}a_{3,4,u}y^{*}_{2,\downarrow}}{b_{3,4}b^{\prime}_{3,4}-a_{3,4,d}a_{3,4,u}},\] \[x^{*}_{4^{\prime},\uparrow}=\frac{\bar{V}b_{1,1}y^{*}_{3^{\prime},\downarrow}-\bar{Z}a_{1,1,d}x^{*}_{3^{\prime},\uparrow}}{b_{1,1}b^{\prime}_{ 1,1}-a_{1,1,d}a_{1,1,u}},\;y^{*}_{4^{\prime},\downarrow}=\frac{\bar{Z}b^{ \prime}_{1,1}x^{*}_{3^{\prime},\uparrow}-\bar{V}a_{1,1,u}y^{*}_{3^{\prime}, \downarrow}}{b_{1,1}b^{\prime}_{1,1}-a_{1,1,d}a_{1,1,u}}, \tag{100}\] where \[\bar{X}=\frac{a_{1,2,u}a_{3,5,u}}{a_{1,5,u}}-a_{3,3,u}A_{3,\uparrow }=a_{3,3,u}e^{-i(2\phi_{2}+2\phi_{3})}[1+\frac{\epsilon_{2}-h}{T_{h}}],\] \[\bar{Y}=\frac{a_{1,2,d}a_{3,5,d}}{a_{1,5,d}}-a_{3,3,d}B_{3, \downarrow}=a_{3,3,d}e^{-i(2\phi_{2}+2\phi_{3})}[1+\frac{\epsilon_{2}+h}{T_{h }}],\] \[\bar{Z}=\frac{a_{1,5,u}a_{3,3,u}}{a_{3,5,u}}-a_{1,2,u}A_{2^{\prime},\uparrow}=a_{1,2,u}e^{i(2\phi_{2}+2\phi_{3})}\frac{T_{h}^{2}+(\epsilon_{2}-h )(\epsilon_{3}-h)+2T_{h}(\epsilon_{3}-h)}{T_{h}(\epsilon_{3}-h)+t_{h}^{2}},\] \[\bar{V}=\frac{a_{1,5,d}a_{3,3,d}}{a_{3,5,d}}-a_{1,2,d}B_{2^{\prime },\downarrow}=a_{1,2,d}e^{i(2\phi_{2}+2\phi_{3})}\frac{T_{h}^{2}+(\epsilon_{2} +h)(\epsilon_{3}+h)+2T_{h}(\epsilon_{3}+h)}{T_{h}(\epsilon_{3}+h)+t_{h}^{2}}. \tag{101}\] From symmetry considerations taking \(\epsilon_{2}=\epsilon_{3}\), from the deduced result, excepting the last row, we obtain the results presented in (33). In order to obtain the last row of (33), one uses the 9th and 10th equalities of (28) obtaining \[\alpha^{\prime}x^{*}_{2,\uparrow}+\beta^{\prime}y^{*}_{2, \downarrow}=X^{\prime},\] \[\gamma^{\prime}x^{*}_{2,\uparrow}+\delta^{\prime}y^{*}_{2, \downarrow}=Y^{\prime}, \tag{102}\] where the prefactors, and the right side terms are presented in (35). From (102) one obtains the first line of (35), hence all coefficients of \(\hat{X}^{\dagger}_{i}\) have been deduced. ## Appendix F Other specific solutions for \(\hat{X}^{\dagger}_{i}\). ### Solution when (32) is not satisfied. 
The present case is deduced at \({\bf B}\neq 0\), considering \[a_{5,7,d}a_{5,7,u}=b_{5,7}b^{\prime}_{5,7},\quad a_{5,4,d}a_{5,4,u}=b_{5,4}b^ {\prime}_{5,4}. \tag{103}\] Following the notations from (22-24), the equalities (103) are satisfied at \[W_{2}W_{4}=1,\quad W_{1}W_{3}=\frac{1}{yz}. \tag{104}\] The deduction technique at start follows the steps presented in Appendix E up to Eq.(110), and provides \[x^{*}_{1,\uparrow}=-\frac{b_{5,7}}{a_{5,7,u}}y^{*}_{1,\downarrow}, \;x^{*}_{7^{\prime},\uparrow}=-\frac{b_{5,4}}{a_{5,4,u}}y^{*}_{7^{\prime}, \downarrow},\;x^{*}_{6,\uparrow}=-\frac{a_{4,5,u}}{a_{5,6,u}}\frac{(M_{1, \uparrow}-a_{1,2,u}x^{*}_{2,\uparrow})}{a_{1,5,u}},\;y^{*}_{6,\downarrow}=- \frac{a_{4,5,d}}{a_{5,6,d}}\frac{(M_{1,\downarrow}-a_{1,2,d}y^{*}_{2,\downarrow })}{a_{1,5,d}},\] \[x^{*}_{6^{\prime},\uparrow}=-\frac{a_{4,5,u}}{a_{4,6,u}}\frac{(M _{2,\uparrow}-a_{3,3u}x^{*}_{3^{\prime},\uparrow})}{a_{3,5,u}},\;y^{*}_{6^{ \prime},\downarrow}=-\frac{a_{4,5,d}}{a_{5,6,d}}\frac{(M_{2,\downarrow}-a_{3,3,d}y^{*}_{3^{\prime},\downarrow})}{a_{3,5,d}},\;x^{*}_{5,\uparrow}=\frac{(M_{ 1,\uparrow}-a_{1,2,u}x^{*}_{2,\uparrow})}{a_{1,5,u}},\] \[y^{*}_{5,\downarrow}=\frac{(M_{1,\downarrow}-a_{1,2,d}y^{*}_{2, \downarrow})}{a_{1,5,d}},\;x^{*}_{5^{\prime},\uparrow}=\frac{(M_{2,\uparrow} -a_{3,3,u}x^{*}_{3^{\prime},\uparrow})}{a_{3,5,u}},\;y^{*}_{5^{\prime}, \downarrow}=\frac{(M_{2,\downarrow}-a_{3,3,d}y^{*}_{3^{\prime},\downarrow})}{ a_{3,5,d}},\] \[x^{*}_{3,\uparrow}=A^{0}_{3,\uparrow}+A_{3,\uparrow}x^{*}_{2, \uparrow},\;y^{*}_{3,\downarrow}=B^{0}_{3,\downarrow}+B_{3,\downarrow}y^{*}_{2, \downarrow},\;x^{*}_{2^{\prime},\uparrow}=A^{0}_{2^{\prime},\uparrow}+A_{2^{ \prime},\uparrow}x^{*}_{3^{\prime},\uparrow},\;y^{*}_{2^{\prime},\downarrow}= B^{0}_{2^{\prime},\downarrow}+B_{2^{\prime},\uparrow}y^{*}_{3^{\prime},\downarrow}, \tag{111}\] where \(A_{3,\uparrow},B_{3,\downarrow},A_{2^{\prime},\uparrow},B_{2^{\prime},\downarrow}\) are given in (111), \(y^{*}_{1,\downarrow},y^{*}_{7^{\prime},\downarrow}\) are arbitrary, and \[M_{1,\uparrow}=-a_{1,1,u}x^{*}_{1,\uparrow}-b_{1,1}y^{*}_{1, \downarrow},\;M_{1,\downarrow}=-a_{1,1,d}y^{*}_{1,\downarrow}-b^{\prime}_{1,1} x^{*}_{1,\uparrow},\;M_{2,\uparrow}=-a_{3,4,u}x^{*}_{7^{\prime},\uparrow}-b_{3,4}y^{*}_{7^{ \prime},\downarrow},\] \[M_{2,\downarrow}=-b^{\prime}_{3,4}x^{*}_{7^{\prime},\uparrow}-a _{3,4,d}y^{*}_{7^{\prime},\downarrow},\;A^{0}_{3,\uparrow}=-\frac{a_{2,5,u}M_ {1,\uparrow}}{a_{2,3,u}a_{1,5,u}},\;B^{0}_{3,\downarrow}=-\frac{a_{2,5,d}M_{1, \downarrow}}{a_{2,3,d}a_{1,5,d}},\] \[A^{0}_{2^{\prime},\uparrow}=-\frac{a_{2,5,u}M_{2,\uparrow}}{a_ {2,2,u}a_{3,5,u}},\;B^{0}_{2^{\prime},\downarrow}=-\frac{a_{2,5,d}M_{2, \downarrow}}{a_{2,2,d}a_{3,5,d}}. 
\tag{112}\] Furthermore, one obtains \[x^{*}_{4,\uparrow}=\frac{(\bar{Y}_{0}b_{3,4}-\bar{X}_{0}a_{3,4,d})+(\bar{Y}b_{3,4}y^{*}_{2,\downarrow}-\bar{X}a_{3,4,d}x^{*}_{2,\uparrow})} {b^{\prime}_{3,4}b_{3,4}-a_{3,4,d}a_{3,4,u}},\] \[y^{*}_{4,\downarrow}=\frac{(\bar{X}_{0}b^{\prime}_{3,4}-\bar{Y} _{0}a_{3,4,u})+(\bar{X}b^{\prime}_{3,4}x^{*}_{2,\uparrow}-\bar{Y}a_{3,4,u}y^{* }_{2,\downarrow})}{b^{\prime}_{3,4}b_{3,4}-a_{3,4,d}a_{3,4,u}},\] \[x^{*}_{4^{\prime},\uparrow}=\frac{(\bar{V}_{0}b_{1,1}-\bar{Z}_{0 }a_{1,1,d})+(\bar{V}b_{1,1}y^{*}_{3^{\prime},\downarrow}-\bar{Z}a_{1,1,d}x^{*} _{3^{\prime},\uparrow})}{b^{\prime}_{1,1}b_{1,1}-a_{1,1,d}a_{1,1,u}},\] \[y^{*}_{4^{\prime},\downarrow}=\frac{(\bar{Z}_{0}b^{\prime}_{1,1} -\bar{V}_{0}a_{1,1,u})+(\bar{Z}b^{\prime}_{1,1}x^{*}_{3^{\prime},\uparrow}- \bar{V}a_{1,1,u}y^{*}_{3^{\prime},\downarrow})}{b^{\prime}_{1,1}b_{1,1}-a_{1,1,d}a_{1,1,u}}, \tag{113}\] where \(\bar{X},\bar{Y},\bar{Z},\bar{V}\) are given in (111), while the remaining parameters are defined as \[\bar{X}_{0}=-M_{1,\uparrow}(\frac{a_{3,5,u}}{a_{1,5,u}}-\frac{a_ {3,3,u}a_{2,5,u}}{a_{2,3,u}a_{1,5,u}},\;\bar{V}_{0}=-M_{1,\downarrow}(\frac{a_ {3,5,d}}{a_{1,5,d}}-\frac{a_{3,3,d}a_{2,5,d}}{a_{2,3,d}a_{1,5,d}},\] \[\bar{Z}_{0}=-M_{2,\uparrow}(\frac{a_{1,5,u}}{a_{3,5,u}}-\frac{a_ {1,2,u}a_{2,5,u}}{a_{2,2,u}a_{3,5,u}},\;\bar{V}_{0}=-M_{2,\downarrow}(\frac{a_ {1,5,d}}{a_{3,5,d}}-\frac{a_{1,2,d}a_{2,5,d}}{a_{2,2,d}a_{3,5,d}}. \tag{114}\] Starting from this point, because of (111), the deduction procedure strongly differs from that applied in Appendix E. The remaining equations from (28) are \[x^{*}_{4,\uparrow}a_{5,4,u}+y^{*}_{4,\downarrow}b_{5,4}+x^{*}_{4^{ \prime},\uparrow}a_{5,7,u}+y^{*}_{4^{\prime},\downarrow}b_{5,7}=0,\] \[x^{*}_{4,\uparrow}b^{\prime}_{5,4}+y^{*}_{4,\downarrow}a_{5,4,d}+x ^{*}_{4^{\prime},\uparrow}b^{\prime}_{5,7}+y^{*}_{4^{\prime},\downarrow}a_{5,7,d}=0. \tag{115}\] Using (F1), the system of equations (F7) becomes of the form \[a_{5,4,u}b^{\prime}_{5,7}(x^{*}_{4,\uparrow}b^{\prime}_{5,4}+y^{*} _{4,\downarrow}a_{5,4,d})+a_{5,7,u}b^{\prime}_{5,4}(x^{*}_{4^{\prime},\uparrow} b^{\prime}_{5,7}+y^{*}_{4^{\prime},\downarrow}a_{5,7,d})=0,\] \[(x^{*}_{4,\uparrow}b^{\prime}_{5,4}+y^{*}_{4,\downarrow}a_{5,4,d} )+(x^{*}_{4^{\prime},\uparrow}b^{\prime}_{5,7}+y^{*}_{4^{\prime},\downarrow}a_ {5,7,d})=0,\] (F8) from where it results \[(x^{*}_{4,\uparrow}b^{\prime}_{5,4}+y^{*}_{4,\downarrow}a_{5,4,d})=0,\;(x^{*} _{4^{\prime},\uparrow}b^{\prime}_{5,7}+y^{*}_{4^{\prime},\downarrow}a_{5,7,d} )=0.\] (F9) From here, using (F5) one finds \[y^{*}_{2,\downarrow}=K_{2}x^{*}_{2,\uparrow}+C_{2},\;y^{*}_{3^ {\prime},\downarrow}=K_{3}x^{*}_{3^{\prime},\uparrow}+C_{3},\;x^{*}_{2, \uparrow}=p,\;x^{*}_{3^{\prime},\uparrow}=q,\] \[K_{2}=\frac{\bar{X}(b^{\prime}_{5,4}a_{3,4,d}-b^{\prime}_{3,4}a_ {5,4,d})}{\bar{Y}(b^{\prime}_{5,4}b_{3,4}-a_{3,4,u}a_{5,4,d})},\;C_{2}=\frac{b^ {\prime}_{5,4}(\bar{X}_{0}a_{3,4,d}-\bar{Y}_{0}b_{3,4})-a_{5,4,d}(\bar{X}_{0}b ^{\prime}_{3,4}-\bar{Y}_{0}a_{3,4,u})}{\bar{Y}(b^{\prime}_{5,4}b_{3,4}-a_{3,4,u}a_{5,4,d})},\] \[K_{3}=\frac{\bar{Z}(b^{\prime}_{5,7}a_{1,1,d}-b^{\prime}_{1,1}a_ {5,7,d})}{\bar{V}(b^{\prime}_{5,7}b_{1,1}-a_{1,1,u}a_{5,7,d})},\;C_{3}=\frac{b^ {\prime}_{5,7}(\bar{Z}_{0}a_{1,1,d}-\bar{Y}_{0}b_{1,1})-a_{5,7,d}(\bar{Z}_{0} b^{\prime}_{1,1}-\bar{V}_{0}a_{1,1,u})}{\bar{V}(b^{\prime}_{5,7}b_{1,1}-a_{1,1,u}a_ {5,7,d})},\] (F10) where \(p\), and \(q\) are arbitrary. ### Solution on restricted number of sites. 
In this subsection one presents a specif ic solution which appears on the sites 2,3,5,6 inside a single cell (see Fig.2). The \(\hat{X}^{\dagger}_{i}\) operator has in the present case the expression \[\hat{X}^{\dagger}_{i,\sigma}=x^{*}_{2,\sigma}\hat{c}^{\dagger}_{i,2,\sigma}+x ^{*}_{3,\sigma}\hat{c}^{\dagger}_{i,3,\sigma}+x^{*}_{5,\sigma}\hat{c}^{ \dagger}_{i,5,\sigma}+x^{*}_{6,\sigma}\hat{c}^{\dagger}_{i,6,\sigma},\] (F11) where \(\sigma\) is fixed. The equations (25) becomes in this case of the form (for exemplification, one takes \(\sigma=\uparrow\)) \[x^{*}_{2,\uparrow}a_{1,2,u}+x^{*}_{5,\uparrow}a_{1,5,u}=0,\] \[x^{*}_{2,\uparrow}a_{2,2,u}+x^{*}_{3,\uparrow}a_{2,3,u}+x^{*}_{5,\uparrow}a_{2,5,u}=0,\] \[x^{*}_{3,\uparrow}a_{3,3,u}+x^{*}_{5,\uparrow}a_{3,5,u}=0,\] \[x^{*}_{5,\uparrow}a_{4,5,u}+x^{*}_{6,\uparrow}a_{4,6,u}=0.\] (F12) The solution of the matching equation (F12) becomes of the form \[x^{*}_{2,\uparrow}=-\frac{a_{1,5,u}}{a_{1,2,u}}x^{*}_{5,\uparrow},\;x^{*}_{3, \uparrow}=-\frac{a_{3,5,u}}{a_{3,3,u}}x^{*}_{5,\uparrow},\;x^{*}_{6,\uparrow} =-\frac{a_{4,5,u}}{a_{4,6,u}}x^{*}_{5,\uparrow},\] (F13) where \(x^{\star}_{5,\uparrow}=p\) is an arbitrary parameter, but the condition \[a_{2,5,u}=a_{2,2,u}\frac{a_{1,5,u}}{a_{1,2,u}}+a_{2,3,u}\frac{a_{3,5,u}}{a_{3,3,u}} \tag{109}\] must be satisfied. In terms of the Hamiltonian parameters this condition means \(t_{h}=-\epsilon_{2}\) at \(\mathbf{B}=0\), and \(T_{h}=-\epsilon_{2}+h\) at \(\mathbf{B}\neq 0\), (\(\epsilon_{3}=\epsilon_{2}\) is considered from symmetry considerations). Similar solution holds for \(\sigma=\downarrow\) as well.
2310.11032
The Virtual Spectrum of Linkoids and Open Curves in 3-space
The entanglement of open curves in 3-space appears in many physical systems and affects their material properties and function. A new framework in knot theory was introduced recently, that enables to characterize the complexity of collections of open curves in 3-space using the theory of knotoids and linkoids, which are equivalence classes of diagrams with open arcs. In this paper, new invariants of linkoids are introduced via a surjective map between linkoids and virtual knots. This leads to a new collection of strong invariants of linkoids that are independent of any given virtual closure. This gives rise to a collection of novel measures of entanglement of open curves in 3-space, which are continuous functions of the curve coordinates and tend to their corresponding classical invariants when the endpoints of the curves tend to coincide.
Kasturi Barkataki, Louis H. Kauffman, Eleni Panagiotou
2023-10-17T07:01:30Z
http://arxiv.org/abs/2310.11032v1
# The Virtual Spectrum of Linkoids and Open Curves in 3-space ###### Abstract The entanglement of open curves in 3-space appears in many physical systems and affects their material properties and function. A new framework in knot theory was introduced recently, that enables to characterize the complexity of collections of open curves in 3-space using the theory of knotoids and linkoids, which are equivalence classes of diagrams with open arcs. In this paper, new invariants of linkoids are introduced via a surjective map between linkoids and virtual knots. This leads to a new collection of strong invariants of linkoids that are independent of any given virtual closure. This gives rise to a collection of novel measures of entanglement of open curves in 3-space, which are continuous functions of the curve coordinates and tend to their corresponding classical invariants when the endpoints of the curves tend to coincide. _Keywords_: linkoids, knotoids, virtual knots, open curves, topological invariants _MSC_: \(57M25\) ## 1 Introduction The entanglement of open curves in 3-space is a common problem of interest in many physical systems, such as polymers, textiles and crystals [1, 2, 3]. Even though the study of topological complexity of simple closed curves in space is studied thoroughly in knot theory, characterizing the complexity of open curves in 3-space has been a long-standing open problem in mathematics. A reason is that, the traditional topological invariance ideas are not applicable in the context of open curves in 3-space. Recent works have introduced a framework upon which knot theory can be extended to study open curves in 3-space without any approximation schemes [4, 5, 6]. In this paper, new measures of entanglement of collections of open curves in 3-space are introduced via a map between linkoids and virtual knots/links. A knot (or link) consists in a collection of closed curve(s) embedded in three dimensional space. Knots and links are classified, with respect to their complexity, by topological invariants, usually of the form of integer valued functions or polynomials with integer coefficients [7, 8, 9, 10, 11, 12]. A topological invariant is a function on the space of knots or links that is invariant under continuous deformations of the embeddings that do not allow self intersections. Knotoids/linkoids are open ended knot diagrams which can be classified under a diagrammatic notion of equivalence similar to that of knots, but with some restrictions regarding the isotopy moves on the endpoints of the diagram [13, 14]. As mentioned in [13], the study of knotoids is closely related to that of virtual knots. A virtual knot diagram is one with some of the crossings being of a different nature, called virtual and can be classified using a diagrammatic theory which is very similar to the handling of classical knot/link diagrams [15]. There exist many invariants of knotoids, several of them via virtual knots, such as height, odd write, the Jones polynomial, the parity bracket polynomial, the affine index polynomial and the arrow polynomial. Even though it is often assumed that invariants of linkoids would follow directly from those of knotoids, as is the case for knots and links, it has been substantially harder to define invariants of linkoids. In this paper, we give a general connection between knotoids/linkoids and virtual links. 
This leads to new invariants of linkoids via virtual closure, namely: height, genus, odd writhe, the Jones polynomial, the arrow polynomial and the affine-index polynomial. By accounting for all different virtual closures, new invariants of linkoids are defined, the virtual spectrum and the average spectral invariants, which are stronger and are independent of any given virtual closure.

Although open curves in 3-space are not knotted or linked in the topological sense, they can form complex conformations, which are called entangled. Entanglement of open curves can also be measured via topological/geometrical measures, which are either real numbers or polynomials with real coefficients, that are continuous functions of the curve coordinates and can detect knotting and threading [6, 16, 4, 5]. A projection of a collection of curves in 3-space can be a closed or open knot/link diagram. In [4, 6] it was shown that a rigorous measure of complexity of a collection of open curves in 3-space is given by taking the average of an invariant of a projection of the curves over all possible directions of projection. In this paper, the special construction of the virtual closure of open curves in 3-space is introduced, which leads to the definition of new measures of entanglement of collections of open curves in 3-space. These are continuous functions of the curve coordinates and tend to the classical topological invariant as the endpoints coincide.

The paper is organised as follows: Section 2 gives an exposition of some basic definitions and properties associated with virtual knots. Section 3 introduces the generalized virtual closure of linkoids and makes a rigorous connection between the theory of linkoids and the theory of virtual knots. Section 4 discusses the emergence of new invariants of linkoids via the virtual closure introduced in Section 3. Section 5 discusses the virtual spectrum of linkoids and new invariants of linkoids independent of any given virtual closure. Section 6 introduces the virtual closure of open curves in 3-space and new measures of entanglement of open curves in 3-space and discusses their properties. Section 7 presents the conclusions of this study.

## 2 Basics on Virtual Knots and Links

This section gives an overview of the basic definitions pertaining to virtual knots and links and their properties [15, 14, 17, 18].

**Definition 2.1**.: _(Virtual knot/link and virtual knot/link diagram) A virtual knot/link diagram consists of generic closed curves in \(\mathbb{R}^{2}\) (or \(S^{2}\)) such that each crossing is either a classical crossing with over and under arcs, or a virtual crossing without over or under information. Virtual knot/link diagrams are classified using the generalized Reidemeister moves, which include the classical Reidemeister moves and the virtual Reidemeister moves (See Figure 1); the forbidden moves (See Figure 2) are not allowed. A virtual knot/link is defined as an equivalence class of virtual knot/link diagrams under the generalized Reidemeister moves. In conjunction with the generalized Reidemeister moves, a segment of a diagram, consisting of a sequence of consecutive virtual crossings, can be excised and a new connection be made between the resulting free ends. If the new connecting segment intersects the remaining diagram (transversally), then each new intersection is taken to be virtual.
Such an excision and reconnection is called a detour move (See Figure 2)._

Figure 1: Generalised Reidemeister moves on virtual knots/links: the classical Reidemeister moves, \(R_{1}\), \(R_{2}\), \(R_{3}\); the virtual moves, \(V_{1}\), \(V_{2}\), \(V_{3}\); and the semi-virtual move \(SV\).

Given a virtual knot/link diagram, \(VL_{g}\), with \(g\) virtual crossings, there is a canonical way to associate it with a knot/link in a thickened surface: each virtual crossing in the diagram can be considered as a shorthand for a detour of one of the arcs involved in the crossing through a 1-handle that has been attached to the 2-sphere of the original diagram. The two choices (above or below) for the 1-handle detour are homeomorphic to each other (as abstract manifolds with boundary). This gives an embedding (the canonical embedding) of a collection of circles into the sphere with \(g\) handles. Any other embedding, which is stably equivalent to the canonical embedding, is considered to be a valid embedding of \(VL_{g}\). In fact, two virtual knot/link diagrams are equivalent if and only if their corresponding surface embeddings are stably equivalent [15, 17].

**Remark 2.1**.: _A stably equivalent way to realize virtual knots is to form a ribbon neighborhood surface/abstract link diagram for the given virtual knot or link diagram [19]. In such a diagram, the classical crossings are represented as diagrammatic crossings in disks, which are connected by ribbons, while the virtual crossings are represented by crossings between two ribbons that pass over/under one another without interacting. The ribbon neighborhood diagram is independent of the choice of over or under information for the ribbon strips. By gluing discs to the boundary circles of a ribbon neighborhood diagram of a virtual knot, we obtain a unique surface representation of the virtual knot (See Figure 3)._

The virtual crossing number of a virtual knot is defined as follows [18, 20]:

**Definition 2.2**.: _(Virtual crossing number, and Minimal virtual knot/link diagram) The virtual crossing number, \(\mathsf{cr}_{v}(VL)\), of a virtual knot/link \(VL\) is the minimum of the number of virtual crossings over all diagrams of \(VL\). A diagram corresponding to a virtual knot/link \(VL\) is said to be minimal if the number of virtual crossings in the diagram is equal to \(\mathsf{cr}_{v}(VL)\). The minimal diagram of a virtual knot/link is unique up to classical Reidemeister moves._

A virtual knot/link can also be studied as a knot in a thickened surface through its _minimal representation_ [21, 22], defined as follows:

**Definition 2.3**.: _(Genus of a virtual knot/link and Minimal representation of a virtual knot/link) The genus, \(g\), of a virtual knot/link is defined as the genus of its minimal embedding surface. An embedding of a virtual knot/link is said to be its minimal representation if it is an embedding in an abstract surface, \(S\), whose genus is equal to the genus of the given virtual knot. The minimal representation of a virtual knot/link is unique up to homeomorphism of the surface \(S\)._

**Remark 2.2**.: _Let \(D(V)\) be a diagram of a virtual knot, \(V\), and \(a(D(V))\) be its ribbon neighborhood diagram. The surface representation of \(V\), obtained by gluing discs to the two boundary circles in \(a(D(V))\), realizes the unique (up to homeomorphism) minimal embedding surface of \(V\)._

Figure 3: (Left) A diagram of a virtual trefoil knot, (Center) Its ribbon neighborhood representation, (Right) Its surface representation.
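As a concrete instance of Definitions 2.2 and 2.3 (a worked example added here; the values are standard facts about the virtual trefoil, not computations from this paper): for the virtual trefoil \(VT\) of Figure 3,

\[\mathsf{cr}_{v}(VT)=1,\qquad g(VT)=1,\]

since its diagram realizes one virtual crossing, no diagram realizes fewer because \(VT\) is not a classical knot, and gluing discs to its ribbon neighborhood yields a torus, so its minimal embedding surface has genus \(1\).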
Figure 2: Forbidden moves on virtual knots/links: the move involving an over-strand, FO, and the move involving an under-strand, FU; and the detour move of an arc in a virtual knot diagram.

## 3 Linkoids and Virtual Links

In this section an analogy is drawn between virtual knots/links and _virtual closures_ of knotoids/linkoids. Linkoids have been typically studied as diagrammatic objects and they can be thought of as projections of open curves in 3-space [6, 14, 13, 23, 24, 25]. Throughout this section, the symbol \(\Sigma\) is used to denote a surface on which a linkoid diagram lies. The results can be generalized for any \(\Sigma\), but in this manuscript it is assumed that \(\Sigma=S^{2}=\mathbb{R}^{2}\cup\infty\).

**Definition 3.1**.: _(Knotoid/linkoid diagram and Knotoid/linkoid) A knotoid/linkoid diagram \(L\) with \(n\in\mathbb{N}\) components in \(\Sigma\) is a generic immersion of \(\bigsqcup_{i=1}^{n}[0,1]\) in the interior of \(\Sigma\) whose only singularities are transversal double points endowed with over/undercrossing data. These double points are called the crossings of \(L\). The immersion of each \([0,1]\) is referred to as a component of the knotoid/linkoid diagram and the images of \(0\) and \(1\) under this immersion are called the foot and the head of the component, respectively. These two points are distinct from each other and from the double points; they are called the endpoints of the component. The diagram \(L\) has a total of \(2n\) endpoints. A knotoid/linkoid is an equivalence class of knotoid/linkoid diagrams up to the equivalence relation induced by the three Reidemeister moves and isotopy. It is forbidden to pull the strand adjacent to an endpoint over/under a transversal strand (See Figure 4)._

Given a linkoid \(L\) with \(n\) components, a choice of labelling can be made for the \(2n\) endpoints with numbers from the set \(\{1,2,\cdots,2n\}\), without repetition. The initial connectivity among endpoints of \(L\) allows us to define a _strand permutation_ as follows:

**Definition 3.2**.: _(Strand Permutation) Let the endpoints of a linkoid diagram, \(L\), with \(n\) components, be denoted by labels from the set \(E=\{1,2,\cdots,2n\}\), without repetition. The strand permutation of \(L\) is defined to be the element, \(\tau\in S_{2n}\), such that, for any \(i\in E\), i.e. the head/foot of a component, \(\tau(i)\in E\) is the corresponding foot/head and \(\tau(i)\neq i\). It follows that \(\tau\) is of order \(2\). In other words, \(i\) and \(\tau(i)\) are labels for the two endpoints of an open component (strand) in the linkoid._

By definition, the strand permutation for a linkoid with \(n\) components is a product of \(n\) disjoint transpositions (See Figure 5). In the following, given a permutation \(\tau\) in \(E=\{1,2,\cdots,2n\}\), the notation \(L_{\tau}\) is used to represent a labelled linkoid with \(n\) components with the head-foot labelling implied by its strand permutation, \(\tau\).

### Generalized Virtual Closure of Linkoids

In this section, the relationship between multi-component linkoids and virtual links is examined for the first time and a general framework for studying the map between linkoids and virtual knots/links is introduced. The relationship between knotoids (single component) and virtual knots has been explored before [13, 14]. It was shown that, by introducing a closure arc to a knotoid diagram, which only creates virtual crossings, one can obtain a virtual knot diagram.
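For a knotoid there is only one way to pair its two endpoints, but the number of available pairings grows quickly with the number of components. The following minimal Python sketch (the encoding and helper names are ours, not the paper's) enumerates all fixed-point-free involutions of \(\{1,\ldots,2n\}\), i.e. the candidate pairings in the sense of Definition 3.2 and of the closure permutations defined below:

```python
def pairings(endpoints):
    """Yield all fixed-point-free involutions of the given endpoint labels,
    each as a dict i -> sigma(i), by enumerating perfect matchings."""
    if not endpoints:
        yield {}
        return
    first, rest = endpoints[0], endpoints[1:]
    for k, partner in enumerate(rest):
        for sub in pairings(rest[:k] + rest[k + 1:]):
            sub.update({first: partner, partner: first})
            yield sub

for n in (1, 2, 3):
    count = sum(1 for _ in pairings(list(range(1, 2 * n + 1))))
    print(n, count)   # prints 1 1, 2 3, 3 15: the double factorial (2n-1)!!
```

Already for two components there are three distinct pairings, which is why the choice of closure matters in what follows.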
The extension of the approach presented in [13, 14] to the case of linkoids with more than one component is substantially more complex and it is studied for the first time in this manuscript.

Figure 4: Omega moves (Reidemeister moves) and forbidden moves \((\Phi_{+},\Phi_{-})\) on linkoid diagrams.

Indeed, notice that there are multiple ways to close a linkoid of more than one component and it is possible to obtain non-equivalent virtual knots/links for the same linkoid, based on the choice of closure of the endpoints. To make this precise, the following definition of a _closure permutation_ will be useful:

**Definition 3.3**.: _(Closure Permutation) Let the endpoints of a linkoid diagram, \(L\), with \(n\) components, be denoted by labels from the set \(E=\{1,2,\cdots,2n\}\), without repetition. A closure permutation of \(L_{\tau}\), where \(\tau\) denotes the strand permutation of \(L\), is any element, \(\sigma\in S_{2n}\), such that, \(\sigma^{2}=\mathrm{id}\) and \(\sigma(i)\neq i\), for all \(i\in E\)._

**Remark 3.1**.: _Definitions 3.2 and 3.3 imply the following: (i) The strand permutation is a special case of a closure permutation. (ii) Any closure permutation can be expressed as the product of \(n\) disjoint transpositions. (iii) Strand permutation (resp. closure permutation) of a linkoid is synonymous with the term head-foot pairing (resp. pairing combination) introduced in [6]._

Any virtual closure of a linkoid diagram, with respect to a given closure permutation, can be defined as follows:

Figure 5: (Above) The labelled endpoints of any linkoid determine a strand permutation. Examples of a tangle, a braid and a general linkoid are shown above with strand permutations \((1\quad 3)(2\quad 4)\), \((1\quad 6)(3\quad 4)(2\quad 5)\) and \((1\quad 3)(2\quad 4)\), respectively. (Below) The virtual closure of the tangle, the braid and the linkoid according to their respective strand permutations.

Figure 6: The virtual closure of a linkoid can give rise to non-equivalent virtual knots depending on the choice of the closure permutation, as seen in the above example. (Left) An example of a linkoid, \(L\), with \(\tau=(1\quad 2)(3\quad 4)\). (Center) The virtual closure, \((L_{\tau},\tau)\), of \(L\), with respect to \(\tau\), gives rise to a virtual link. It is in fact the _strand closure_ of \(L\). (Right) Whereas the virtual closure, \((L_{\tau},\sigma)\), with respect to another closure permutation, \(\sigma=(1\quad 3)(4\quad 2)\), gives rise to a trefoil.

**Definition 3.4**.: _(Virtual Closure of a linkoid diagram with respect to \(\sigma\) and Strand Closure) Let \(L_{\tau}\) be a linkoid diagram with \(n\) components; \(\tau\) be its strand permutation and \(\sigma\in S_{2n}\) be a closure permutation of \(L_{\tau}\). The virtual closure of \(L_{\tau}\) with respect to \(\sigma\), denoted \((L_{\tau},\sigma)_{v}\), is defined to be the virtual knot/link obtained by introducing a closure arc between \(i\) and \(\sigma(i)\), for every \(i\in\{1,2,\cdots,2n\}\), consisting entirely of virtual crossings. The special case of the virtual closure with \(\sigma=\tau\) will be called the strand closure of \(L\).
(See Figure 6)._

Depending on the strand permutation and the choice of closure permutation, the number of components of a virtual closure of a linkoid can be determined by generalizing the notion of _segment cycle_ (introduced in [6]), as follows:

**Definition 3.5**.: _(Segment cycle of a linkoid with respect to a closure permutation) Let \(L_{\tau}\) be a linkoid with \(n\) components, with strand permutation \(\tau\) and a closure permutation \(\sigma\). Let \(E=\{1,2,\cdots,2n\}\) denote the set of the labelled endpoints of \(L_{\tau}\). A segment cycle of \(L_{\tau}\) with respect to the closure permutation \(\sigma\), is defined to be an orbit of a point in \(E\), under the action of the group \(\langle\tau,\sigma\rangle\) on \(E\)._

The definition of a segment cycle and the defining properties of a group guarantee that the set of segment cycles of (points in) \(E\) under the action of \(\langle\tau,\sigma\rangle\) form a partition of \(E\). The set of all segment cycles of the linkoid under the action of \(\langle\tau,\sigma\rangle\) is written as the quotient of the action, i.e., \(E/\langle\tau,\sigma\rangle\). The number of components in a closure of a linkoid can be determined by the segment cycles as follows:

**Proposition 3.1**.: _Let \(L_{\tau}\) be a linkoid with \(n\) components, with strand permutation \(\tau\) and a closure permutation \(\sigma\). Let \(E=\{1,2,\cdots,2n\}\) denote the set of the labelled endpoints of \(L_{\tau}\). Let \(G=\langle\tau,\sigma\mid\tau^{2}=\sigma^{2}=1\rangle\). The number of distinct segment cycles, \(|E/G|\), of \(L_{\tau}\), with respect to \(\sigma\), is given as_

\[|E/G|=\frac{1}{2m}\sum_{g\in G}|E^{g}|\]

_where \(m=|\tau\sigma|\) is the order of \(\tau\sigma\in G\) and \(E^{g}\) denotes the set of elements in \(E\) that are fixed by \(g\in G\), i.e. \(E^{g}=\{x\in E\quad|\quad g.x=x\}\)._

Proof.: \(G\) can alternatively be expressed as \(\langle\tau\sigma,\sigma\mid(\tau\sigma)^{m}=\sigma^{2}=(\sigma\tau\sigma)^{2}=1\rangle\). Thus, \(G\) is isomorphic to the dihedral group \(D_{m}\). By Definition 3.5, segment cycles of \(E\) form a partition of \(E\) under the action of \(G\). Therefore, by Burnside's counting theorem, the number of distinct segment cycles of \(L_{\tau}\) with respect to the closure permutation, \(\sigma\), is given as

\[|E/G|=\frac{1}{|G|}\sum_{g\in G}|E^{g}|=\frac{1}{2m}\sum_{g\in\langle\tau,\sigma\rangle}|E^{g}|. \tag{1}\]

**Corollary 3.1**.: _Let \(L_{\tau}\) be a linkoid with strand permutation \(\tau\) and a closure permutation \(\sigma\). The number of components in the virtual closure \((L_{\tau},\sigma)_{v}\) is equal to the number of segment cycles of \(L_{\tau}\) with respect to \(\sigma\)._

Figure 7: (Left) A linkoid, \(L\), with 2 components and strand permutation, \(\tau=(1\quad 3)(2\quad 4)\); (Center) The virtual closure, \((L_{\tau},\sigma)_{v}\), with respect to \(\sigma=(1\quad 4)(2\quad 3)\); (Right) the unique segment cycle in \(L\), with respect to \(\sigma\).

Proof.: Let \(E\) be the set of all labelled endpoints of the linkoid, \(L_{\tau}\). If \(i\neq j\in E\) lie in the same component of the virtual closure, then \(i,j\) are either in the same linkoid component or they belong to different components that are connected via closure; thus they are in the same orbit under the action of the group \(\langle\tau,\sigma\rangle\) on \(E\), which is the same segment cycle.
Similarly, a segment cycle defines a component of the virtual closure, since if \(i,j\) are in different components (different linkoid components and not connected via closure), they are not related by the action of \(\langle\tau,\sigma\rangle\) on \(E\) (See Figure 7).

**Remark 3.2**.: _Braids and tangles are special types of linkoids. Hence, Definitions 3.1-3.4, Proposition 3.1 and Corollary 3.1 apply naturally to braids and tangles. Given any braid, \(B\), with \(n\) components, it is interesting to consider the two very special kinds of braid closures: the standard braid closure and the strand closure. As a convention, the endpoints of a braid are numbered, in order, from the top bar to the bottom bar. The standard braid closure is obtained by introducing a closure arc between \(i\) and \(i+n\), for all \(i\in\{1,2,\ldots,n\}\). On the other hand, the strand closure is obtained by introducing a closure arc between \(i\) and \(\tau(i)\), for all \(i\in\{1,2,\ldots,2n\}\), where \(\tau\) is the strand permutation associated with the braid (See Figure 8)._

**Proposition 3.2**.: _Let \(L\) be a linkoid diagram with strand permutation, \(\tau\), and a closure permutation, \(\sigma\). The virtual closure, \((L_{\tau},\sigma)_{v}\), is independent of the immersion class of the closure arcs._

Proof.: Let \(A\) and \(B\) be two diagrams, of the virtual closure of the given linkoid, with different immersion classes of the closure arcs. The diagram \(B\) can be obtained by detour moves on the closure arcs in \(A\).

**Remark 3.3**.: _For a linkoid, \(L_{\tau}\), the virtual closure with respect to the strand permutation, \((L_{\tau},\tau)\), corresponds to the component-wise closure of \(L_{\tau}\). Thus, the virtual closure of a linkoid corresponding to the strand permutation (i.e. \(\sigma=\tau\)) has the same number of components as the linkoid._

**Proposition 3.3**.: _Let \(L_{\tau}^{1},L_{\tau}^{2}\) denote two equivalent linkoid diagrams with strand permutation \(\tau\) and let \(\sigma\) denote a closure permutation. Then their virtual closures with respect to \(\sigma\), \((L_{\tau}^{1},\sigma)\) and \((L_{\tau}^{2},\sigma)\), are equivalent._

Proof.: It follows by performing a sequence of Reidemeister moves on the virtual knot/link, \((L_{\tau}^{1},\sigma)\), which transforms \(L_{\tau}^{1}\) to \(L_{\tau}^{2}\).

**Remark 3.4**.: _Let \(L\) be a linkoid diagram with strand permutation, \(\tau\), and a closure permutation, \(\sigma\), and let \((L_{\tau},\sigma)_{v}\) be the corresponding unique virtual knot/link obtained via virtual closure. \((L_{\tau},\sigma)_{v}\) can be interpreted in terms of links in thickened surfaces or on ribbon-neighborhoods (See Section 2)._

**Definition 3.6**.: _(Minimal virtual closure and Minimal virtual representative of a linkoid diagram with respect to \(\sigma\)) A virtual closure of a linkoid diagram, \(L_{\tau}\), with respect to a closure permutation, \(\sigma\), is said to be minimal if no virtual crossing can be further removed via a sequence of generalized Reidemeister moves and the 0-move on \(S^{2}\). The resultant virtual knot/link, \((L_{\tau},\sigma)_{v}\), is called the minimal virtual representative of \(L_{\tau}\) with respect to \(\sigma\)._

Figure 8: (Left) Standard braid closure and (Right) Strand closure of a braid on 3 strands.
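Corollary 3.1 reduces the component count of a virtual closure to pure combinatorics, which is easy to verify computationally. Below is a small Python sketch (ours; the helper names are hypothetical) that computes the segment cycles of Definition 3.5 as orbits of \(\langle\tau,\sigma\rangle\) and recovers the single segment cycle of the example in Figure 7, in agreement with the Burnside count of Proposition 3.1.

```python
def segment_cycles(n, tau, sigma):
    """Orbits of E = {1,...,2n} under <tau, sigma>; since tau and sigma are
    involutions, alternately applying them sweeps out each orbit."""
    seen, cycles = set(), []
    for start in range(1, 2 * n + 1):
        if start in seen:
            continue
        orbit, frontier = {start}, [start]
        while frontier:
            x = frontier.pop()
            for p in (tau, sigma):
                if p[x] not in orbit:
                    orbit.add(p[x])
                    frontier.append(p[x])
        seen |= orbit
        cycles.append(sorted(orbit))
    return cycles

# Figure 7: tau = (1 3)(2 4), sigma = (1 4)(2 3).
tau   = {1: 3, 3: 1, 2: 4, 4: 2}
sigma = {1: 4, 4: 1, 2: 3, 3: 2}
print(segment_cycles(2, tau, sigma))   # [[1, 2, 3, 4]]: one segment cycle,
# so the virtual closure is a one-component virtual knot (Corollary 3.1).
# Burnside check (Proposition 3.1): tau*sigma = (1 2)(3 4) has order m = 2;
# only the identity in G fixes points (all 4), so |E/G| = 4/(2*2) = 1.
```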
**Definition 3.7**.: _(Link-type linkoid with respect to \(\sigma\)) A linkoid \(L_{\tau}\) is said to be link-type with respect to a closure permutation, \(\sigma\in S_{2n}\), if the minimal representative diagram of \((L_{\tau},\sigma)_{v}\) has no virtual crossings. Thus, the virtual closure of \(L_{\tau}\), with respect to \(\sigma\), gives rise to a classical link, \(\mathcal{C}\). Then, the link-type linkoid, \(L_{\tau}\), is said to be of link-type \(\mathcal{C}\) with respect to \(\sigma\in S_{2n}\)._

### The Virtual Closure Map

The virtual closures of linkoids define a map between linkoids and virtual links which can be formalized by means of the _virtual closure map_, as follows:

**Definition 3.8**.: _(Virtual closure map) Let, for every \(n\in\mathbb{N}\), \(\mathcal{A}_{n}\) be the set of all linkoids with \(n\) components and \(H_{n}=\{\sigma\in S_{2n}:\sigma^{2}(i)=i,\sigma(i)\neq i,\quad\forall i\in E\}\), where \(E=\{1,2,\ldots,2n\}\). Let \(\mathcal{V}\) be the set of all virtual knots/links. The virtual closure map, \(\varphi\), is defined as_

\[\varphi:\quad\{(L,\tau,\sigma)\mid(L,\tau,\sigma)\in\mathcal{A}_{n}\times H_{n}\times H_{n}\text{ for some }n\in\mathbb{N}\}\longrightarrow\mathcal{V}\]
\[(L,\tau,\sigma)\mapsto(L_{\tau},\sigma)_{v},\quad\text{where }L\in\mathcal{A}_{n}\text{ and }\tau,\sigma\in H_{n},\]

_where \((L_{\tau},\sigma)_{v}\) denotes the virtual closure, with respect to the closure permutation, \(\sigma\), of the linkoid, \(L\), with strand permutation, \(\tau\)._

**Theorem 3.1**.: _The virtual closure map, \(\varphi\), as given in Definition 3.8, is surjective but not injective._

Proof.: Let \(L_{1}\) and \(M_{1}\) be the two knotoids given in Figure 9 and, for all \(n\geq 2\), let

\[L_{n}=L_{1}\sqcup\bigsqcup_{i=2}^{n}\big{|}_{2i}^{2i-1}\quad\text{ and }\quad M_{n}=M_{1}\sqcup\bigsqcup_{i=2}^{n}\big{|}_{2i}^{2i-1}\]

where \(\big{|}_{2i}^{2i-1}\) is a crossingless open segment with endpoints \(2i\) and \(2i-1\), for all \(2\leq i\leq n\). Clearly, \(L_{n}\) and \(M_{n}\) are not equivalent as linkoids. For both \(L_{n},M_{n}\in\mathcal{A}_{n}\), let the strand permutation, \(\tau\), and the closure permutation, \(\sigma\), be each equal to \(\prod_{i=1}^{n}(2i-1\quad 2i)\). The map \(\varphi\) is not injective since \(\varphi(L_{n},\tau,\sigma)=\varphi(M_{n},\tau,\sigma)\): both virtual closures yield the same virtual knot/link (See Figure 9). The map is surjective since any virtual knot/link diagram can be realized as a virtual closure: excising a small arc at each virtual crossing yields a linkoid diagram, and reconnecting the resulting pairs of endpoints with closure arcs consisting entirely of virtual crossings recovers the original diagram.

The virtual closure map has the following properties:

1. The image of the restriction map, \(\varphi\big{|}_{L,\tau}:H_{n}\longrightarrow\mathcal{V}\), where \(n\) is the number of components in \(L\), is a subset of \(\mathcal{V}\) and is denoted as \(\operatorname{Im}(\varphi\big{|}_{L,\tau})\). The number of elements in this subset satisfies \(\big{|}\operatorname{Im}(\varphi\big{|}_{L,\tau})\big{|}\leq|H_{n}|=\dfrac{1}{n!}\binom{2n}{2}\binom{2n-2}{2}\dots\binom{2}{2}\). Therefore, there can be at most \(|H_{n}|\) virtual knots/links that represent \(L\) via the virtual closure map.

2. Let \(L\) be a trivial linkoid with \(n\) components, and without loss of generality, let \(\tau=\prod_{i=1}^{n}(2i-1\quad 2i)\), i.e. \(L=\bigsqcup_{i=1}^{n}\big{|}_{2i}^{2i-1}\). Then, the virtual closure of \(L\) with respect to any \(\sigma\in H_{n}\) is a trivial link (up to generalized Reidemeister moves) with \(\leq n\) components. In particular, virtual closure with respect to \(\sigma=(1\quad 2n)\prod_{i=1}^{n-1}(2i\quad 2i+1)\) corresponds to an unknot, whereas virtual closure with respect to \(\tau\) corresponds to a trivial link with \(n\) components.
Hence, \(\big{|}\operatorname{Im}(\varphi\big{|}_{L,\tau})\big{|}=n\).

3. Consider \(\tau\) and \(\sigma\) defined on \(2n\) endpoints. For a given virtual link, \(V\in\mathcal{V}\), of \(m\) components, the preimage of the restriction map, \(\varphi\big{|}_{\tau,\sigma}:\mathcal{A}_{n}\longrightarrow\mathcal{V}\), is the set of all \(n\)-component linkoids, with strand permutation \(\tau\), whose virtual closure with respect to \(\sigma\) is equal to \(V\).

4. Without loss of generality, let \(\tau=\prod_{i=1}^{n}(2i-1\quad 2i)\), for a linkoid \(L\) with \(n\) components. If \(L\) has more than \(1\) component, \(\big{|}\operatorname{Im}(\varphi\big{|}_{L,\tau})\big{|}>1\) (since virtual closure with respect to \(\tau\) yields a virtual link with \(n\) components, whereas virtual closure with respect to \(\sigma=(1\quad 2n)\prod_{i=1}^{n-1}(2i\quad 2i+1)\) corresponds to a virtual knot with \(1\) component).

**Remark 3.5**.: _Under the map in Definition 3.8, linkoids can be associated to embeddings of links in thickened surfaces of arbitrary genus._

## 4 Invariants of Linkoids via Virtual Closure

The framework introduced in the previous sections enables us to define a new collection of linkoid invariants via invariants of virtual links. The following are some well-known invariants of virtual knots/links: fundamental group, quandles, biquandles, odd writhe, Jones polynomial, Khovanov homology, arrow polynomial, index polynomial, quantum link invariants and virtual Vassiliev invariants. All these can be defined for linkoids using the following theorem:

**Theorem 4.1**.: _Let \(L_{\tau}\) be a linkoid, where \(\tau\) is the strand permutation, and let \(\sigma\) be a closure permutation. For any \(\mathcal{F}\), an invariant of virtual links, let the map \(\mathcal{F}_{\sigma}\) be defined as_

\[\mathcal{F}_{\sigma}(L_{\tau}):=\mathcal{F}((L_{\tau},\sigma)_{v}), \tag{2}\]

_where \((L_{\tau},\sigma)_{v}\) is the virtual closure of \(L_{\tau}\) corresponding to \(\sigma\). Then \(\mathcal{F}_{\sigma}\) is an invariant of linkoids._

Proof.: Let \(L_{\tau}\) and \(K_{\tau}\) be diagrams that differ by a Reidemeister move on linkoids. By Proposition 3.3, the virtual closure diagrams, \((L_{\tau},\sigma)_{v}\) and \((K_{\tau},\sigma)_{v}\), represent the same virtual knot, where \(\sigma\) is the given closure permutation for \(L\). Thus,

\[\mathcal{F}_{\sigma}(L_{\tau})=\mathcal{F}((L_{\tau},\sigma)_{v})=\mathcal{F}((K_{\tau},\sigma)_{v})=\mathcal{F}_{\sigma}(K_{\tau}).\]

**Remark 4.1**.: _Note that the virtual class of the closure, with respect to a closure permutation \(\sigma\), of a linkoid \(L_{\tau}\) is by itself an invariant of \(L_{\tau}\). This virtual class of the closure is denoted \((L_{\tau},\sigma)_{v}\)._

Theorem 4.1 gives rise to the following corollary regarding link-type linkoids, which plays a critical role in Section 6 on the entanglement of open curves in \(3\)-space.

**Corollary 4.1**.: _Let \(L_{\tau}\) be a link-type linkoid of type \(\mathcal{C}\) (See Definition 3.7), with respect to a closure permutation \(\sigma\), where \(\tau\) denotes the strand permutation. Let \(\mathcal{F}\) denote an invariant of virtual links and \(\mathcal{F}_{\sigma}\) the corresponding invariant of linkoids. Then_

\[\mathcal{F}_{\sigma}(L_{\tau})=\mathcal{F}(\mathcal{C}) \tag{3}\]

Proof.: It follows by Definition 3.7 and Theorem 4.1.

The virtual closure map, \(\varphi\), defined in Definition 3.8, is not injective.
Therefore, linkoids which are mapped to the same virtual knot/link under the virtual closure map return the same value for any invariant of linkoids as defined in Theorem 4.1. A more detailed discussion on some specific invariants of linkoids is provided in the remainder of this section. A combinatorial approach to determine the Jones polynomial and the arrow polynomial of linkoids, with respect to arbitrary closure permutations, is discussed and it is shown to be consistent with the definition in terms of virtual closure. Moreover, the extension of invariants such as height, genus, odd writhe and the affine-index polynomial is discussed for linkoids in terms of virtual closure.

### Height and Genus of Linkoids and Virtual Links

A fundamental topological invariant of a knotoid (linkoid with one component) is its height [13, 14]. Here, a generalized definition of the height of linkoids is given by referring to virtual crossings and the choice of closure permutation.

**Definition 4.1**.: _(Height of a linkoid with respect to \(\sigma\)) The height of a linkoid diagram \(L_{\tau}\), corresponding to a closure permutation \(\sigma\), is the number of virtual crossings in its minimal virtual closure. The height of a linkoid, \(L_{\tau}\), corresponding to \(\sigma\) is defined as the minimum of the diagrammatic heights, taken over all knotoid/linkoid diagrams equivalent to \(L_{\tau}\)._

**Remark 4.2**.: _The height of a knotoid, as defined in [13, 14], is recovered as a special case of Definition 4.1 by taking the strand closure of the knotoid._

Via virtual closure, the genus of a linkoid can be defined as follows:

**Definition 4.2**.: _(Genus of a linkoid with respect to \(\sigma\)) The genus, \(g\), of a linkoid \(L_{\tau}\) with respect to a closure permutation, \(\sigma\), is defined as the genus of the minimal embedding surface of the virtual knot/link, \((L_{\tau},\sigma)_{v}\)._

### The Jones polynomial of Linkoids and Virtual Links

In this section, the Jones polynomial of a knotoid/linkoid, with respect to a given closure permutation, is discussed as the normalized bracket polynomial of the knotoid/linkoid, with the substitution \(A=t^{-1/4}\). The bracket polynomial for linkoids was recently defined combinatorially, without referring to its virtual closure, in [6]. In this section, it is shown that the definition in [6] coincides with that of the strand virtual closure. Moreover, it is shown that the combinatorial definition of the bracket polynomial can be generalized to account for arbitrary closure permutations and that it coincides with the corresponding definition in terms of virtual closure. Below follows a combinatorial definition of the Jones polynomial of linkoids:

**Definition 4.3**.: _(Jones polynomial of a linkoid with respect to \(\sigma\)) The Jones polynomial of an oriented linkoid diagram, \(L\), with respect to a closure permutation, \(\sigma\), is defined as_

\[f_{L^{\sigma}}=(-A^{3})^{-\mathsf{Wr}(L)}\langle L^{\sigma}\rangle\,, \tag{4}\]

_where \(\mathsf{Wr}(L)\) is the writhe of the linkoid diagram and \(\langle L^{\sigma}\rangle\) is the generalized bracket polynomial of \(L\), with respect to \(\sigma\) (See Definition 4.4)._

The generalized bracket polynomial in Definition 4.3 can be defined combinatorially as follows:

**Definition 4.4**.: _(Generalized bracket polynomial of a linkoid with respect to \(\sigma\)) Let \(L\) be a linkoid diagram with \(n\) components, \(E\) be the set of its labelled endpoints and \(\sigma\) be a closure permutation.
The generalized bracket polynomial of \(L\), with respect to \(\sigma\), is completely characterised by the following skein relation and conditions:_

\[\left\langle L^{\sigma}\right\rangle=A\left\langle L_{0}^{\sigma}\right\rangle+A^{-1}\left\langle L_{\infty}^{\sigma}\right\rangle,\qquad\left\langle\tilde{L}\sqcup\bigcirc\right\rangle=d\left\langle\tilde{L}\right\rangle,\qquad\left\langle S_{\ell}^{\sigma}\right\rangle=d^{\,|E/\langle\tau_{S},\sigma\rangle|-1}, \tag{5}\]

_where \(L_{0}\) and \(L_{\infty}\) denote the \(A\)- and \(B\)-smoothings of \(L\) at a chosen crossing; \(d=-A^{2}-A^{-2}\); \(\tilde{L}\) is any linkoid diagram; \(S_{\ell}\) is the trivial linkoid formed by the collection of \(n\) labelled open segments in a state, \(S\), in the state sum expansion of \(L\), and \(\tau_{S}\) is the strand permutation of \(S_{\ell}\). The term, \(|E/\langle\tau_{S},\sigma\rangle|\), is the number of distinct segment cycles in \(S_{\ell}\), with respect to \(\sigma\)._

_The generalized bracket polynomial of \(L\), with respect to \(\sigma\), can be expressed as the following state sum expression:_

\[\left\langle L^{\sigma}\right\rangle\,:=\,\sum_{S}A^{\alpha(S)}d^{\mathrm{circ}(S)+|E/\langle\tau_{S},\sigma\rangle|-1}\quad, \tag{6}\]

_where \(S\) is a state corresponding to a choice of smoothing over all double points in \(L\); \(\alpha(S)\) is the algebraic sum of the smoothing labels of \(S\) and \(\mathrm{circ}(S)\) is the number of circular components in \(S\)._

The generalized bracket polynomial of linkoids has the following properties:

1. It is a one-variable polynomial that preserves the underlying skein relation of the bracket polynomial of classical knots and knotoids.

2. If \(L\) is a link diagram, with no open components, then the labelled set of endpoints \(E=\emptyset\) and the generalized bracket polynomial coincides with the traditional Kauffman bracket polynomial of \(L\), i.e. \(\left\langle L\right\rangle=\sum_{S}A^{\alpha(S)}d^{\mathrm{circ}(S)-1}\).

3. If \(L\) is a knotoid diagram (linkoid with one component), then the generalized bracket polynomial coincides with the Kauffman bracket polynomial of knotoids. Any closure permutation connects the 2 open endpoints of \(L\), so \(|E/\langle\tau_{S},\sigma\rangle|=1\) for all \(S\) and \(\left\langle L\right\rangle=\sum_{S}A^{\alpha(S)}d^{\mathrm{circ}(S)}\).

4. For an \(n\)-component linkoid diagram, \(L\), with strand permutation, \(\tau\), and no crossings, the state sum only has 1 state, \(S\). In this case, \(\tau_{S}=\tau\) and \(\left\langle L^{\sigma}\right\rangle=d^{|E/\langle\tau,\sigma\rangle|-1}\), where \(E\) denotes the set of labelled endpoints of \(L\).

5. If a linkoid diagram, \(L\), is of link-type with respect to a closure permutation, \(\sigma\), the bracket polynomial coincides with that of the corresponding link upon the closure of endpoints.

According to Theorem 4.1, the Jones polynomial of a linkoid, \(L_{\tau}\), with respect to a closure permutation \(\sigma\), can be defined as the Jones polynomial of its virtual closure, \((L_{\tau},\sigma)_{v}\). Theorem 4.2 shows the equivalence between Definition 4.3 and the definition of the Jones polynomial via virtual closure.

**Theorem 4.2**.: _Let \(L\) be a linkoid diagram with strand permutation, \(\tau\), and closure permutation, \(\sigma\), on the labelled set of endpoints, \(E\). The Jones polynomial of \(L\), according to Definition 4.3, is equal to the Jones polynomial of the virtual closure, \((L_{\tau},\sigma)_{v}\), namely, \(f_{L^{\sigma}}=f_{(L_{\tau},\sigma)_{v}}\)._

Proof.: \(L\) and \((L_{\tau},\sigma)_{v}\) have an equal number of classical crossings, therefore \(\mathsf{Wr}(L)=\mathsf{Wr}\left((L_{\tau},\sigma)_{v}\right)\). Let \(S\) be a state in the state sum expansion of \(L\) corresponding to a choice of smoothing over the classical crossings in \(L\).
By the one-to-one correspondence between the classical crossings in \(L\) and \((L_{\tau},\sigma)_{v}\), there is a state \(S_{v}\) in \((L_{\tau},\sigma)_{v}\) corresponding to the same choice of smoothing. \(S_{v}\) is the virtual closure of \(S\) since \(S=S_{v}\backslash\{r\mid r\text{ is a virtual closure arc on }E\}\). By Corollary 3.1, \(S\) and \(S_{v}\) have the same number of components. Thus \(\left\langle S^{\sigma}\right\rangle=\left\langle S_{v}\right\rangle\) for all states \(S\) of \(L\). Hence, \(\left\langle L^{\sigma}\right\rangle=\left\langle(L_{\tau},\sigma)_{v}\right\rangle\). Therefore, the Jones polynomial of \(L\), according to Definition 4.3, is equal to the Jones polynomial of its virtual closure, i.e.

\[f_{L^{\sigma}}=(-A^{3})^{-\mathsf{Wr}(L)}\langle L^{\sigma}\rangle=(-A^{3})^{-\mathsf{Wr}((L_{\tau},\sigma)_{v})}\langle(L_{\tau},\sigma)_{v}\rangle=f_{(L_{\tau},\sigma)_{v}}. \tag{7}\]

**Remark 4.3**.: _Theorem 4.2 makes it possible to compute the Jones polynomial of virtual knots/links by using linkoids and without using the closure arcs._

**Example 4.1**.: _Let us consider a linkoid, \(L\), with strand permutation \(\tau=(1\quad 2)(3\quad 4)\). The virtual closure of \(L\) with respect to \(\sigma=(1\quad 4)(2\quad 3)\) corresponds to the virtual trefoil knot, \(K\). Hence, the Jones polynomial of \(L\) with respect to \(\sigma\) is given as \(f_{L^{\sigma}}=f_{K}=(-A^{3})^{-2}(A^{2}-A^{-4}+1)=A^{-4}-A^{-10}+A^{-6}\)._

### The Arrow polynomial of Linkoids and Virtual Links

The construction of the arrow polynomial invariant begins with the oriented state summation of the bracket polynomial, where each local smoothing is either an oriented smoothing or a disoriented smoothing, as shown in Equation 8, for both positive and negative classical crossings in a diagram [22].
In this oriented state sum (Equation 8; the pictorial expansion rules are those of [22]), a positive classical crossing expands with coefficient \(A\) into its oriented smoothing and with coefficient \(A^{-1}\) into its disoriented smoothing, while a negative crossing expands with the coefficients interchanged; each disoriented smoothing creates a pair of cusps. Pairs of cusps that cannot be cancelled are called irreducible, and a closed loop with \(2i\) irreducible cusps is recorded by the variable \(K_{i}\) (See Figure 11). This leads to the following generalization for linkoids:

**Definition 4.5**.: _(Generalized arrow polynomial of a linkoid with respect to \(\sigma\)) Let \(L\) be a linkoid diagram with \(n\) components, \(E\) be the set of its labelled endpoints and \(\sigma\) be a closure permutation. The generalized arrow polynomial of \(L\), with respect to \(\sigma\), is given by the state sum_

\[\langle L^{\sigma}\rangle_{\mathcal{A}}:=\sum_{S}A^{\alpha(S)}\,d^{\,\mathrm{circ}(S)+|E/\langle\tau_{S},\sigma\rangle|-1}\,\mathcal{K}\quad, \tag{9}\]

_where \(S\) is a state corresponding to a choice of smoothing (oriented or disoriented) over all classical crossings in \(L\); \(\alpha(S)\) is the algebraic sum of the smoothing labels of \(S\); \(\operatorname{circ}(S)\) is the total number of circular components in \(S\) and \(|E/\langle\tau_{S},\sigma\rangle|\) is the total number of segment cycles in \(S\). Let \(S_{i_{1}},S_{i_{2}},\ldots S_{i_{m}}\), with \(i_{1},i_{2}\ldots i_{m}\) pairs of surviving cusps respectively, be all the state components in \(S\) with surviving pairs of cusps. Then \(\langle S_{j}\rangle_{\mathcal{A}}=K_{j}\), for all \(j\in\{i_{1},i_{2},\ldots i_{m}\}\), and \(\mathcal{K}=\prod_{j=1}^{m}K_{i_{j}}\) (See Figure 11)._

**Remark 4.4**.: _The generalized arrow polynomial of a linkoid is a polynomial in the variables \(A,K_{i}\), where \(i\in\mathbb{N}\), with integer coefficients._

**Remark 4.5**.: _In [14], the arrow polynomial of knotoids was defined where a state in the state sum expansion can have a cusped long segment.
If the long segment has \(2i\) irreducible cusps, then the arrow polynomial of the long segment is denoted by the variable \(\Lambda_{i}\)._

According to Theorem 4.1, the generalized arrow polynomial of a linkoid, \(L_{\tau}\), with respect to a closure permutation \(\sigma\), can be defined as the arrow polynomial of its virtual closure, \((L_{\tau},\sigma)_{v}\). This definition is equivalent to Definition 4.5 due to the following theorem:

**Theorem 4.3**.: _Let \(L\) be a linkoid diagram with strand permutation, \(\tau\), and \(\sigma\) be a closure permutation. The arrow polynomial of \(L_{\tau}\), as in Definition 4.5, coincides with the arrow polynomial of its virtual closure, \((L_{\tau},\sigma)_{v}\)._

Proof.: It follows similarly to Theorem 4.2.

Example 4.2 shows the arrow polynomial state sum expansion for a particular linkoid. One of the final states in this example corresponds to a loop with cusps and virtual crossings. Via a detour move, it can be seen that this state is in fact a loop with irreducible cusps (See Figure 11).

**Example 4.2**.: _Consider a knotoid diagram with a single classical crossing: its oriented state is a crossingless segment, contributing \(1\), and its disoriented state reduces, via a detour move, to a state with one pair of irreducible cusps, contributing \(K_{1}\). Its arrow polynomial is therefore evaluated as_

\[A\cdot 1+A^{-1}\cdot K_{1}=A+A^{-1}K_{1}. \tag{10}\]

Example 4.3 illustrates the evaluation of the arrow polynomial of a 2-component linkoid:

Figure 11: The arrow polynomial of a loop with \(2i\) irreducible cusps is denoted by the variable, \(K_{i}\), where \(i\in\mathbb{N}\).

**Example 4.3**.: _Let us consider the linkoid, \(L\), with strand permutation \(\tau=(1\quad 4)(2\quad 3)\) and its virtual closure, with respect to \(\sigma=(1\quad 2)(3\quad 4)\), \(K\), as in Figure 12. Since the Kishino knot has trivial Jones polynomial, the Jones polynomial of \(L\) with respect to \(\sigma\) is also trivial (despite \(L\) being a non-trivial linkoid). However, the arrow polynomial of \(L\) is easily seen to be non-trivial, i.e. it matches the arrow polynomial of \(K\), namely, \(A^{4}+1+A^{-4}-(A^{4}+2+A^{-4})K_{1}^{2}+2K_{2}\). Thus, with respect to \(\sigma\), \(L\) is a linkoid that the Jones polynomial cannot detect but the arrow polynomial is able to detect._

### The Affine-index polynomial of Linkoids and Virtual Links

Given a labelled flat (without over/under information at crossings) virtual knot diagram, two numbers, \(W_{-}(c)\) and \(W_{+}(c)\), can be defined at each classical node \(c\). If there is a labelled classical node with left incoming arc \(a\) and right incoming arc \(b\), then the right outgoing arc is labelled \(a-1\) and the left outgoing arc is labelled \(b+1\); one then sets \(W_{+}(c)=a-b-1\) and \(W_{-}(c)=-a+b+1\). Note that \(W_{-}(c)=-W_{+}(c)\) in all cases. Whereas, if there is a virtual node with left incoming arc \(a\) and right incoming arc \(b\), then the right outgoing arc is labelled \(a\) and the left outgoing arc is labelled \(b\) (i.e. the labels do not change at a virtual crossing).
The affine-index polynomial of a virtual knot was defined in [26], in terms of such integer labellings on flat virtual knot diagrams, in the following way:

**Definition 4.6**.: _(Affine index polynomial of virtual knots) The Affine Index Polynomial of a virtual knot diagram, \(K\), is defined by the equation_

\[P_{K}:=\sum_{c}\operatorname{sgn}(c)\left(t^{W_{K}(c)}-1\right)=\sum_{c}\operatorname{sgn}(c)t^{W_{K}(c)}-\operatorname{\mathsf{wr}}(K)\quad, \tag{11}\]

_where \(c\) denotes a classical crossing in \(K\), \(\operatorname{sgn}(c)\) is the sign of \(c\), \(\operatorname{\mathsf{wr}}(K)\) is the writhe of \(K\) and \(W_{K}(c)=W_{\operatorname{sgn}(c)}(c)\), obtained from the labelling of arcs in the flat counterpart of \(K\)._

According to Theorem 4.1, the affine-index polynomial of a linkoid, \(L_{\tau}\), with respect to a closure permutation \(\sigma\), can be defined as the affine-index polynomial of its virtual closure, \((L_{\tau},\sigma)_{v}\), provided \((L_{\tau},\sigma)_{v}\) consists of only one component.

**Definition 4.7**.: _(Affine-index polynomial of a linkoid with respect to \(\sigma\)) Let \(L\) be a linkoid diagram with \(n\) components, \(E\) be the set of its labelled endpoints and \(\sigma\) be a closure permutation such that \((L_{\tau},\sigma)_{v}\) consists of only one component. The affine-index polynomial of \(L\), with respect to \(\sigma\), is defined as:_

\[P_{L^{\sigma}}:=P_{(L_{\tau},\sigma)_{v}}\quad, \tag{12}\]

_where \(P_{(L_{\tau},\sigma)_{v}}\) is the affine-index polynomial of the virtual knot, \((L_{\tau},\sigma)_{v}\)._

Example 4.4 illustrates the evaluation of the affine-index polynomial of a 2-component linkoid:

Figure 12: (Left) A linkoid \(L\) with 2 components and strand permutation, \(\tau=(1\quad 4)(2\quad 3)\). (Right) The virtual closure of \(L\), with respect to the closure permutation, \(\sigma=(1\quad 2)(3\quad 4)\), is equal to the Kishino knot, \(K\).

**Example 4.4**.: _Consider a linkoid, \(L\), with 2 components. The virtual closure of \(L\), with respect to \(\sigma=(1\quad 2)(3\quad 4)\), has only 1 component, with flat diagram \(FV\). The labels \(A,B\) and \(C\) correspond to classical crossings, which all have positive signs. Thus, \(W_{+}(A)=-2\), \(W_{+}(B)=2\) and \(W_{+}(C)=0\) are the respective weights associated with the classical crossings. The virtual knot, \((L_{\tau},\sigma)_{v}\), is one with unit Jones polynomial. However, it has a non-trivial affine-index polynomial, namely, \(P_{(L_{\tau},\sigma)_{v}}=t^{-2}+t^{2}-2\). In other words, the linkoid \(L\) has trivial Jones polynomial but non-trivial affine-index polynomial, with respect to \(\sigma=(1\quad 2)(3\quad 4)\)._

### The Odd Writhe of Linkoids and Virtual Knots

The odd writhe of a virtual knot, which was shown to be an invariant of virtual knots in [27], is defined as follows:

**Definition 4.8**.: _(Odd crossing and Odd writhe of a virtual knot) A crossing of a virtual knot is called odd if its crossing labels in the Gauss diagram have an odd number of crossing labels between them. The odd writhe of the virtual knot is defined as the sum of the signs of the odd crossings._

Note that in classical knots, the odd writhe is always zero, since all crossings of classical knots are even. As an example, the virtual closure of the linkoid in Figure 13 has odd writhe equal to \(-1\), which proves that this knot is not equivalent to any classical knot.
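This parity computation is straightforward to mechanize from a Gauss sequence. A minimal Python sketch (ours; the encoding is an assumption, not the paper's) checks that every crossing of the classical trefoil is even, while both crossings of the virtual trefoil are odd:

```python
def odd_writhe(gauss):
    """Odd writhe from a signed Gauss sequence: a list of (label, sign)
    visits, two visits per crossing.  A crossing is odd if an odd number
    of symbols sits between its two visits."""
    positions, signs = {}, {}
    for idx, (label, sign) in enumerate(gauss):
        positions.setdefault(label, []).append(idx)
        signs[label] = sign
    total = 0
    for label, (i, j) in positions.items():
        if (j - i - 1) % 2 == 1:       # odd crossing
            total += signs[label]
    return total

# Classical trefoil, Gauss sequence O1+ U2+ O3+ U1+ O2+ U3+ :
trefoil = [(1, 1), (2, 1), (3, 1), (1, 1), (2, 1), (3, 1)]
print(odd_writhe(trefoil))   # 0: every crossing of a classical knot is even

# Virtual trefoil, Gauss sequence O1+ O2+ U1+ U2+ :
vtrefoil = [(1, 1), (2, 1), (1, 1), (2, 1)]
print(odd_writhe(vtrefoil))  # 2: both crossings are odd
```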
Due to Theorem 4.1, the odd writhe of linkoids is well-defined with respect to closure permutations which give rise to a virtual knot (one component):

**Definition 4.9**.: _(Odd writhe of a linkoid with respect to \(\sigma\)) Let \(L\) be a linkoid diagram and \(\sigma\) be a closure permutation such that \((L,\sigma)_{v}\) has one component. The odd writhe, \(\mathsf{owr}\), of \(L\), with respect to \(\sigma\), is defined as the sum of the signs of the odd crossings in \((L,\sigma)_{v}\)._

**Remark 4.6**.: _For linkoids, the odd writhe is an integer-valued invariant._

**Remark 4.7**.: _In the context of virtual links (2 or more components), the notion of parity does not extend naturally. Depending upon how the link components are examined, there is an ambiguity regarding whether a crossing is odd or even. This ambiguity can be remedied by defining even and odd for crossings in individual components while labeling crossings shared by 2 components as link crossings [28]._

Figure 13: A linkoid diagram with strand permutation \(\tau=(1\quad 2)(3\quad 4)\), its virtual closure with respect to \(\sigma=(1\quad 3)(2\quad 4)\) and its Gauss diagram. The crossings labelled \(4,5,6\) and \(7\) are odd and the odd writhe of the virtual knot diagram is \(-1\).

## 5 The Virtual Spectrum of a Linkoid

This section introduces the virtual spectrum of linkoids, which leads to new invariants of linkoids that are not those of any given virtual closure. For any linkoid with fixed strand permutation, the virtual spectrum is the set of virtual links resulting from virtual closure over all possible closure permutations. More formally, it can be defined as follows:

**Definition 5.1**.: _(Virtual spectrum of a linkoid) The virtual spectrum of a linkoid, \(L\), with \(n\) components and strand permutation, \(\tau\), is defined to be the set, \(\operatorname{spec}(L):=\{\varphi_{n}(L,\tau,\sigma):\sigma\in H_{n}\}\), where \(\varphi_{n}\) is the virtual closure map defined on linkoids with \(n\) components, \(E=\{1,2,\ldots,2n\}\) and \(H_{n}=\{\sigma\in S_{2n}:\sigma^{2}(i)=i,\sigma(i)\neq i,\quad\forall i\in E\}\). In other words, the virtual spectrum of \(L\) is given by \(\operatorname{Im}(\varphi_{n}\big{|}_{L,\tau})\)._

**Theorem 5.1**.: _For a linkoid, \(L\), its virtual spectrum, \(\operatorname{spec}(L)\), is an invariant of \(L\)._

Proof.: By Definition 5.1, each element in \(\operatorname{spec}(L)\) is of the form, \((L,\sigma)_{v}\), where \(\sigma\) is a closure permutation. For any \(\sigma\), the virtual closure, \((L,\sigma)_{v}\), is a linkoid invariant by Propositions 3.2 and 3.3. Hence, \(\operatorname{spec}(L)\) is an invariant of \(L\).

**Corollary 5.1**.: _If two linkoid diagrams, \(L_{1}\) and \(L_{2}\), are equivalent up to Reidemeister moves, then \(\operatorname{spec}(L_{1})=\operatorname{spec}(L_{2})\)._

Proof.: It follows from Theorem 5.1.

**Corollary 5.2**.: _Let \(L_{1}\) and \(L_{2}\) be two linkoid diagrams. If \(\operatorname{spec}(L_{1})\neq\operatorname{spec}(L_{2})\), then \(L_{1}\) and \(L_{2}\) are distinct._

Proof.: It follows from Theorem 5.1.
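The size of a virtual spectrum can be probed combinatorially. As a sketch (ours), combining property ii of the virtual closure map with Corollary 3.1: every virtual closure of the trivial linkoid with \(n\) components is a trivial link, determined by its number of components, so enumerating closure permutations and counting segment cycles exhibits the whole spectrum, in agreement with Remark 5.2 below.

```python
def pairings(endpoints):
    """All fixed-point-free involutions (closure permutations) as dicts."""
    if not endpoints:
        yield {}
        return
    first, rest = endpoints[0], endpoints[1:]
    for k, partner in enumerate(rest):
        for sub in pairings(rest[:k] + rest[k + 1:]):
            sub.update({first: partner, partner: first})
            yield sub

def n_cycles(n, tau, sigma):
    """Number of segment cycles = number of components of the closure
    (Corollary 3.1), computed as orbits of <tau, sigma> on {1,...,2n}."""
    seen, count = set(), 0
    for start in range(1, 2 * n + 1):
        if start in seen:
            continue
        orbit, frontier = {start}, [start]
        while frontier:
            x = frontier.pop()
            for p in (tau, sigma):
                if p[x] not in orbit:
                    orbit.add(p[x])
                    frontier.append(p[x])
        seen |= orbit
        count += 1
    return count

n = 3
tau = {i: i + 1 if i % 2 else i - 1 for i in range(1, 2 * n + 1)}  # (1 2)(3 4)(5 6)
spectrum = {n_cycles(n, tau, s) for s in pairings(list(range(1, 2 * n + 1)))}
print(spectrum)   # {1, 2, 3}: the trivial links with 1, ..., n components,
                  # so |spec(L)| = n for the trivial linkoid
```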
**Remark 5.1**.: _Two non-equivalent linkoids will have the same virtual spectrum only if all the closure permutations give the same virtual knots/links._

**Example 5.1**.: _There exist pairs of non-equivalent linkoids with the same number of components and the same virtual spectrum._

**Remark 5.2**.: _Property ii of the virtual closure map implies that the virtual spectrum of a trivial linkoid with \(n\) components has size \(n\)._

### New Invariants of Linkoids from the Virtual Spectrum

New invariants of linkoids can be defined, independent of any particular closure permutation, by means of the virtual spectrum of linkoids.

**Definition 5.2**.: _(Average spectral \(\mathcal{F}\)-invariant of a linkoid) Let \(\mathcal{F}\) be an invariant of virtual links, \(L\) be a linkoid and \(\mathcal{F}(\operatorname{spec}(L))\) be the collection of values of \(\mathcal{F}\) over all virtual links in \(\operatorname{spec}(L)\). The average spectral \(\mathcal{F}\)-invariant of \(L\) is defined as the average of \(\mathcal{F}(\operatorname{spec}(L))\), namely:_

\[\operatorname{avg}_{\mathcal{F}}(L):=\overline{\mathcal{F}(\operatorname{spec}(L))}=\frac{1}{|\operatorname{spec}(L)|}\sum_{\ell\in\operatorname{spec}(L)}\mathcal{F}(\ell).\]

The invariance of the average spectral \(\mathcal{F}\)-invariant of a linkoid is given by the following theorem:

**Theorem 5.2**.: _Let \(L\) be a linkoid, \(\operatorname{spec}(L)\) be its virtual spectrum and let \(\mathcal{F}\) be an invariant of virtual links. Let \(\mathcal{F}(\operatorname{spec}(L))\) be the collection of values of \(\mathcal{F}\) over all virtual links in \(\operatorname{spec}(L)\). Then, the set, \(\mathcal{F}(\operatorname{spec}(L))\), and the measure, \(\operatorname{avg}_{\mathcal{F}}(L)\), are invariants of the linkoid, \(L\)._

Proof.: Since \(\mathcal{F}\) is an invariant of virtual links, there exists an invariant of linkoids, \(\mathcal{F}_{\sigma}\), which is defined as \(\mathcal{F}\) acting on the closure of the linkoid with respect to a closure permutation, \(\sigma\) (by Theorem 4.1). Therefore, the elements of the set, \(\mathcal{F}(\mathrm{spec}(L))\), are of the form, \(\mathcal{F}_{\sigma}(L)\), where \(\sigma\) is a closure permutation. Each \(\mathcal{F}_{\sigma}\) is an invariant of \(L\), hence their collection, \(\mathcal{F}(\mathrm{spec}(L))\), and the measure, \(\mathrm{avg}_{\mathcal{F}}(L)\), are also invariants of \(L\).

In this way, we can define new invariants of linkoids such as the average spectral height, average spectral genus, average spectral Jones polynomial and average spectral arrow polynomial. The average spectral height and genus of linkoids are rational numbers, whereas the average spectral Jones polynomial and arrow polynomial of linkoids are polynomials with rational coefficients.

**Example 5.2**.: _For a 2-component linkoid, \(L\), with \(\tau=(1\quad 2)(3\quad 4)\), the virtual spectrum is given by accounting for all \(\sigma\in\{(1\quad 2)(3\quad 4),(1\quad 3)(2\quad 4),(1\quad 4)(2\quad 3)\}\), i.e.
\(\operatorname{spec}(L)\) consists of the three corresponding virtual closures, \((L_{\tau},\sigma)_{v}\), one for each of the three closure permutations above._
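As a worked instance (added here, combining property ii of the virtual closure map with the fact that the normalized bracket of the trivial link with \(k\) components is \(d^{k-1}\)): the spectrum of the trivial linkoid \(T_{n}\) with \(n\) components consists of the trivial links with \(1,\dots,n\) components, so its average spectral Jones polynomial is

\[\operatorname{avg}_{f}(T_{n})=\frac{1}{n}\sum_{k=1}^{n}d^{\,k-1}=\frac{1}{n}\left(1+d+\cdots+d^{\,n-1}\right),\qquad d=-A^{2}-A^{-2},\]

a polynomial with rational coefficients, as noted above.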
**Definition 5.3**.: _(Minimal spectral \(\mathcal{F}\)-invariant of a linkoid) Let \(\mathcal{F}\) be an invariant of virtual links which takes values in \(\mathbb{R}\) (such as height, genus and odd writhe). The minimal spectral \(\mathcal{F}\)-invariant of a linkoid, \(L\), is defined as the minimum of the values of the \(\mathcal{F}\)-invariant over all elements in the virtual spectrum, \(\operatorname{spec}(L)\):_

\[\min_{\mathcal{F}}(L):=\min\left\{\mathcal{F}(\ell):\ell\in\operatorname{spec}(L)\right\}.\]

**Remark 5.3**.: _The minimal spectral height and genus of a linkoid can be defined via Definition 5.3 and take values in \(\mathbb{Z}\), whereas the average spectral height and genus of a linkoid take values in \(\mathbb{Q}\)._

## 6 Measures of entanglement of Open Curves in 3-space via Virtual Closure

In this section, it is shown how topological invariants from virtual knot theory can be extended to define measures of entanglement of open curves in 3-space. For a collection of open or closed curves in 3-space in general position, any (regular) projection of these curves gives rise to a linkoid diagram. In fact, any such projection will be generic with probability one. Based on the directions of projection, given as unit vectors in \(S^{2}\), it is possible to get non-equivalent linkoid diagrams for a collection of open curves in 3-space. To fully capture the entanglement complexity of a collection of open curves in 3-space, it is necessary to account for all possible directions of projection [6, 5, 4, 29]. In the following discussion, let \(\mathcal{L}\) be a collection of \(n\) open curves in 3-space, with endpoints denoted by labels from the set \(E=\{1,2,\cdots,2n\}\), without repetition.
A _strand permutation_ and a _closure permutation_ of \(\mathcal{L}\) can be defined in the same way as Definitions 3.2 and 3.3, namely:

**Definition 6.1**.: _(Strand permutation of collections of open curves in 3-space) The strand permutation of \(\mathcal{L}\) is defined to be the element, \(\tau\in S_{2n}\), such that, for any \(i\in E\), \(i\) and \(\tau(i)\) are labels for the two endpoints of the same component of \(\mathcal{L}\)._

**Definition 6.2**.: _(Closure permutation of collections of open curves in 3-space) A closure permutation of \(\mathcal{L}_{\tau}\), where \(\tau\) denotes a strand permutation of \(\mathcal{L}\), is any element, \(\sigma\in S_{2n}\), such that \(\sigma^{2}=\operatorname{id}\) and \(\sigma(i)\neq i\), for all \(i\in E\)._

Notice that, for every \(\vec{\xi}\in S^{2}\), the projection of \(\mathcal{L}\) to the plane with normal vector \(\vec{\xi}\) is a linkoid diagram, \(\mathcal{L}_{\vec{\xi}}\). A choice of strand permutation and closure permutation of a collection of open curves, \(\mathcal{L}\), canonically induces a strand permutation and closure permutation on a projection, \(\mathcal{L}_{\vec{\xi}}\), for all \(\vec{\xi}\in S^{2}\). Therefore, \(\left(\mathcal{L}_{\vec{\xi}}\right)_{\tau}\) and \(\left(\left(\mathcal{L}_{\vec{\xi}}\right)_{\tau},\sigma\right)\) are well-defined for all \(\vec{\xi}\in S^{2}\), where \(\tau\) and \(\sigma\) are the given strand permutation and closure permutation of \(\mathcal{L}\). The notion of virtual closure can be extended to open curves in 3-space by introducing virtual closure arcs in 3-space according to some closure permutation, as defined before (see Figure 14).

**Definition 6.3**.: _(Virtual arc in 3-space) A virtual arc in 3-space is an arc in 3-space such that its projection in any direction involves only virtual crossings._

**Definition 6.4**.: _(Virtual closure of collections of open curves in 3-space with respect to \(\sigma\)) Let \(\mathcal{L}\) be a collection of open curves in 3-space with strand permutation, \(\tau\). The virtual closure of \(\mathcal{L}_{\tau}\), with respect to \(\sigma\), is denoted by \(\left(\mathcal{L}_{\tau},\sigma\right)_{v}\) and is defined to be the knot/link in 3-space with virtual closure arcs between \(i\) and \(\sigma(i)\), for all \(i\in E\)._

**Corollary 6.1**.: _Let \(\mathcal{L}\) denote a collection of open curves in 3-space and let \(\tau\) and \(\sigma\) denote its strand permutation and closure permutation, respectively. Let \(\vec{\xi}\in S^{2}\), let \(\left(\left(\mathcal{L}_{\tau},\sigma\right)_{v}\right)_{\vec{\xi}}\) be the projection, with respect to \(\vec{\xi}\), of the virtual closure of \(\mathcal{L}_{\tau}\) with respect to \(\sigma\), and let \(\left(\left(\mathcal{L}_{\vec{\xi}}\right)_{\tau},\sigma\right)_{v}\) be the virtual closure of the linkoid diagram, \(\left(\mathcal{L}_{\vec{\xi}}\right)_{\tau}\), with respect to \(\sigma\). Then \(\left(\left(\mathcal{L}_{\tau},\sigma\right)_{v}\right)_{\vec{\xi}}=\left(\left(\mathcal{L}_{\vec{\xi}}\right)_{\tau},\sigma\right)_{v}\)._

Proof.: By Definition 6.4, all the virtual crossings in \(\left((\mathcal{L}_{\tau},\sigma)_{v}\right)_{\vec{\xi}}\) are contributed by the projections of the virtual closure arcs in \(\left(\mathcal{L}_{\tau},\sigma\right)_{v}\).
The linkoid diagram, \(\left(\mathcal{L}_{\vec{\xi}}\right)_{\tau}\), is obtained if the arcs between \(i\) and \(\sigma(i)\) are removed from the diagram, \(\left(\left(\mathcal{L}_{\tau},\sigma\right)_{v}\right)_{\vec{\xi}}\), for all \(i\in\{1,2,\ldots,2n\}\). Hence, \(\left(\left(\mathcal{L}_{\tau},\sigma\right)_{v}\right)_{\vec{\xi}}=\left(\left(\mathcal{L}_{\vec{\xi}}\right)_{\tau},\sigma\right)_{v}\).

The following corollary asserts that the virtual closure of a collection of open curves in 3-space is independent of the embedding of the virtual arcs in 3-space.

**Corollary 6.2**.: _Let \(\mathcal{L}\) denote a collection of open curves in 3-space and let \(\tau\) and \(\sigma\) denote its strand permutation and closure permutation, respectively. Let \(X\) and \(Y\) denote two representations of \(\left(\mathcal{L}_{\tau},\sigma\right)_{v}\), corresponding to different embeddings of the virtual arcs. Then, \(X_{\vec{\xi}}=Y_{\vec{\xi}}\) for all \(\vec{\xi}\in S^{2}\)._

Proof.: It follows from Corollary 6.1.

A projection of a collection of open curves in 3-space can give only finitely many knotoids/linkoids [29, 6]. Similarly, for any closure permutation, the virtual closure of a collection of open curves in 3-space can be associated with a finite family of virtual knot diagrams (which arise from projections of the virtual closure).

**Corollary 6.3**.: _Let \(\mathcal{L}\) be a collection of open curves in 3-space. Given a closure permutation, \(\sigma\), the number of distinct virtual knots associated with the virtual closure of \(\mathcal{L}\) is less than or equal to the number of distinct knotoids/linkoids associated with \(\mathcal{L}\)._

Proof.: A projection of \(\mathcal{L}\) can give only finitely many knotoids/linkoids [29, 6]. Hence, a projection of the virtual closure of \(\mathcal{L}\), with respect to \(\sigma\), can give only finitely many virtual knots/links. The claim follows from the fact that the virtual closure map on linkoids is not injective (see Theorem 3.1).

**Remark 6.1**.: _By Definition 6.4, a new notion of virtual links in 3-space arises, i.e., links in 3-space containing one or more virtual arcs. Let us denote virtual links in 3-space by \(\mathcal{V}^{3D}\). For every \(n\in\mathbb{N}\), let \(\mathcal{A}_{n}^{3D}\) be the set of all collections of open curves in 3-space with \(n\) components and let \(H_{n}=\{\sigma\in S_{2n}:\sigma^{2}(i)=i,\sigma(i)\neq i,\quad\forall i\in E\}\), where \(E=\{1,2,\ldots,2n\}\). Similar to Definition 3.8, we can define a closure map relating open curves in 3-space to virtual links in 3-space as follows:_

\[\phi:\quad\{(\mathcal{L},\tau,\sigma)\mid(\mathcal{L},\tau,\sigma)\in\mathcal{A}_{n}^{3D}\times H_{n}\times H_{n}\text{ for some }n\in\mathbb{N}\}\longrightarrow\mathcal{V}^{3D}\]
\[(\mathcal{L},\tau,\sigma)\mapsto(\mathcal{L}_{\tau},\sigma)_{v}\quad,\]

_where \(\tau,\sigma\in H_{n}\), \(\mathcal{L}\in\mathcal{A}_{n}^{3D}\) and \((\mathcal{L}_{\tau},\sigma)_{v}\) is the virtual closure of \(\mathcal{L}_{\tau}\) with respect to \(\sigma\).
The map \(\phi\) has properties analogous to properties i-iv of the virtual closure map \(\varphi\) of linkoids (see Definition 3.8)._

Figure 14: Projections of the virtual closure of open curves in 3-space giving rise to virtual knot diagrams.

**Remark 6.2**.: _Through virtual closure, open curves in 3-space can be associated to a collection of embeddings of links in thickened surfaces of arbitrary genus._

A collection of new measures of entanglement of collections of open curves in 3-space can be defined by using invariants of virtual knots as follows:

**Definition 6.5**.: _(New measures of entanglement of collections of open curves in 3-space) Let \(\mathcal{L}\) be a collection of open curves in 3-space and let \(\mathcal{F}\) be any invariant of virtual links. Given a closure permutation \(\sigma\), a measure of entanglement of \(\mathcal{L}_{\tau}\) is,_

\[\mathcal{F}_{\sigma}(\mathcal{L}_{\tau})\,:=\,\frac{1}{4\pi}\int_{\vec{\xi}\in S^{2}}\mathcal{F}\left((\mathcal{L}_{\tau},\sigma)_{\vec{\xi}}\right)dS\,, \tag{13}\]

_where \((\mathcal{L}_{\tau},\sigma)_{\vec{\xi}}\) denotes the projection along \(\vec{\xi}\) of the virtual closure of \(\mathcal{L}_{\tau}\), with respect to \(\sigma\)._

Note that the integral in Equation 13 is taken over all vectors \(\vec{\xi}\in S^{2}\) except a set of measure zero (corresponding to the irregular projections). For collections of open curves in 3-space, \(\mathcal{F}_{\sigma}\) has the following properties:

1. \(\mathcal{F}_{\sigma}\) does not depend on any particular projection of the collection of open or closed curves.
2. For a collection of open curves, \(\mathcal{F}_{\sigma}\) is not an invariant of a corresponding/approximating link, linkoid or virtual link.
3. \(\mathcal{F}_{\sigma}\) is not a topological invariant, but it is a continuous function of the curve coordinates (see Corollary 6.4).
4. For a collection of closed curves in 3-space (a link), \(\mathcal{F}_{\sigma}\) coincides with a link invariant and it can be computed from a single projection, i.e. \(\mathcal{F}_{\sigma}\left(\mathcal{L}_{\tau}\right)=\mathcal{F}_{\sigma}\left(\left(\mathcal{L}_{\vec{\xi}}\right)_{\tau}\right)=\mathcal{F}\left(\left(\mathcal{L}_{\tau},\sigma\right)_{\vec{\xi}}\right)\), where \(\vec{\xi}\in S^{2}\) is any projection vector.
5. As the endpoints of a collection of open curves in 3-space tend to coincide, according to \(\sigma\), the value of \(\mathcal{F}_{\sigma}\) tends to the value of \(\mathcal{F}\) on the corresponding classical link in 3-space (see Corollary 6.4).

**Corollary 6.4**.: _Let \(\mathcal{L}\) denote a collection of open curves in 3-space. Then, \(\mathcal{F}_{\sigma}\) is a continuous function of the curve coordinates of \(\mathcal{L}_{\tau}\). As the endpoints of \(\mathcal{L}_{\tau}\) tend to coincide according to \(\sigma\), to form a closed link in 3-space, \(\mathcal{F}_{\sigma}\) tends to the value of \(\mathcal{F}\) on the corresponding classical link in 3-space._

Proof.: Let us approximate \(\mathcal{L}\) by a set of polygonal curves of \(n\) edges each, namely \(\mathcal{L}^{(n)}\). Then,

\[\mathcal{F}_{\sigma}\left(\mathcal{L}^{(n)}{}_{\tau}\right)=\sum_{i=1}^{k}p_{i}\mathcal{F}_{\sigma}\left(\mathsf{L}_{i}^{(n)}{}_{\tau}\right), \tag{14}\]

where \(\mathsf{L}_{i}^{(n)}{}_{\tau}\) for \(i=1,\ldots,k\) are the possible linkoids that can occur in all projections of \(\mathcal{L}^{(n)}{}_{\tau}\) and \(p_{i}\) are the corresponding geometric probabilities.
The geometric probability \(p_{i}\) can be expressed as \(p_{i}=\frac{2A_{0}}{4\pi}\), where \(A_{0}\) is the area on the sphere corresponding to unit vectors such that the projection of \(\mathcal{L}^{(n)}{}_{\tau}\) along such vectors results in the linkoid \(\mathsf{L}_{i}^{(n)}{}_{\tau}\). \(A_{0}\) is the area of the quadrangle bounded by great circles defined by the edges and vertices of the polygonal curves in \(\mathcal{L}^{(n)}{}_{\tau}\), which is a continuous function of the coordinates of \(\mathcal{L}^{(n)}{}_{\tau}\) (see proof of Lemma 3.1 in [4]). The result follows as \(n\) goes to infinity. As the endpoints tend to coincide, the geometric probability that a projection gives a pure linkoid (or a pure virtual link) goes to 0, while the geometric probability that it gives the link-type linkoid goes to 1. Thus, the value of \(\mathcal{F}_{\sigma}\) on \(\mathcal{L}_{\tau}\) tends to the value of \(\mathcal{F}\) on the corresponding classical link in 3-space.

For example, the _height_, _genus_ and _odd-writhe_ for a collection of open curves in 3-space can be defined as follows:

**Definition 6.6**.: _(Height of a collection of open curves in 3-space with respect to \(\sigma\)) Let \(\mathcal{L}\) denote a collection of open curves in 3-space. Given a closure permutation \(\sigma\), the height of a collection of open curves in 3-space, \(\mathcal{L}_{\tau}\), is defined as_

\[h_{\sigma}(\mathcal{L}_{\tau})\,:=\,\frac{1}{4\pi}\int_{\vec{\xi}\in S^{2}}h\left((\mathcal{L}_{\tau},\sigma)_{\vec{\xi}}\right)dS\,, \tag{15}\]

_where \((\mathcal{L}_{\tau},\sigma)_{\vec{\xi}}\) denotes a projection of the virtual closure of \(\mathcal{L}_{\tau}\) with respect to \(\sigma\)._

**Definition 6.7**.: _(Genus of a collection of open curves in 3-space with respect to \(\sigma\)) Let \(\mathcal{L}\) denote a collection of open curves in 3-space. Given a closure permutation \(\sigma\), the genus of a collection of open curves in 3-space, \(\mathcal{L}_{\tau}\), is defined as_

\[g_{\sigma}(\mathcal{L}_{\tau})\,:=\,\frac{1}{4\pi}\int_{\vec{\xi}\in S^{2}}g\left((\mathcal{L}_{\tau},\sigma)_{\vec{\xi}}\right)dS\,, \tag{16}\]

_where \((\mathcal{L}_{\tau},\sigma)_{\vec{\xi}}\) denotes a projection of the virtual closure of \(\mathcal{L}_{\tau}\) with respect to \(\sigma\)._

**Definition 6.8**.: _(Odd writhe of a collection of open curves in 3-space with respect to \(\sigma\)) Let \(\mathcal{L}\) denote a collection of open curves in 3-space and let \(\sigma\) be a closure permutation such that \((\mathcal{L}_{\tau},\sigma)_{v}\) has only one component. Then, the odd-writhe of a collection of open curves in 3-space is defined as_

\[\mathsf{owr}_{\sigma}(\mathcal{L}_{\tau})\,:=\,\frac{1}{4\pi}\int_{\vec{\xi}\in S^{2}}\mathsf{owr}\left((\mathcal{L}_{\tau},\sigma)_{\vec{\xi}}\right)dS\,, \tag{17}\]

_where \((\mathcal{L}_{\tau},\sigma)_{\vec{\xi}}\) denotes a projection of the virtual closure of \(\mathcal{L}_{\tau}\) with respect to \(\sigma\)._

Similarly, we can define the Jones polynomial and the arrow polynomial of collections of open curves in 3-space with respect to a given closure permutation. The affine-index polynomial of collections of open curves in 3-space is also well-defined, provided the closure permutation in question gives rise to a virtual link in 3-space with only one component.
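Since the integrals in Eqs. (13) and (15)-(17) run over almost all of \(S^{2}\), they can be estimated by uniform sampling of projection directions. The sketch below is a Monte-Carlo version of Eq. (13); `project_and_close(L, sigma, xi)`, standing for the (nontrivial, here hypothetical) computation of the virtual knot/link diagram obtained by projecting the virtual closure along \(\vec{\xi}\), and the invariant `F` (height, genus, odd writhe, ...) are placeholders.

```python
import numpy as np

def random_directions(m, rng):
    """m unit vectors distributed uniformly on the sphere S^2."""
    v = rng.normal(size=(m, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def F_sigma(L, sigma, F, project_and_close, m=10_000, seed=0):
    """Monte-Carlo estimate of Eq. (13): the mean of F over projections of
    the virtual closure (L_tau, sigma)_v along random directions xi.
    Irregular projections form a set of measure zero and are ignored."""
    rng = np.random.default_rng(seed)
    values = [F(project_and_close(L, sigma, xi))
              for xi in random_directions(m, rng)]
    return float(np.mean(values))
```

The same estimator specialises to the height, genus and odd writhe of Definitions 6.6-6.8 by an appropriate choice of `F`.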
Notice that for open curves in 3-space, the height, genus and odd-writhe are real numbers that are continuous functions of the curve coordinates, and they tend to the integer topological invariant as the endpoints coincide. The Jones polynomial, arrow polynomial and affine-index polynomial for open curves in 3-space have real-valued coefficients which are continuous functions of the curve coordinates. As the endpoints of the curves tend to coincide, these polynomials tend to the corresponding polynomial-type invariants of the resultant closed knot/link in 3-space.

**Remark 6.3**.: _The virtual closure of open curves in 3-space can be useful in the context of physical systems that employ periodic boundary conditions (PBC) [30, 1, 31, 32]. The generating cell, \(C\), of a PBC system is in fact an \(n\)-tangle, \(T\), possibly involving endpoints inside the tangle. Here \(n\in\mathbb{N}\) determines the number of arcs inside \(C\), and the periodic conditions canonically determine a strand permutation, \(\tau_{pbc}\), for \(T\). The virtual closure of \(T\), denoted \((T_{\tau_{pbc}},\tau_{pbc})\), can be formed by introducing closure arcs to the endpoints in \(T\), lying on the boundary surface of \(C\), with respect to \(\tau_{pbc}\). In any projection of \((T_{\tau_{pbc}},\tau_{pbc})\), the arcs inside \(C\) contribute classical crossings to the diagram, whereas the arcs (virtual closure arcs) lying outside \(C\) contribute virtual crossings to the diagram (see Figures 15 and 16). Thus, to any projection of a PBC box we can assign a virtual link. The virtual link, however, depends on the direction of projection of the box. By Definition 6.5, novel measures of entanglement for systems employing PBC can be defined in terms of the virtual closure (respecting PBC) of the arcs within a base cell._

Figure 15: Virtual closure of the arcs in a base cell of a PBC system in 3D and one of its projections (virtual knot diagram).

Figure 16: The virtual closure of the generating cell of two different systems employing 2 periodic boundary conditions.

### The Virtual Spectrum of collections of Open Curves in 3-space

In this section, the virtual spectrum and the weighted virtual spectrum of collections of open curves in 3-space are discussed, in order to define a new measure of entanglement of open curves in 3-space that is independent of any virtual closure.

**Definition 6.9**.: _(Virtual Spectrum of open curves in 3-space) Let \(\mathcal{L}\) be a collection of open curves in 3-space with \(n\) components and strand permutation, \(\tau\), on the set of endpoints, \(E\). The virtual spectrum of \(\mathcal{L}_{\tau}\) is defined to be the set, \(\mathrm{spec}(\mathcal{L}_{\tau}):=\{(\mathcal{L}_{\tau},\sigma)_{v}:\sigma\in H_{n}\}\), where \(H_{n}\) is the set of all closure permutations on \(E\)._

**Definition 6.10**.: _(Weighted Virtual Spectrum of open curves in 3-space) Let \(\mathcal{L}\) be a collection of open curves in 3-space with \(n\) components and strand permutation, \(\tau\), on the set of endpoints, \(E\).
The weighted virtual spectrum of \(\mathcal{L}_{\tau}\) is defined to be the set_

\[\mathrm{spec}(\mathcal{L}_{\tau}):=\bigg\{(w_{\sigma},(\mathcal{L}_{\tau},\sigma)_{v}):\sigma\in H_{n},\quad w_{\sigma}=\frac{1}{2n}\sum_{i\in E}\mathrm{d}(i,\sigma(i))\bigg\},\]

_where \(H_{n}\) is the set of all closure permutations on \(E\) and \(\mathrm{d}(i,\sigma(i))\) denotes the Euclidean distance between the points \(i\) and \(\sigma(i)\), for all \(i\in E\)._

Using the weighted virtual spectrum, we define a new measure of entanglement for collections of open curves in 3-space:

**Definition 6.11**.: _(Spectral measure of open curves in 3-space) Let \(\mathcal{L}\) be a collection of open curves in 3-space with \(n\) components and strand permutation, \(\tau\), on the set of endpoints, \(E\). Let \(\mathcal{F}\) be an invariant of virtual knots/links. The \(\mathcal{F}\)-spectral measure of \(\mathcal{L}_{\tau}\) is defined as:_

\[\mathcal{F}(\mathcal{L}_{\tau}):=\sum_{\sigma\in H_{n}}\frac{w_{min}}{w_{\sigma}}\mathcal{F}_{\sigma}\left((\mathcal{L}_{\tau},\sigma)_{v}\right),\]

_where \(w_{\sigma}=\frac{1}{2n}\sum_{i\in E}\mathrm{d}(i,\sigma(i))\) and \(w_{min}=\min\{w_{\sigma}\mid\sigma\in H_{n}\}\)._

Let \(\mathcal{F}\) be an invariant of virtual knots/links. For collections of open curves in 3-space, the \(\mathcal{F}\)-spectral measure has the following properties:

1. It is independent of any choice of projection of the collection of open curves in 3-space.
2. It is not an invariant of a corresponding/approximating classical or virtual link.
3. It is not a topological invariant, but it is a continuous function of the curve coordinates.
4. As the endpoints of a collection of open curves in 3-space tend to coincide, according to a particular \(\sigma\), its \(\mathcal{F}\)-spectral measure tends to the \(\mathcal{F}\)-invariant of the corresponding classical link in 3-space (see Theorem 6.1).

In particular, we can define the spectral height, the spectral genus, the spectral Jones polynomial and the spectral arrow polynomial for collections of open curves in 3-space. The spectral height and spectral genus take values in \(\mathbb{R}\), whereas the spectral Jones polynomial and the spectral arrow polynomial are polynomials with coefficients in \(\mathbb{R}\).

**Theorem 6.1**.: _Let \(\mathcal{L}\) be a collection of open curves in 3-space with \(n\) components and strand permutation, \(\tau\), on the set of endpoints, \(E\). Let \(\mathcal{F}\) be an invariant of virtual links. As the endpoints tend to coincide according to a closure permutation, \(\sigma\), the \(\mathcal{F}\)-spectral measure tends to the \(\mathcal{F}\)-invariant of the resultant closed link in 3-space._

Proof.: As the endpoints tend to coincide, \(w_{min}\to 0\) and \(\mathcal{F}_{\sigma}\left((\mathcal{L}_{\tau},\sigma)_{v}\right)\) is the only surviving term in \(\mathcal{F}(\mathcal{L}_{\tau})\). The claim follows from Corollary 6.4.
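A corresponding sketch of Definition 6.11, reusing `closure_permutations` and `F_sigma` from the previous sketches; `coords` (a map from endpoint labels to 3D positions) and `F_sigma_of` are assumed inputs.

```python
import numpy as np

def weight(sigma, coords):
    """w_sigma = (1/2n) * sum over all endpoints i of d(i, sigma(i))."""
    E = list(sigma)
    return sum(np.linalg.norm(coords[i] - coords[sigma[i]]) for i in E) / len(E)

def spectral_measure(coords, sigmas, F_sigma_of):
    """F-spectral measure of Definition 6.11.  F_sigma_of(sigma) should
    return F_sigma(L_tau), e.g. the Monte-Carlo estimate of Eq. (13)."""
    weights = [weight(s, coords) for s in sigmas]
    w_min = min(weights)
    return sum((w_min / w) * F_sigma_of(s) for w, s in zip(weights, sigmas))
```

As the endpoints coincide according to some \(\sigma_{0}\), one has \(w_{\sigma_{0}}=w_{min}\to 0\), so the \(\sigma_{0}\) term keeps weight 1 while all other terms are suppressed, which is the content of Theorem 6.1.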
## 7 Conclusions

In this paper, new invariants of linkoids are introduced via a mapping from linkoids to virtual links. This leads to a new collection of strong invariants of linkoids that are independent of any given virtual closure, and to novel measures of entanglement of open curves in 3-space. These measures are continuous functions of the curve coordinates and tend to the corresponding classical invariants when the endpoints of the curves tend to coincide. The mapping from linkoids to virtual links gives us access to many ideas and invariants of virtual links that we can now apply to studying linkoids and open curves in 3-space. Thus, the program we follow here gives new applications of virtual knot theory to classical and physical knotting. The association of virtual knots to linkoids and open curves in 3-space can be interpreted as a way to construct embedded curves in thickened surfaces that reflect these topological structures. We have concentrated on making new invariants of linkoids and open curves, but the geometry of the association with curves in thickened surfaces remains to be more fully investigated in the future.

## 8 Acknowledgements

Kasturi Barkataki and Eleni Panagiotou were supported by NSF DMS-1913180 and NSF CAREER 2047587.
2304.13962
Influence of oxygen on electronic correlation and transport in iron in the outer Earth's core
Knowing the transport properties of iron under realistic conditions present in the Earth's core is essential for the geophysical modeling of Earth's magnetic field generation. Besides extreme pressures and temperatures, transport may also be importantly influenced by the presence of light elements. Using a combination of molecular dynamics, density functional theory, and dynamical mean-field theory methods we investigate how oxygen impurities influence the electronic correlations and transport in the liquid outer Earth's core. We consider a case with an oxygen content of ~10 atomic%, a value that is believed to be close to the composition of the core. We find that the electronic correlations are enhanced but their effect on conductivities is moderate (compared to pure Fe, electrical conductivity drops by 10% and thermal conductivity by 18%). The effect of electron-electron scattering alone, while not large, is comparable to the effects of the compositional disorder. We reveal the mechanism behind the larger suppression of the thermal conductivity and the associated reduction of the Lorenz ratio, and discuss its geophysical significance.
German G. Blesio, Leonid V. Pourovskii, Markus Aichhorn, Monica Pozzo, Dario Alfè, Jernej Mravlje
2023-04-27T05:48:08Z
http://arxiv.org/abs/2304.13962v1
# Influence of oxygen on electronic correlation and transport in iron in the outer Earth's core

###### Abstract

Knowing the transport properties of iron under realistic conditions present in the Earth's core is essential for the geophysical modeling of Earth's magnetic field generation. Besides extreme pressures and temperatures, transport may also be importantly influenced by the presence of light elements. Using a combination of molecular dynamics, density functional theory, and dynamical mean-field theory methods we investigate how oxygen impurities influence the electronic correlations and transport in the liquid outer Earth's core. We consider a case with an oxygen content of \(\sim 10\) atomic%, a value that is believed to be close to the composition of the core. We find that the electronic correlations are enhanced but their effect on conductivities is moderate (compared to pure Fe, electrical conductivity drops by 10% and thermal conductivity by 18%). The effect of electron-electron scattering alone, while not large, is comparable to the effects of the compositional disorder. We reveal the mechanism behind the larger suppression of the thermal conductivity and the associated reduction of the Lorenz ratio, and discuss its geophysical significance.

## I Introduction

The Earth's magnetic field is generated by a self-excited dynamo that is driven by convection in the outer core. The thermal conductivity of iron, which determines the amount of heat flow available for convection, and the value of the electrical conductivity, which determines the dissipation of the current, are crucial inputs for geophysical models of this geodynamo mechanism. Direct measurements of transport under the extreme pressures and temperatures that are relevant to Earth's core are challenging [1; 2], as one must ensure homogeneous temperature and carefully control the geometry of the samples [3]. One can access transport properties also from first-principles calculations based on the molecular dynamics (MD)-density functional theory (DFT) method [4; 5; 6]. These calculations have shown that the electrical and thermal conductivities have values that are significantly (2-3 times) higher [5; 6] than earlier established estimates [7; 8]. These earlier estimates were based on extrapolations that neglected the effects of resistivity saturation [9; 10; 11]. The higher values of thermal conductivity lead to a different geophysical picture, with an inner core that is younger (\(<1\) billion years, whereas magnetism is known to exist for at least 3.4 billion years [12; 13]), and less thermal convective energy to drive the geodynamo, which is known as "the new core paradox" [14]. Namely, less thermal energy implies that convection must be helped by the chemical convection driven by the exsolution of lighter elements, but this was less active before the formation of the inner core. There has been an ongoing discussion on whether electronic correlations, which have been shown to be significant in iron under Earth's core conditions [15; 16], can cause a breakdown of the Mott-Ioffe-Regel resistivity saturation [9; 17]. This debate revolves around whether these correlations are strong enough to reduce conductivity values to previously established levels [18; 19; 20], as suggested in a pioneering work (later retracted) [18]. Recent findings indicate that electron-electron scattering (EES) plays only a moderate role and represents a small fraction compared to thermal-disorder (electron-phonon) scattering [21; 22].
One important question remains to be addressed: does alloying with lighter elements significantly enhance correlations, and if so, to what extent are conductivities suppressed? The Earth's core contains a sizable contribution of lighter elements, primarily silicon and oxygen, with recent research indicating a high oxygen concentration [23]. The influence of these substitutions has been broadly investigated using MD-DFT [5; 6; 24; 25], but without accounting for correlation effects. Oxygen might enhance electronic correlations by reducing the iron 3d shell occupancy toward half-filling [26]. Indeed, recent theoretical work [27] investigated several ordered FeO structures and found a very strong enhancement of electron-electron scattering (EES). However, the impact of thermal disorder on EES was neglected in that study, and the final effect on thermal conductivity, taking into account both EES and electron-phonon scattering, was not evaluated. Furthermore, the impact of EES on conductivity in liquid iron, which is most relevant for the dynamo mechanism, was not clarified.

In this study, we investigate the transport properties of Fe and Fe-O alloys in their liquid state at inner-core boundary (ICB) and core-mantle boundary (CMB) conditions. We describe the liquid state using MD-DFT and account for electron-electron scattering (EES) using dynamical mean-field theory (DMFT) [20; 28; 29]. Specifically, we focus on the Fe\({}_{0.91}\)O\({}_{0.09}\) composition and observe that its EES rate increases by \(\sim 25\%\) compared to pure iron (Fe), but the conductivities are affected to a lesser extent, with only an \(\approx 10\%\) drop in the electrical conductivity \(\sigma\) and an \(\approx 18\%\) drop in the thermal conductivity \(\kappa\). This change is mainly due to band-structure effects rather than a direct increase in EES. In order to quantify the effects of electronic correlations on transport we compare the calculated DMFT conductivities with those from MD-DFT. We find that the inclusion of EES leads to only a moderate reduction of the electrical and thermal conductivities, by roughly 10% and 20%, respectively. This finding is crucial considering a large body of existing theoretical work that neglects EES [5; 6; 11; 24]. A related study of Fe-Si alloys has shown similar behavior [22]. We discuss why the thermal conductivities are suppressed more than the electrical ones and highlight the geophysical implications of our results.

## Methods

We performed the molecular dynamics calculations using the VASP code [30], using the projector augmented wave method [31; 32] to describe the interactions between the electrons and the ions, and expanded the single-particle orbitals as linear combinations of plane waves (PW), including PW with energies of up to 400 eV. The molecular dynamics simulations were performed by sampling the Brillouin zone using the \(\Gamma\) point only, and the time step was 1 fs. The temperature was controlled using the Nosé [33] thermostat. To compute the DFT electrical and thermal conductivity we used the modified version of VASP by Desjarlais [34]. The DFT+DMFT self-consistent calculations were performed with a local density approximation (LDA) approach in the WIEN2k code [35; 36], using the TRIQS library [37; 38; 39; 40] for the DMFT and transport calculations.
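For orientation, the MD stage described above corresponds to a VASP input of roughly the following form. Only the 400 eV cutoff, the 1 fs time step, the Nosé thermostat, the target temperatures and the Γ-point sampling are stated in the text; the remaining tags and values are illustrative assumptions, not the authors' actual input.

```
# INCAR (sketch) -- NVT molecular dynamics of the liquid at CMB conditions
ENCUT  = 400        # plane-wave cutoff in eV (stated in the text)
IBRION = 0          # molecular dynamics
POTIM  = 1.0        # time step in fs (stated in the text)
TEBEG  = 4400       # target temperature in K (6350 K for the ICB runs)
SMASS  = 0          # Nose thermostat (canonical ensemble); mass illustrative
NSW    = 20000      # number of MD steps -- illustrative run length
ISMEAR = -1         # Fermi smearing of the electronic occupations
SIGMA  = 0.379      # electronic temperature k_B*T in eV at 4400 K
```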
We used a local density-density interaction vertex with interaction parameters \(U=5.0\,\mathrm{eV}\), \(J_{H}=0.93\,\mathrm{eV}\), in agreement with the previous studies on pure iron [19; 21], and solved the impurity problem using the continuous-time hybridization-expansion segment solver [41; 42]. Each calculation was first converged by 25 fully self-consistent DFT+DMFT iterations, where each Monte Carlo run employed \(2\times 10^{10}\) Monte Carlo moves and 200 moves/measurement. Using the converged Kohn-Sham Hamiltonian, 10 additional DMFT cycles were performed with the number of Monte Carlo moves increased to \(10^{11}\). To obtain clean data for the analytic continuation, which we performed using the maximum entropy method, 20 additional runs (with \(2\times 10^{11}\) moves per run) were carried out starting from the same converged value of the DMFT bath Green's function and resetting the random sequence.

We calculated the conductivities within Kubo linear response, neglecting the vertex corrections. The electrical and thermal conductivities read [43; 40]

\[\sigma_{\alpha\alpha^{\prime}}=\frac{e^{2}}{k_{B}T}K^{0}_{\alpha\alpha^{\prime}},\qquad\kappa_{\alpha\alpha^{\prime}}=k_{B}\left[K^{2}_{\alpha\alpha^{\prime}}-\frac{\left(K^{1}_{\alpha\alpha^{\prime}}\right)^{2}}{K^{0}_{\alpha\alpha^{\prime}}}\right], \tag{1}\]

where \(\alpha\) is the direction (\(x\), \(y\) or \(z\)) and \(k_{B}\) the Boltzmann constant. The kinetic coefficients \(K^{n}_{\alpha\alpha^{\prime}}\) are

\[K^{n}_{\alpha\alpha^{\prime}}=2\pi\hbar\int d\omega(\beta\omega)^{n}f(\omega)f(-\omega)\Gamma^{\alpha\alpha^{\prime}}(\omega,\omega), \tag{2}\]

where the factor of 2 in the prefactor is the spin factor, \(f(\omega)\) is the Fermi function, and \(\Gamma^{\alpha\alpha^{\prime}}\) is given by

\[\Gamma^{\alpha\alpha^{\prime}}(\omega,\omega^{\prime})=\frac{1}{V}\sum_{\bf k}\mathrm{Tr}\left(v^{\alpha}_{\bf k}A_{\bf k}(\omega)v^{\alpha^{\prime}}_{\bf k}A_{\bf k}(\omega^{\prime})\right), \tag{3}\]

where \(V\) is the unit-cell volume, \(A_{\bf k}(\omega)\) is the DMFT spectral function at momentum \({\bf k}\), and \(v^{\alpha}_{\bf k}\) is the corresponding band velocity in the direction \(\alpha\). We also define the transport distribution

\[\Gamma(\omega)=\sum_{\alpha}\Gamma^{\alpha\alpha}(\omega,\omega). \tag{4}\]

We also calculated the response at a finite frequency \(\Omega\), which yields the optical electrical and thermal conductivities. These are evaluated by Eq. (1) using the kinetic coefficients evaluated at a finite frequency \(\Omega\),

\[K^{n}_{\alpha\alpha^{\prime}}(\Omega)=2\pi\hbar\int d\omega\,\Gamma^{\alpha\alpha^{\prime}}(\omega+\Omega/2,\omega-\Omega/2)\,(\omega+\Omega/2)^{n}\,\beta^{n-1}\,\frac{f(\omega-\Omega/2)-f(\omega+\Omega/2)}{\Omega}. \tag{5}\]

In the momentum sums we retained 14 momentum points. The electron-phonon-only values were calculated using the Kubo-Greenwood approximation as implemented in VASP [34], using 10 momentum points. We checked that upon increasing the number of momentum points further the results vary by less than 1%.

## Results

We calculate the liquid phase for 67 atoms: all iron (Fe) or with oxygen (Fe\({}_{0.91}\)O\({}_{0.09}\)). We study the liquid at CMB conditions, for a temperature \(T=4400\) K and volume \(8.64\,\mathrm{\AA^{3}/atom}\) (which corresponds to a pressure of 132 GPa), and at ICB conditions, \(T=6350\) K with volume \(7.16\,\mathrm{\AA^{3}/atom}\) (pressure 330 GPa).
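As a guide to how Eqs. (1)-(2) are evaluated in the dc limit, here is a minimal numerical sketch. The tabulated transport distribution `gamma` stands in for the DMFT output of Eq. (4), prefactors (\(2\pi\hbar\), the unit-cell volume, \(e^{2}\)) are absorbed schematically, and the trapezoidal quadrature is an illustrative choice. With an \(\omega\)-independent \(\Gamma\) the routine reproduces the Sommerfeld Lorenz ratio \(L_{0}=\pi^{2}k_{B}^{2}/3e^{2}\simeq 2.44\times 10^{-8}\,\mathrm{W\Omega K^{-2}}\) quoted later in the text.

```python
import numpy as np

kB = 8.617333e-5                      # Boltzmann constant, eV/K

def fermi(w, T):
    return 1.0 / (1.0 + np.exp(w / (kB * T)))

def K_n(n, w, gamma, T):
    """Kinetic coefficient K^n of Eq. (2) in the dc limit; the factor
    2*pi*hbar is absorbed into gamma (schematic units)."""
    beta = 1.0 / (kB * T)
    integrand = (beta * w) ** n * fermi(w, T) * fermi(-w, T) * gamma
    return np.trapz(integrand, w)

def transport(w, gamma, T):
    """sigma, kappa and Lorenz ratio L = kappa/(sigma*T) from Eq. (1),
    with e = 1 (schematic units)."""
    K0, K1, K2 = (K_n(n, w, gamma, T) for n in (0, 1, 2))
    sigma = K0 / (kB * T)
    kappa = kB * (K2 - K1 ** 2 / K0)
    return sigma, kappa, kappa / (sigma * T)

# flat transport distribution -> Sommerfeld value: L / kB^2 = pi^2/3 ~ 3.29
w = np.linspace(-5.0, 5.0, 20001)     # energy grid in eV
print(transport(w, np.ones_like(w), T=4400.0)[2] / kB ** 2)
```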
The calculations were performed for three snapshots separated by 5 ps each, and the transport properties were obtained by averaging over the snapshots and spatial directions.

**Electronic correlations.** Figure 1 displays the imaginary part of the self-energies for a single snapshot in the energy range of \([-1,1]\) eV. Each curve represents a different site/orbital/spin index, and the thick lines indicate an average over all of them. Since the electron-electron scattering rate is \(1/\tau=-2\text{Im}\Sigma(\omega\to 0)\), more negative values of \(\text{Im}\Sigma\) indicate stronger electronic correlations. Our results show that the scattering increases significantly in the oxygen-rich case for both ICB and CMB. The distribution of the curves reveals that not only the Fe sites closest to oxygen are affected: the spread of the data is wider in the oxygen-rich case, and even the self-energies with the smallest magnitudes are enhanced by oxygen. Overall, the average EES, as given by the average \(\text{Im}\Sigma\), increases by 25%.

**Conductivities.** The calculated optical conductivities are shown in Fig. 2 for the case of electrical and thermal currents in the top and bottom panels, respectively. The \(\omega\to 0\) values give the dc transport values. One sees that, quantitatively, the effect of oxygen on transport is somewhat weaker than on the EES. Also shown in that figure are the results of a simplified calculation where one uses \(\langle\Sigma\rangle\) instead of the individual self-energies. One sees a "self-averaging" effect: the results of such a calculation are almost indistinguishable from the full calculation. This also tells us that statistical uncertainties of the individual self-energies will not affect the calculated conductivities. To what extent is the suppression of conductivities in the oxygen-rich case due to the increase of EES documented in Fig. 1? It turns out that, as suggested in earlier work [21], iron under Earth's core conditions is in a thermal-disorder-dominated regime where the changes of EES impact transport weakly.

Figure 1: Imaginary part of the calculated self-energies on the real axis. In the top and middle panels, we show the individual self-energies (blue lines) and their average \(\langle\text{Im}\Sigma\rangle\) (thick black). In the bottom panel, the four average self-energies are shown, at the ICB (full) and CMB (dashed) conditions.

Figure 2: Optical electrical (top) and thermal (bottom) conductivity for pure Fe and Fe\({}_{0.91}\)O\({}_{0.09}\). The ICB and CMB cases are shown on the left and right, respectively. The dashed lines (that overlap closely with the full ones) indicate a simplified calculation using fully (site, orbital, and spin) averaged self-energies.

Figure 3: Thermal conductivity for pure Fe and Fe\({}_{0.91}\)O\({}_{0.09}\) for the ICB (top) and CMB (bottom) cases. We show the results obtained with orbitally and site-resolved self-energies (dots) as well as those calculated using the average self-energy (full line). The latter was first averaged over all sites and orbitals on the Matsubara grid and then analytically continued. The differences between the two are very small. With the dash-dotted line, we show results calculated by exchanging the average self-energy between Fe and Fe\({}_{0.91}\)O\({}_{0.09}\).
In Fig. 3 we demonstrate this by additional calculations in which we compute the conductivity of the Fe\({}_{0.91}\)O\({}_{0.09}\) case using the scattering information from the \(\langle\Sigma\rangle\) corresponding to the pure Fe calculation, and vice versa for the other case. Quite strikingly, these "exchanged" calculations are, at small frequencies, almost indistinguishable from the "non-exchanged" ones. This shows that oxygen affects the results through a structurally induced change in the band dispersions and that the changes in the EES play an insignificant role. This is further demonstrated in Appendix B, where the scattering is artificially increased and only a weak effect on transport is seen.

Fig. 4 shows the calculated values of resistivity (top) and thermal conductivity (bottom) along with data from the literature. Thermal disorder dominates, and perfect crystalline lattices have much higher conductivities. The additional influence of compositional disorder is moderate, and the magnitude of the change is similar to that of including EES on top of the thermal disorder for a given composition.

**Suppression of Lorenz ratio.** Interestingly, EES suppresses the thermal conductivities more than the electrical ones. Figure 5 shows the evolution of the Lorenz number \(L=\kappa/(\sigma T)\) with respect to the strength of EES, which is scaled by the factor \(\alpha\), as described in Appendix B. For pure Fe, the value of \(L\) due to electron-phonon scattering is almost identical to the standard value of \(2.44\cdot 10^{-8}\) W\(\Omega\)/K\({}^{2}\), whereas it is considerably reduced when EES is taken into account. This reduction occurs because inelastic EES affects \(\kappa\) more strongly than \(\sigma\). Specifically, \(\kappa\) is determined by integrating \(\Gamma(\omega)\omega^{2}(-df/d\omega)\), where \(\Gamma(\omega)\) is the transport distribution function, \(f\) is the Fermi function, and \(\omega\) is the frequency. On the other hand, \(\sigma\) is calculated by integrating \(\Gamma(\omega)(-df/d\omega)\), which involves only the derivative of the Fermi function. The conductivity is mostly given by states around \(\omega=0\), while the dominant contribution to the thermal conductivity occurs at finite energies \(1.5T\lesssim|\omega|\lesssim 4T\). The right panel of Figure 5 presents the transport distribution \(\Gamma(\omega)\) for the CMB case, evaluated for the actual EES (full lines) and compared to a calculation where the energy dependence of the scattering is suppressed and the self-energy \(\text{Im}\Sigma\rightarrow\text{const}\) is taken (dashed lines), corresponding to a DFT transport distribution. It is evident that the increase of EES with energy suppresses \(\Gamma\), which becomes smaller at larger energies than in the DFT transport-distribution case. A comparison between Fe and Fe-O is also instructive, already at the level of the DFT transport distributions: the inclusion of oxygen leads to a significant suppression of the transport distribution at \(\omega=2.5\) eV, caused by O-2p hybridization with the 4s iron states. This explains the smaller Lorenz number in the oxygen-rich case.
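The finite-energy window quoted above can be checked directly: the electrical weight \(-df/d\omega\) peaks at \(\omega=0\), while the thermal weight \(\omega^{2}(-df/d\omega)\) peaks near \(|\omega|\approx 2.4\,k_{B}T\) (the solution of \(x\tanh(x/2)=2\)), so scattering that grows with \(|\omega|\) necessarily suppresses \(\kappa\) more than \(\sigma\). A minimal numerical check, in units where \(k_{B}T=1\):

```python
import numpy as np

w = np.linspace(0.0, 8.0, 8001)           # omega in units of k_B*T
df = 1.0 / (4.0 * np.cosh(w / 2.0) ** 2)  # -df/d(omega) for beta = 1
print(w[np.argmax(df)])                   # 0.0  : electrical weight peaks at E_F
print(w[np.argmax(w ** 2 * df)])          # ~2.40: thermal weight peaks at finite energy
```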
Figure 4: Resistivity (top) and thermal conductivity (bottom) for Fe and Fe\({}_{0.91}\)O\({}_{0.09}\) for the ICB and CMB cases, calculated using DMFT (including both electron-electron and electron-phonon scattering) and DFT (electron-phonon only). Previous calculations (extracted from Ref. [21]) for the supercell and the perfect bcc/hcp lattices are shown for the solid case.

Figure 5: (left) Lorenz number for Fe (bullets) and Fe\({}_{0.91}\)O\({}_{0.09}\) (crosses), for the ICB (full line) and CMB (dashed line) cases. We use the parameter \(\alpha\) to artificially change the magnitude of the EES, \(\Sigma\rightarrow\Sigma_{\alpha}=\text{Re}\Sigma+\alpha i\text{Im}\Sigma\). One sees that the Lorenz number is reduced by the presence of oxygen. (right) Transport distribution \(\Gamma(\omega)\). Solid curves are obtained from the full DMFT self-energy, dashed lines from a constant scattering-rate approximation.

## IV Discussion

In summary, our study focused on the impact of oxygen on electronic transport in liquid iron corresponding to the outer Earth's core conditions. Because oxygen diminishes the Fe \(3d\) electronic occupation towards half-filling, it can be expected to strongly enhance electronic correlations. We indeed find that the EES is moderately increased. The numerical values of the conductivities and the Lorenz ratio are given in Table 1. The oxygen substitution at the \(\sim\)10% level, which is a plausible content for the outer core, suppresses the electrical (thermal) conductivities by only about 10% (17%). In both Fe and Fe-O, including electronic correlations diminishes the electrical conductivities by less than 10% and the thermal conductivities by about 20%. This reduction is consistent with similar studies [21; 22], which suggests that it can be used as a rule of thumb when direct calculations are not feasible.

What are the geophysical implications of electronic correlations? The first observation is that their effect is moderate. The drastic reduction of conductivity, which was predicted on the basis of EES enhancement in ordered Fe-O structures [27], is not observed when thermal disorder effects are simultaneously included. But it is also clear from our study that one cannot neglect electronic correlations either, since the reduction of conductivities due to EES is comparable to the reduction of electron-phonon scattering due to light elements. Importantly, some models that assume a higher heat flow from the core find a purely thermally driven geodynamo for \(\kappa\) below a limiting value of order 100 W/mK [44; 45]. EES, although small compared to the thermal disorder, might in the end provide just the necessary additional scattering, next to the compositional disorder, to power the geodynamo sufficiently. At the very least, whenever one argues that compositional disorder is important, EES must not be neglected either.

Another important finding is the universal suppression of the Lorenz number: the thermal conductivities are affected by EES more than the electrical ones [19; 20]. This is also seen as an effect of compositional disorder in the Fe-O case, but EES enhances it further by suppressing the contribution of states away from the Fermi energy. This is important both for properly interpreting the high-pressure measurements, which mostly probe \(\sigma\), and for the geodynamo, because of the distinct influence of the two quantities there.

In future studies, it would be interesting to investigate also alloying with sulfur and silicon [24]. Both elements affect the transport properties strongly at the DFT level because, unlike oxygen, which alloys interstitially [24], they alloy substitutionally and therefore affect the bond disorder more strongly, with perhaps different implications for electronic correlations.
The silicon case was recently investigated [22], and the results for the CMB seem compatible with what we find at comparable concentrations of oxygen, but sulfur, which would also act in an oxidizing way, could potentially have a bigger effect. Whereas the sulfur concentrations are believed to be negligible in the Earth [46], they are expected to be sizable in extraterrestrial planets [47]. Finally, both thermal disorder and compositional disorder are also important for Fe oxides that are relevant for the properties of the lower mantle, where, for example, FeO is predicted [48; 49; 50; 51] to be in a state where electronic correlations are very strong, and the influence of thermal and compositional disorder might lead to large effects there.

###### Acknowledgements.

GGB and JM are supported by the Slovenian Research Agency (ARRS) under Grants no. P1-0044 and J1-2458. JM acknowledges discussions with A. Georges. DA and MP are supported by the U.K. Natural Environment Research Council (NERC) under Grants no. NE/T000228/1 and NE/R000425/1. Computations were performed on the supercomputer Vega at the Institute of Information Science (IZUM) in Maribor, Slovenia, and in the UK on the UK national service Archer2.

## Author contributions

G.G.B. carried out the DFT+DMFT electronic structure and transport calculations. D.A. and M.P. carried out the DFT molecular dynamics and transport calculations. G.G.B., J.M., L.V.P., M.A., and D.A. discussed the results and wrote the paper.

## Competing interests

The authors declare no competing interests.

## Appendix A Matsubara self-energies

Figure 6 depicts the imaginary part of the self-energy as a function of Matsubara frequency at ICB (top) and CMB (bottom) conditions. The site, spin, and orbital average is performed and the corresponding self-energy is indicated by a full line, whereas the individual self-energies smoothly span the full range indicated by shading. One sees the enhancement of EES for the oxygen-rich case in terms of the larger magnitude (i.e., more negative values) of \(\mathrm{Im}\Sigma(i\omega)\).

\begin{table} \begin{tabular}{|l|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{**CMB**} & \multicolumn{2}{c|}{**ICB**} \\ \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{\(T=4400\) K, \(V=8.64\) Å\({}^{3}\)/at} & \multicolumn{2}{c|}{\(T=6350\) K, \(V=7.16\) Å\({}^{3}\)/at} \\ \hline Case & Fe & Fe\({}_{0.91}\)O\({}_{0.09}\) & Fe & Fe\({}_{0.91}\)O\({}_{0.09}\) \\ \hline \(-\)Im\(\langle\Sigma(\omega=0)\rangle\) (eV) & 0.11 & 0.14 & 0.09 & 0.11 \\ \hline \(\sigma_{\rm DFT}\) (\(10^{4}\,\Omega^{-1}\) cm\({}^{-1}\)) & 1.35 & 1.24 & 1.54 & 1.37 \\ \(\sigma_{\rm DMFT}\) (\(10^{4}\,\Omega^{-1}\) cm\({}^{-1}\)) & 1.22 & 1.10 & 1.41 & 1.28 \\ \(1-\sigma_{\rm DMFT}/\sigma_{\rm DFT}\) (\%) & 10 & 11 & 9 & 7 \\ \hline \(\kappa_{\rm DFT}\) (W m\({}^{-1}\) K\({}^{-1}\)) & 145 & 120 & 243 & 190 \\ \(\kappa_{\rm DMFT}\) (W m\({}^{-1}\) K\({}^{-1}\)) & 119 & 99 & 193 & 158 \\ \(1-\kappa_{\rm DMFT}/\kappa_{\rm DFT}\) (\%) & 18 & 17 & 21 & 17 \\ \hline \(L_{\rm DFT}\) (\(10^{-8}\) W\(\Omega\) K\({}^{-2}\)) & 2.43 & 2.20 & 2.48 & 2.18 \\ \(L_{\rm DMFT}\) (\(10^{-8}\) W\(\Omega\) K\({}^{-2}\)) & 2.22 & 2.05 & 2.16 & 1.95 \\ \hline \end{tabular} \end{table} Table 1: Summary of results at CMB and ICB conditions, both for Fe and Fe\({}_{0.91}\)O\({}_{0.09}\).
2302.01030
Vortices in a rotating holographic superfluid with Lifshitz scaling
We have extended our previous work [1] on rotating holographic superfluids to include Lifshitz scaling. The presence of this scaling breaks the relativistic invariance of the boundary superfluid system and indicates the existence of a Lifshitz fixed point [2]. We have analytically shown that we still get the same vortex solutions as those discovered earlier in [1]. We have recovered previous results for the case of z = 1, which restores the relativistic invariance in the holographic superfluid system. However, for z $\neq$ 1 this study indicates surprising results regarding dissipation in such a holographic superfluid. We found that higher winding number vortices increase with higher values of the imaginary chemical potential for values of z in the open interval (1, 2). This result is remarkable because it asserts that dissipation in the rotating holographic superfluid increases in the presence of Lifshitz scaling.
Ankur Srivastav, Sunandan Gangopadhyay
2023-02-02T11:47:58Z
http://arxiv.org/abs/2302.01030v2
# Vortices in a rotating holographic superfluid with Lifshitz scaling

###### Abstract

We have extended our previous work [1] on rotating holographic superfluids to include Lifshitz scaling. The presence of this scaling breaks the relativistic invariance of the boundary superfluid system and indicates the existence of a Lifshitz fixed point [2]. We have analytically shown that we still get the same vortex solutions as those discovered earlier in [1]. We have recovered previous results for the case of \(z=1\), which restores the relativistic invariance in the holographic superfluid system. However, for \(z\neq 1\) this study indicates surprising results regarding dissipation in such a holographic superfluid. We found that higher winding number vortices increase with higher values of the imaginary chemical potential for values of \(z\) in the open interval \((1,\,2)\). This result is remarkable because it asserts that dissipation in the rotating holographic superfluid increases in the presence of Lifshitz scaling.

## I Introduction

Applied gauge/gravity duality has been a subject of interest for the past two decades [3; 4; 5; 6]. It has been tremendously useful in understanding various strongly coupled condensed matter systems where perturbative techniques of standard quantum field theory have almost no access [7]. Apart from condensed matter applications, this duality has provided insights into QCD and cosmology as well [3]. Holographic superconductor [8; 9] and superfluid models [10], which mimic the properties of unconventional superconductors and superfluids, have been explored extensively over the past few years [11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. In particular, the vortex structure and its dynamics in holographic superfluids and superconductors were studied in various phenomenological settings on the gravity side [21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. The existence of vortices is one of the important properties of superfluids under rotation, and a variety of vortices have been observed in experiments [31; 32]. Recently, we also analysed such a rotating holographic superfluid model, in which we built novel vortex solutions and showed that dissipation in this model increases with an increase in the imaginary chemical potential [1].

In condensed matter physics, however, there exists a class of systems which do not have relativistic symmetry and thus show a dynamical scaling (\(z\neq 1\)) near a phase transition [33]. For such non-relativistic theories, a gravity dual geometry was constructed in [2] for \(z=2\) and was subsequently generalised for other values of \(z\) [34; 35]. These gravity geometries are known as Lifshitz geometries, which admit the following scaling symmetry,

\[t\rightarrow\lambda^{z}t,\hskip 28.452756ptx^{i}\rightarrow\lambda x^{i}. \tag{1}\]

Gravity dual models constructed out of this geometry are known as Lifshitz holographic models. Lifshitz holographic models of unconventional superconductors have also been analysed in the past [36; 37]. Our interest in this paper is to generalise the vortices built in [1] to the Lifshitz holographic model of rotating superfluids. It should be noted from eq. (1) that the Lifshitz holographic model reduces to the standard \(AdS\) holographic model for \(z=1\), where relativistic symmetry is restored. In this paper we have considered a disc of radius \(R\) at the spacetime boundary and allowed the superfluid to rotate. As mentioned in [1], this gives an equivalent description of a static superfluid in a rotating disc.
It is known that a rotating holographic superfluid admits a vortex state above a critical value of the rotation. It has been numerically shown in [21] that above this critical rotation, vortices get excited in the rotating holographic superfluid system. Here we have analytically studied such a vortex structure near this critical rotation in a rotating Lifshitz holographic superfluid. In this model also, we have found that the chemical potential needs to be purely imaginary for the condensate to be real. A holographic model of QCD has been explored with an imaginary chemical potential in [38] in the past. The dissipative nature of the imaginary chemical potential in condensed matter systems has also been suggested in a previous study [39].

In this analysis we have found that the vortices remain unaffected by the Lifshitz scaling \(z\). This implies that we again get the same vortex solutions at the boundary disc as the ones obtained in [1], for any value of \(z\). Also, the linear relation between the winding number of vortices and the angular velocity of the rotating superfluid holds irrespective of the value of the dynamical exponent \(z\). However, the Lifshitz scaling does strongly change the dissipative nature of the vortex state in this model. For \(z=1\), the results in this model are in agreement with [1]: an increase in the imaginary chemical potential reduces higher winding number vortices and thereby reduces dissipation in the system. We have obtained a remarkably opposite behaviour in the case of \(z\neq 1\). It turns out that for such a non-relativistic situation, an increase in the imaginary chemical potential increases higher winding number vortices, and hence dissipation in rotating Lifshitz holographic superfluids also increases. In other words, the key finding of this work is that the presence of an imaginary chemical potential supports dissipation through the vortex state for non-relativistic holographic superfluids, whereas for relativistic holographic superfluids it opposes such dissipation. It should be noted that we have considered \(z\) to be in the interval \([1,2)\), that is, our analysis does not capture the behaviour of a rotating Lifshitz holographic superfluid with the dynamical scaling exponent \(z=2\). This is because for \(z=2\) there is a logarithmic divergence in the gauge fields at the spacetime boundary, which needs separate attention. We have left the analysis of this case for future work.

This paper has been organised in the following manner. Section (II) introduces a holographic superfluid model in a \((3+1)\)-dimensional Lifshitz spacetime with a static black hole. Near the critical angular velocity for a rotating container, we build vortex solutions in section (III). In section (IV), the Sturm-Liouville eigenvalue approach is used to analyse this model in the bulk direction. The paper ends with section (V), where we discuss our final observations in this study and comment on the results. There are also Appendices containing some plots of \(\Omega\) vs \(\mu\) for different values of \(z\) lying between 1 and 2.

## II Setting up the holographic superfluid model

We consider the following matter action for a holographic superfluid,

\[\mathcal{S}=\frac{l^{2}}{16\pi Ge^{2}}\int_{\mathcal{M}}d^{4}x\left\{-\frac{1}{4}F^{2}-|D\Psi|^{2}-m^{2}\Psi^{2}\right\} \tag{2}\]

where \(l\) is the radius of curvature of the spacetime geometry, \(e\) is the charge, \(G\) is Newton's constant, \(m\) is the mass of the scalar field and \(F^{2}\equiv F_{\mu\nu}F^{\mu\nu}\).
The Faraday tensor and the covariant derivative are given by \(F_{\mu\nu}=\partial_{[\mu}A_{\nu]}\) and \(D_{\mu}=\nabla_{\mu}-ieA_{\mu}\), respectively. In this paper, we shall be working in the probe limit. In this limit, the matter sector is assumed to be non-back-reacting on the black hole background. This can be achieved mathematically by rescaling the scalar and gauge fields with the charge \(e\) as \(A_{\mu}\rightarrow\frac{A_{\mu}}{e}\) and \(\Psi\rightarrow\frac{\Psi}{e}\), and then taking the limit \(e\rightarrow\infty\). This is equivalent to setting \(e=1\) in this model. We study this holographic superfluid model in a \((3+1)\)-dimensional black hole spacetime with the scaling symmetry given by eq.(1), where \(z\) is known as the dynamical exponent. Such a black hole spacetime is realised by the following metric [33], \[ds^{2}=-\frac{f(u)}{u^{2z}}dt^{2}+\frac{du^{2}}{f(u)u^{2}}+\frac{1}{u^{2}}(dr^{2}+r^{2}d\theta^{2}). \tag{3}\] The blackening factor is given by \(f(u)=(1-u^{z+2})\). We have set \(l\) and \(16\pi G\) to unity for convenience, and the bulk direction has been scaled in such a way that \(u=0\) is the spacetime boundary and \(u=1\) represents the event horizon of the black hole. The boundary coordinates \((r,\theta)\) define a 2-dimensional flat disc. Notice that setting \(z=1\) in the above metric restores the \(AdS_{(3+1)}\) black hole spacetime structure. We now rewrite the metric in Eddington-Finkelstein (EF) coordinates as below, \[ds^{2}=-\frac{f(u)}{u^{2z}}dt^{2}-\frac{2}{u^{z+1}}dtdu+\frac{1}{u^{2}}(dr^{2}+r^{2}d\theta^{2}). \tag{4}\] We have relabelled the EF time as \(t\) for notational simplicity. The equations of motion for the matter and the gauge fields are given by, \[(D^{2}-m^{2})\Psi=0 \tag{5}\] \[\nabla_{\nu}F_{\mu}^{\ \nu}=j_{\mu}:=i\{(D_{\mu}\Psi)^{\dagger}\Psi-\Psi^{\dagger}(D_{\mu}\Psi)\}. \tag{6}\] We assume no explicit time dependence in this model, so that all the fields remain stationary. This assumption is justified because we are interested in the equilibrium analysis of the rotating superfluid system. In addition, we shall be working in the axial gauge, that is, \(A_{u}=0\). With these conditions, eq.(5) reduces to the following equation, \[\{\mathcal{D}(u)+\mathcal{D}(r)+\frac{1}{r^{2}}\mathcal{D}(\theta)\}\Psi(u,r,\theta)=0. \tag{7}\] The derivative operators are given by, \[\mathcal{D}(u)\equiv u^{z+1}\partial_{u}\Big{(}\frac{f(u)}{u^{2}}\partial_{u}\Big{)}+iu^{z+1}\partial_{u}\Big{(}\frac{A_{t}}{u^{2}}\Big{)}+iu^{z-1}A_{t}\partial_{u}-\frac{m^{2}}{u^{2}}\] \[\mathcal{D}(r)\equiv\frac{1}{r}\partial_{r}(r\partial_{r})-\frac{i}{r}\partial_{r}(rA_{r})-iA_{r}\partial_{r}-A_{r}^{2}\] \[\mathcal{D}(\theta)\equiv\partial_{\theta}^{\ 2}-i(\partial_{\theta}A_{\theta}+A_{\theta}\partial_{\theta})-A_{\theta}^{\ 2}\ .\] We would like to point out here that the information about the dynamical exponent \(z\) is solely contained in the derivative operator along the bulk direction \(u\). The other two derivative operators (along the boundary coordinates \(r\) and \(\theta\)) remain the same as in the previous study of the \(AdS\) black hole spacetime model [1].
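As a quick consistency check of the Eddington-Finkelstein form in eq.(4), the following short symbolic computation (a minimal sketch in Python with `sympy`; the shift \(t\to t-g(u)\) with \(g^{\prime}(u)=u^{z-1}/f(u)\) is our assumed coordinate change, since it is not written out explicitly above) verifies that the \(du^{2}\) term of eq.(3) cancels and that the cross term acquires the coefficient \(-2/u^{z+1}\):

```python
import sympy as sp

u, z = sp.symbols('u z', positive=True)
f = 1 - u**(z + 2)          # blackening factor f(u) of eq.(3)
gp = u**(z - 1) / f         # assumed g'(u) for the shift t -> t - g(u)

# Coefficient of du^2 after the shift: -f g'^2/u^(2z) + 1/(f u^2) must vanish
du2 = -f * gp**2 / u**(2 * z) + 1 / (f * u**2)
print(sp.simplify(du2))                         # -> 0

# Coefficient of dt du: -2 f g'/u^(2z) must reduce to -2/u^(z+1), as in eq.(4)
cross = -2 * f * gp / u**(2 * z)
print(sp.simplify(cross + 2 / u**(z + 1)))      # -> 0
```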
## III The holographic vortex

We are interested in the equilibrium state of this rotating holographic system where vortices exist. Since vortices are expected to appear in the holographic superfluid system beyond a critical value of the rotation parameter, we define a deviation parameter, \(\epsilon\), from this critical value of rotation, \(\Omega_{c}\), by the following relation, \[\epsilon\coloneqq\frac{\Omega-\Omega_{c}}{\Omega_{c}} \tag{8}\] where \(\Omega\) is considered to be the constant angular velocity of the disc. It has been argued in [21] that the superfluid and the boundary disc have a relative velocity and, hence, a static superfluid in a rotating boundary disc can be replaced by a rotating superfluid in a static disc. We have pursued the latter scenario. To study this system very near the critical rotation velocity, we expand all the fields and currents in series in the following manner [23], \[\Psi(u,r,\theta)=\sqrt{\epsilon}\Big{(}\Psi_{1}(u,r,\theta)+\epsilon\Psi_{2}(u,r,\theta)+...\Big{)} \tag{9}\] \[A_{\mu}(u,r,\theta)=\Big{(}A^{(0)}_{\mu}(u,r,\theta)+\epsilon A^{(1)}_{\mu}(u,r,\theta)+...\Big{)}\] (10) \[j_{\mu}(u,r,\theta)=\epsilon\Big{(}j^{(0)}_{\mu}(u,r,\theta)+\epsilon j^{(1)}_{\mu}(u,r,\theta)+...\Big{)}. \tag{11}\]

### Lowest order solutions near spacetime boundary

In the axial gauge, the lowest order solutions for the gauge fields that generate the rotation field and the chemical potential are given by the following relations, \[A^{(0)}_{t}(u)=\mu(1-u^{2-z}),\ \ \ \ (z<2) \tag{12}\] \[A^{(0)}_{r}=0,\ \ \ \ A^{(0)}_{\theta}(r)=\Omega r^{2}. \tag{13}\] \(A^{(0)}_{r}=0\) restricts the superfluid flow in the radial direction, and \(A^{(0)}_{\theta}\) introduces rotation into the superfluid. It should be noted that the \(z=2\) case is non-trivial due to a logarithmic divergence of \(A^{(0)}_{t}(u)\) near the spacetime boundary and needs a separate investigation, which is extremely difficult to deal with analytically. Hence, we keep ourselves restricted to values of \(z\) in the interval \([1,\,2)\). Considering these lowest order solutions for the fields near the spacetime boundary, we rewrite eq.(7) at lowest order in \(\epsilon\), \[\{\mathcal{D}^{(0)}(u)+\mathcal{D}^{(0)}(r)+\frac{1}{r^{2}}\mathcal{D}^{(0)}(\theta)\}\Psi_{1}(u,r,\theta)=0 \tag{14}\] such that the derivative operators become, \[\mathcal{D}^{(0)}(u)\equiv u^{z+1}\partial_{u}\Big{(}\frac{f(u)}{u^{2}}\partial_{u}\Big{)}+iu^{z+1}\partial_{u}\Big{(}\frac{A^{(0)}_{t}}{u^{2}}\Big{)}+iu^{z-1}A^{(0)}_{t}\partial_{u}-\frac{m^{2}}{u^{2}}\] \[\mathcal{D}^{(0)}(r)\equiv\frac{1}{r}\partial_{r}(r\partial_{r})\] \[\mathcal{D}^{(0)}(\theta)\equiv\partial_{\theta}^{\ 2}-i(\partial_{\theta}A^{(0)}_{\theta}+A^{(0)}_{\theta}\partial_{\theta})-A^{(0)2}_{\theta}\ .\] Using the method of separation of variables to solve eq.(14) and writing \(\Psi_{1}(u,r,\theta)\) as a product of functions of \(u\) and \((r,\theta)\) as below, \[\Psi_{1}(u,r,\theta)=\Phi(u)\xi(r,\theta) \tag{15}\] eq.(14) provides the following separated equations, \[\mathcal{D}^{(0)}(u)\Phi(u)=\lambda\Phi(u) \tag{16}\] \[\{\mathcal{D}^{(0)}(r)+\frac{1}{r^{2}}\mathcal{D}^{(0)}(\theta)\}\xi(r,\theta)=-\lambda\xi(r,\theta) \tag{17}\] where \(\lambda\) is an unknown separation constant. Eqs.(16, 17) are eigenvalue equations with eigenvalue \(\lambda\). It has been pointed out earlier that the information about the dynamical exponent enters only the equation of motion along the bulk direction, that is, eq.(16). The equation on the boundary disc, that is eq.(17), remains the same as in [1] and hence needs no separate investigation.
### Vortex solution

In this subsection we write down the vortex solutions obtained in [1] and list the important properties associated with these solutions. The vortex solutions are given as, \[\xi(r,\theta)=\eta_{p,n}(r)e^{ip\theta}=a_{0}e^{-\Omega r^{2}/2}F_{p,n}(r)e^{ip\theta} \tag{18}\] where \(p\in\mathcal{Z}\) for single-valuedness of the solution, \(\lambda=2\Omega(n+1)\) and, \[F_{p,n}(r)=r^{p}\big{(}1+\frac{a_{2}}{a_{0}}r^{2}+\frac{a_{4}}{a_{0}}r^{4}+...+\frac{a_{n}}{a_{0}}r^{n}\big{)}\ .\] The coefficients \(a_{i}\) can be determined from the recurrence relation given by eq.(26) of [1]. We make the following observations regarding these vortices. 1. These solutions are rotationally symmetric and are subject to the Neumann boundary conditions given by, \[\partial_{r}\eta_{p}|_{r=0}=0=\partial_{r}\eta_{p}|_{r=R}\] (19) where \(R\) is the radius of the disc boundary. 2. For the case of \(n=0\), these boundary conditions imply the quantisation of the angular velocity via \(\Omega=\frac{p}{R^{2}}\) for all values of \(p>1\). 3. For the case of \(n=2\), the boundary conditions again restrict \(p>1\). However, a linear relation between \(\Omega\) and \(p\) is obtained for large values of \(p\). Figure(1), taken from [1], shows some of these vortex solutions for \(n=0\).

Figure 1: Un-normalized lowest order (\(n=0\)) vortex solutions for different winding numbers. (The value of R is set to be equal to 10.)
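Observations 1 and 2 are easy to verify numerically. The minimal sketch below (Python/NumPy; the normalisation \(a_{0}=1\) is an arbitrary choice of ours) evaluates the \(n=0\) profiles \(\eta_{p}(r)=a_{0}r^{p}e^{-\Omega r^{2}/2}\) and confirms that the Neumann condition of eq.(19) at \(r=R\) holds precisely when \(\Omega=p/R^{2}\):

```python
import numpy as np

R = 10.0                                  # disc radius, as in Figure 1

def eta(r, p, Omega, a0=1.0):
    """Lowest-order (n = 0) vortex profile of eq.(18): a0 r^p exp(-Omega r^2/2)."""
    return a0 * r**p * np.exp(-Omega * r**2 / 2)

def deta(r, p, Omega, h=1e-6):
    """Central-difference radial derivative of the profile."""
    return (eta(r + h, p, Omega) - eta(r - h, p, Omega)) / (2 * h)

for p in (2, 3, 4, 5):
    quantised = p / R**2                  # Omega = p/R^2 (observation 2)
    detuned = 1.2 * quantised             # any other Omega violates eq.(19)
    # first derivative ~ 0, second clearly nonzero
    print(p, deta(R, p, quantised), deta(R, p, detuned))
```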
## IV Sturm-Liouville eigenvalue analysis

We shall now solve eq.(16) using the Sturm-Liouville eigenvalue approach for the eigenvalue \(\lambda=2\Omega\) corresponding to the vortex solutions with \(n=0\). Near the critical chemical potential (\(\mu\sim\mu_{c}\)), we may take the following ansatz for the lowest order gauge fields, \[A^{(0)}_{t}(u)=\mu,\ \ \ A^{(0)}_{r}=0,\ \ \ A^{(0)}_{\theta}(r)=\Omega r^{2}. \tag{20}\] For simplicity, we shall consider \(m^{2}=-2z\) and \(\Delta=z\). With these considerations, we get, \[u^{z+1}\partial_{u}\Big{(}\frac{1-u^{z+2}}{u^{2}}\partial_{u}\Phi(u)\Big{)}+iu^{z+1}\partial_{u}\Big{(}\frac{\mu}{u^{2}}\Phi(u)\Big{)}+iu^{z-1}\mu\partial_{u}\Phi(u)+\frac{2z}{u^{2}}\Phi(u)=2\Omega\Phi(u). \tag{21}\] We further simplify eq.(21) as, \[u^{z-1}(1-u^{z+2})\partial_{u}^{2}\Phi-\Big{(}zu^{2z}+2u^{z-2}-2i\mu u^{z-1}\Big{)}\partial_{u}\Phi-\Big{(}2\Omega-\frac{2z}{u^{2}}+2i\mu u^{z-2}\Big{)}\Phi=0\ . \tag{22}\] We may now write \(\Phi(u)\) near the AdS boundary (\(u\to 0\)) as, \[\Phi(u)\simeq\langle\mathcal{O}\rangle u^{z}\Lambda(u)\] such that \(\Lambda(u)\) is subject to the following boundary conditions, \[\Lambda(0)=1\ \ ;\ \ \partial_{u}\Lambda(0)=0. \tag{23}\] Substituting this form of \(\Phi(u)\) in eq.(22), we get an equation for \(\Lambda(u)\), \[(1-u^{z+2})\Lambda^{\prime\prime}+\big{(}\frac{2(z-1)}{u}-3zu^{z+1}+2i\mu\big{)}\Lambda^{\prime}+\big{(}\frac{z(z-3)}{u^{2}}+\frac{2i\mu(z-1)}{u}+\frac{2z}{u^{z+1}}-\frac{2\Omega}{u^{z-1}}-z^{2}u^{z}\big{)}\Lambda=0. \tag{24}\] Here \({}^{\prime}\) denotes the derivative with respect to \(u\). Eq.(24) implies that \(\mu\) must be purely imaginary for \(\Lambda\) to be real. Hence, we set \(Re(\mu)=0\) and \(Im(\mu)=\mu^{I}\) in the above equation. For notational simplicity we shall still denote \(\mu^{I}\) by \(\mu\) in the following discussion. With this imaginary chemical potential, eq.(24) takes the following form, \[(1-u^{z+2})\Lambda^{\prime\prime}+\big{(}\frac{2(z-1)}{u}-3zu^{z+1}-2\mu\big{)}\Lambda^{\prime}+\big{(}\frac{z(z-3)}{u^{2}}-\frac{2\mu(z-1)}{u}+\frac{2z}{u^{z+1}}-\frac{2\Omega}{u^{z-1}}-z^{2}u^{z}\big{)}\Lambda=0. \tag{25}\] To put eq.(25) in Sturm-Liouville form, we use the integrating factor, \[R(u)=u^{z-1}\exp\Big{(}-2\mu\int\frac{du}{(1-u^{z+2})}\Big{)}. \tag{26}\] We may now cast eq.(25) in the Sturm-Liouville form, \[(P(u)\Lambda^{\prime}(u))^{\prime}+Q(u)\Lambda(u)+\Gamma S(u)\Lambda(u)=0 \tag{27}\] where the eigenvalue \(\Gamma=\Omega\) can be obtained from the following integral, \[\Omega=\frac{\int_{0}^{1}du(P(u)(\Lambda^{\prime}(u))^{2}-Q(u)\Lambda^{2}(u))}{\int_{0}^{1}duS(u)\Lambda^{2}(u)}. \tag{28}\] The Sturm-Liouville coefficient functions \(P(u),Q(u)\) and \(S(u)\) are given as, \[P(u)=u^{z-1}(1-u^{z+2})R(u)\] \[Q(u)=u^{z-1}\big{(}\frac{z(z-3)}{u^{2}}-\frac{2\mu(z-1)}{u}+\frac{2z}{u^{z+1}}-z^{2}u^{z}\big{)}R(u)\] \[S(u)=-2R(u). \tag{29}\] Note that the integral in eq.(26) can be performed exactly to obtain, \[R(u)=u^{z-1}\exp\big{(}-2\mu\ u\ _{2}F_{1}(1,\frac{1}{z+2};\frac{z+3}{z+2};u^{z+2})\big{)}. \tag{30}\] \({}_{2}F_{1}(a,b;c;x)\) is the hypergeometric function given by, \[{}_{2}F_{1}(a,b;c;x)=\sum_{n=0}^{\infty}\frac{(a)_{n}(b)_{n}}{(c)_{n}}\frac{x^{n}}{n!} \tag{31}\] where \((m)_{n}\equiv m(m+1)...(m+n-1)\). We now consider a trial function for \(\Lambda(u)\) of the form, \[\Lambda_{\alpha}(u)=(1-\alpha u^{2})\] which satisfies the given boundary conditions, that is, \(\Lambda(0)=1,\ \partial_{u}\Lambda(0)=0\). With this trial function, we have to extremize \(\Omega_{\alpha}\) with respect to \(\alpha\), \[\Omega_{\alpha}=\frac{\int_{0}^{1}du(P(u)(\Lambda^{\prime}_{\alpha}(u))^{2}-Q(u)\Lambda^{2}_{\alpha}(u))}{\int_{0}^{1}duS(u)\Lambda^{2}_{\alpha}(u)}. \tag{32}\]

### Analysis for z=1

Let us first consider the case of dynamical exponent \(z=1\). In this case, eq.(3) shows that the bulk spacetime becomes the \(AdS_{(3+1)}\) black hole spacetime, which is exactly the one analysed in [1]. Near the \(AdS\) boundary (\(u\to 0\)) we know that \(\Phi(u)=\langle\mathcal{O}\rangle u\Lambda(u)\), consistent with the above form for \(z=1\). So we get the following Sturm-Liouville problem to solve, \[(P(u)\Lambda^{\prime}(u))^{\prime}+Q(u)\Lambda(u)+\Gamma S(u)\Lambda(u)=0 \tag{33}\] where the eigenvalue \(\Gamma=\Omega\) and, \[P(u)=(1-u^{3})R(u)\] \[Q(u)=-uR(u)\] \[S(u)=-2R(u). \tag{34}\] The integrating factor in this case is given as, \[R(u)=\exp\big{(}-2\mu\ u\ _{2}F_{1}(1,\frac{1}{3};\frac{4}{3};u^{3})\big{)}. \tag{35}\] We solve this problem by considering the trial function discussed above. Also, near the \(AdS\) boundary \(u\to 0\), we further approximate the integrating factor \(R(u)\) in the following manner, \[R(u)\simeq(1-2\mu\ u\ _{2}F_{1}(1,\frac{1}{3};\frac{4}{3};u^{3})). \tag{36}\] Figure(2) shows the variation of the extremised \(\Omega\equiv\Omega_{\alpha=\alpha^{*}}\) with increasing values of the imaginary chemical potential, \(\mu\). Note that the three colour plots in the figure represent three different orders up to which we have approximated the hypergeometric function in eq.(36) for the calculations. The orange plot is obtained with the lowest order approximation, while the blue and green plots result from the next consecutive orders of approximation. We shall follow this colour scheme throughout this paper.
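The extremisation of eq.(32) is straightforward to reproduce numerically. The sketch below (Python/SciPy) is our own quadrature-based evaluation, not the code used for Figure 2; it uses eqs.(34) and (36) for \(z=1\), and we simply quote the stationary value of the quotient as computed, since its overall sign depends on the convention adopted for \(S(u)\):

```python
import numpy as np
from scipy.special import hyp2f1
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def R_int(u, mu):
    """Approximated integrating factor of eq.(36) for z = 1."""
    return 1.0 - 2.0 * mu * u * hyp2f1(1, 1/3, 4/3, u**3)

def Omega_alpha(alpha, mu):
    """Rayleigh quotient of eq.(32) with trial function Lambda = 1 - alpha u^2."""
    lam  = lambda u: 1.0 - alpha * u**2
    dlam = lambda u: -2.0 * alpha * u
    P = lambda u: (1.0 - u**3) * R_int(u, mu)      # eq.(34)
    Q = lambda u: -u * R_int(u, mu)
    S = lambda u: -2.0 * R_int(u, mu)
    num, _ = quad(lambda u: P(u) * dlam(u)**2 - Q(u) * lam(u)**2, 0.0, 1.0)
    den, _ = quad(lambda u: S(u) * lam(u)**2, 0.0, 1.0)
    return num / den

for mu in (0.0, 0.2, 0.4):
    # With S(u) < 0 the ground state corresponds to a maximum of the quotient
    res = minimize_scalar(lambda a: -Omega_alpha(a, mu),
                          bounds=(0.0, 1.0), method='bounded')
    print(f"mu = {mu:.1f}: alpha* = {res.x:.3f}, "
          f"Omega = {Omega_alpha(res.x, mu):.4f}")
```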
A few observations are in order regarding this graph, which we enumerate below. 1. The graph between \(\Omega\) and \(\mu\) shows the same decreasing pattern as in [1]. 2. Viewed in conjunction with \(\Omega=\frac{p}{R^{2}}\), this analysis shows that for the \(AdS_{(3+1)}\) holographic superfluid model analysed near equilibrium, the presence of \(\mu\) opposes the formation of higher-winding-number vortices. 3. It is well known in gauge/gravity duality that vortices in a holographic superfluid provide a mechanism for external perturbations to decay through the black hole horizon and hence represent dissipation in such gravity dual systems; this graph therefore suggests that for the \(z=1\) case, as \(\Omega\) decreases with increasing \(\mu\), \(\mu\) supports less dissipation in the system.

### Analysis for z \(\neq\)1

In this case, the Sturm-Liouville form of the equation is given by eq.(27) and the value of \(z\) lies in the interval \((1,2)\). Notice that \(z\neq 2\) because of the logarithmic divergence of the fields at the boundary \(u\to 0\). With the assumed trial function \(\Lambda_{\alpha}(u)=(1-\alpha u^{2})\), we need to extremise the following eigenvalue integral, \[\Omega_{\alpha}=\frac{\int_{0}^{1}du(P(u)(\Lambda_{\alpha}^{\prime}(u))^{2}-Q(u)\Lambda_{\alpha}^{2}(u))}{\int_{0}^{1}duS(u)\Lambda_{\alpha}^{2}(u)} \tag{37}\] where \[P(u) = u^{z-1}(1-u^{z+2})R(u)\] \[Q(u) = u^{z-1}\big{(}\frac{z(z-3)}{u^{2}}-\frac{2\mu(z-1)}{u}+\frac{2z}{u^{z+1}}-z^{2}u^{z}\big{)}R(u)\] \[S(u) = -2R(u). \tag{38}\] Here we shall use the following approximated form of eq.(30), \[R(u)=u^{z-1}(1-2\mu\ u\ _{2}F_{1}(1,\frac{1}{z+2};\frac{z+3}{z+2};u^{z+2})). \tag{39}\] In Figure(3), we show the variation of the extremised values of \(\Omega\) against the imaginary chemical potential \(\mu\) for \(z=\frac{3}{2}\), considering three orders of approximation of \({}_{2}F_{1}(1,\frac{1}{z+2};\frac{z+3}{z+2};u^{z+2})\). The colour codes remain the same as before. Let us summarise the key observations from this graph. 1. For \(z=\frac{3}{2}\), \(\Omega\) shows an increasing pattern with \(\mu\), unlike in the previous case of \(z=1\). 2. This behaviour implies that for a holographic superfluid with Lifshitz geometry and dynamical exponent \(z=\frac{3}{2}\), higher-winding-number solutions become more favourable with increasing values of \(\mu\). 3. In terms of dissipation in such a rotating holographic superfluid, we conclude from this result that higher values of \(\mu\) introduce more dissipation in the presence of Lifshitz fixed points (in gauge/gravity duality, a Lifshitz geometry of the bulk theory is dual to a boundary theory with a Lifshitz fixed point). 4. The same increasing trend of \(\Omega\) with \(\mu\) is obtained for other values of \(z\) in the interval \((1,2)\). The cases \(z=\{1.1,\frac{5}{4},\frac{7}{4}\}\) are given in _Appendix I_.

## V Conclusion and remarks

In this work we have studied the properties of Lifshitz scaling in the holographic superfluid model under rotation. We have explicitly shown that for \(z=1\) our results match those of [1]. Although the vortex structure at the boundary disc remains the same for all values of \(z\), the Lifshitz holographic system differs significantly from the holographic superfluid model in the AdS black hole spacetime. In fact, for \(1<z<2\) we get a remarkably different trend between \(\Omega\) and \(\mu\). Our analysis shows that the presence of Lifshitz scaling in holographic superfluids does allow vortex formation under rotation. However, high values of the chemical potential \(\mu\) support the formation of higher-winding-number vortices.
This implies that for holographic superfluids with Lifshitz scaling, \(\mu\) increases dissipation in the system, unlike in the case of the AdS black hole model, where it has been shown in [1] that \(\mu\) suppresses dissipation by disfavouring the formation of high-winding-number vortices. Because the presence of Lifshitz scaling breaks relativistic invariance, a gravity model built using Lifshitz geometry is dual to a non-relativistic boundary superfluid system. In this context we may conclude from this analysis that \(\mu\) favours the dissipative vortex state for non-relativistic superfluids with Lifshitz scaling symmetry, whereas for relativistic boundary superfluid systems the presence of an imaginary chemical potential opposes the dissipation, as \(\Omega\) decreases with an increase in \(\mu\) [1]. We have also checked the robustness of our results for a different choice of trial function. The results obtained in that analysis are given in _Appendix II_ and show that the qualitative difference between the relativistic and non-relativistic cases remains the same.

**Acknowledgements**: AS would like to acknowledge the Department of Science and Technology, Government of India, for the research fellowship. The authors would also like to thank the anonymous referee for sharing some valuable remarks.
2307.15843
Design of Cost-Effective Nanoacoustic Devices based on Mesoporous Thin Films
Gigahertz acoustic resonators have the potential to advance data processing and quantum communication. However, they are expensive and lack responsiveness to external stimuli, limiting their use in sensing applications. In contrast, low-cost nanoscale mesoporous materials, known for their high surface-to-volume ratio, have shown promise in various applications. We recently demonstrated that mesoporous silicon dioxide (SiO2) and titanium dioxide (TiO2) thin layers can support coherent acoustic modes in the 5 to 100 GHz range. In this study, we propose a new method for designing tunable acoustic resonators using mesoporous thin films on acoustic distributed Bragg reflectors. By infiltrating the pores with different chemicals, the material's properties could be altered and achieve tunability in the acoustic resonances. We present four device designs and use simulations to predict resonators with Q-factors up to 1000. We also observe that the resonant frequency and intensity show a linear response to relative humidity, with a tunability of up to 60 %. Our platform offers a unique opportunity to design cost-effective nanoacoustic sensing and reconfigurable optoacoustic nanodevices.
Edson R. Cardozo de Oliveira, Priscila Vensaus, Galo J. A. A. Soler-Illia, Norberto Daniel Lanzillotti-Kimura
2023-07-29T00:15:27Z
http://arxiv.org/abs/2307.15843v1
# Design of Cost-Effective Nanoacoustic Devices based on Mesoporous Thin Films ###### Abstract Gigahertz acoustic resonators have the potential to advance data processing and quantum communication. However, they are expensive and lack responsiveness to external stimuli, limiting their use in sensing applications. In contrast, low-cost nanoscale mesoporous materials, known for their high surface-to-volume ratio, have shown promise in various applications. We recently demonstrated that mesoporous silicon dioxide (SiO\({}_{2}\)) and titanium dioxide (TiO\({}_{2}\)) thin layers can support coherent acoustic modes in the 5 to 100 GHz range. In this study, we propose a new method for designing tunable acoustic resonators using mesoporous thin films on acoustic distributed Bragg reflectors. By infiltrating the pores with different chemicals, the material's properties could be altered and achieve tunability in the acoustic resonances. We present four device designs and use simulations to predict resonators with Q-factors up to 1000. We also observe that the resonant frequency and intensity show a linear response to relative humidity, with a tunability of up to 60%. Our platform offers a unique opportunity to design cost-effective nanoacoustic sensing and reconfigurable optoacoustic nanodevices. ## 1 Introduction The potential of nanophononics - the engineering and manipulation of acoustic phonons at the nanoscale -, [1, 2, 3, 4, 5, 6, 7] has been established within various technological applications, including heat management, data processing, and quantum technologies. The field explores how the interaction between GHz acoustic phonons and other excitations, dependent on the position of the atoms in a lattice, can be leveraged to engineer novel and more efficient devices in phonon-based optoelectronics, [8, 9, 10, 11, 12] photonics, [13, 14, 15, 16, 17, 18] plasmonics, [19, 20, 21, 22], polaritonics, [23, 24, 25] and other related areas. To precisely tailor the phononic response and produce high-quality devices, expensive techniques such as molecular beam epitaxy and electron beam lithography are typically employed. [26, 21, 5, 22] However, there is a growing demand for cost-effective and reliable fabrication processes in engineering artificial materials for high-frequency acoustic applications. One promising solution is using nanoscale mesoporous materials, which possess a high surface-to-volume ratio and tailorable mesopores. [27, 28, 29, 30] These materials allow for chemical functionalization, [31, 32] have versatile optical applications, [33, 34] and offer a phononic response within the GHz range. [35, 36, 37, 38, 39] In recent works, we reported the successful development of mesoporous silicon dioxide (SiO\({}_{2}\)) and titanium dioxide (TiO\({}_{2}\)) thin layers capable of supporting coherent acoustic modes spanning from 5 to 100 GHz, with Q-factors ranging from 5 to 17. [36, 37] Building upon this progress, we theoretically investigate a platform for engineering complex mesoporous systems specifically designed to support targeted high-frequency acoustic phonon modes, thus optimizing their performance for sensing applications. Our proposed structures consist of a mesoporous surface cavity placed atop an acoustic distributed Bragg reflector (DBR), enabling the confinement of phonons at 100 GHz. 
By incorporating environmental molecules into the pores and manipulating the elastic properties of the metamaterial, the mesoporous layer could potentially serve as the active element for sensing applications. [35] Our simulation results demonstrate the presence of robust coherent phonon signals at the desired frequency, offering the potential for achieving high Q-factors. This significant advancement paves the way for a promising platform in nanoacoustic sensing and reconfigurable optoacoustic nanodevices, all made possible through the utilization of soft and cost-effective fabrication methods.

## 2 Methodology

In a pump-probe coherent phonon generation experiment, a standard technique in nanophononics, a strong laser pulse, namely the pump, interacts with the structure, triggering the phononic dynamics in the system. The modulation of the material refractive index due to these vibrations, i.e., the photoelastic interaction, takes place. Finally, a delayed probe reaching the structure measures transient reflectivity changes due to the presence of coherent acoustic phonons. [40; 41] In this work, we simulate the generation and detection spectra of coherent acoustic phonons in such an experiment and test different sample designs. This is done by implementing the photoelastic model using the standard transfer matrix method for acoustic and optical fields. [13; 42] The theoretical approach employed here allows us to investigate multilayered structures with contrasting acoustic and optical impedances and to test different parametrizations. [13] The material parameters used in the simulations are presented in Table 1. We assume a white phonon spectrum propagating from the substrate towards the surface and compute the normalized stationary solutions for each phonon frequency. This allows us to calculate the acoustic phonon transmission spectrum and determine the frequency-dependent acoustic field distribution along the structure. Analogously, the incident laser electric field profile can be calculated by considering Maxwell's equations for electromagnetic waves. [43] We assume standard boundary conditions and a monochromatic laser to simplify the formulation. Finally, we determine the acoustic phonon generation spectrum (\(G(\omega)\)) with an overlap integral of the incident pump's electric field (\(E(\nu,z)\)), the strain field (\(\eta(\omega,z)\)), i.e., the derivative of the acoustic displacement, and the material-dependent transduction constant (\(k(z)\)), as follows: \[G(\omega)=\int k(z)\eta(\omega,z)|E(\nu,z)|^{2}dz, \tag{1}\] where \(\omega\) and \(\nu\) correspond to the acoustic and optical frequencies, respectively. The coherent phonon detection spectrum, which can be related to the modulation of the probe reflectivity (\(\Delta R(\omega)\)), is then calculated by considering the photoelastic interaction in the material. This is done by calculating an overlap integral similar to equation 1, but taking into account the generation spectrum and the probe's complex electric field, in the form: \[\Delta R(\omega)=\int G(\omega)p(z)\eta(\omega,z)E(\nu,z)^{2}dz \tag{2}\] Additionally, we can determine the surface displacement by multiplying the phonon generation spectrum with the transmission spectrum: \[d(\omega)=G(\omega)T(\omega) \tag{3}\] Experimentally, the surface displacement can be measured by implementing an interferometer in the pump-probe setup, which involves sensitive alignment procedures.
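To make the acoustic part of this transfer-matrix scheme concrete, the following minimal sketch (our own simplified, normal-incidence, lossless implementation in Python, not the authors' code; the glass half-spaces on both sides are an illustrative assumption) computes the energy transmission of longitudinal sound through a quarter-wave TiO\({}_{2}\)/SiO\({}_{2}\) stack of the kind used in the designs below, exhibiting the stop band around the 100 GHz design frequency:

```python
import numpy as np

f0 = 100e9                        # design frequency (Hz)
sio2 = (5750.0, 2200.0)           # (speed of sound m/s, density kg/m^3), Table 1
tio2 = (6700.0, 2900.0)
glass = (5750.0, 2200.0)

def layer_matrix(f, v, rho, d):
    """Acoustic transfer matrix of one layer for the (pressure, velocity) vector."""
    k, Z = 2 * np.pi * f / v, rho * v
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission(f, layers, Za, Zb):
    """Energy transmission from half-space Za through the stack into half-space Zb."""
    M = np.eye(2, dtype=complex)
    for v, rho, d in layers:
        M = M @ layer_matrix(f, v, rho, d)
    t = 2.0 / (M[0, 0] + M[0, 1] / Zb + Za * M[1, 0] + Za * M[1, 1] / Zb)
    return abs(t) ** 2 * Za / Zb

# Three-period TiO2/SiO2 DBR with quarter-wave layers, d = v/(4 f0)
stack = [(tio2[0], tio2[1], tio2[0] / (4 * f0)),
         (sio2[0], sio2[1], sio2[0] / (4 * f0))] * 3
Zg = glass[0] * glass[1]

for f in (50e9, 75e9, 100e9, 125e9, 150e9):
    print(f"{f/1e9:5.0f} GHz   T = {transmission(f, stack, Zg, Zg):.3f}")
```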
[44] While it is easier to probe changes in reflectivity, the analysis of surface displacement becomes necessary depending on the specific sample under investigation. [36; 37] It is worth noting that our analysis assumed that the optical absorption and coherent phonon generation are limited to the nickel transducer layer.

\begin{table}
\begin{tabular}{l l l l} \hline Material & Index of refraction & Speed of sound (m/s) & Density (g/cm\({}^{3}\)) \\ \hline dSiO\({}_{2}\) & 1.538 & 5750 & 2.2 \\ dTiO\({}_{2}\) & 2.56 & 6700 & 2.9 \\ mSiO\({}_{2}^{*}\) (\(p\) = 45\%) & 1.323 & \(\sim\) & \(\sim\) \\ Nickel & 2.218+4.893\(i\) & 5580 & 8.908 \\ Air & 1.0003 & 343 & 1.28x10\({}^{-3}\) \\ Water & 1.33 & 1480 & 0.997 \\ Glass & 1.538 & 5750 & 2.2 \\ \hline \end{tabular}

* Mesoporous SiO\({}_{2}\). Refer to section 5 for details on the speed of sound and density.
\end{table} Table 1: Optical and elastic properties of the studied materials for the numerical simulation.

## 3 Design of mesoporous surface acoustic resonators

Mesoporous materials are known for their ability to adsorb chemical compounds into the mesopores. [45, 46] This leads to effective functionalization of the materials, which can be sensitive to different external stimuli. In the context of nanoacoustics, the chemical infiltration modifies the overall elastic properties of the material, such as the mass density (\(\rho\)) and the speed of sound (\(v\)). The simplest acoustic resonator can be engineered by embedding a mesoporous thin film (MTF) between two layers of contrasting acoustic impedance (\(Z\)). Depending on the relative impedances of the layers, the resonator will support frequency modes according to either \[f_{n}=\frac{nv}{2d} \tag{4a}\] or \[f_{n}=\frac{(2n-1)v}{4d}, \tag{4b}\] where \(d\) and \(n=1,2,3...\) are the mesoporous thickness and the mode order, respectively. Here, the acoustic resonances in the mesoporous materials can be determined with eq. 4a. The acoustic impedance mismatch between the layers, given by the contrast in mass density and speed of sound of the involved materials, can account for enhanced transmission or reflection of the sound waves at the interfaces. Therefore, chemical infiltration into the mesopores can significantly change the resonance frequency (by changing \(v\)) and the Q-factor of the confined modes (by changing \(v\) and \(\rho\)). A simple approach could be to submit the device to different relative humidity (RH) conditions, which could be extended to the sensing of different gases. In fact, phonon confinement in mesoporous thin films based on SiO\({}_{2}\) and TiO\({}_{2}\) has recently been reported in pump-probe experiments. [36, 37] In such works the mesoporous layer was embedded between a glass substrate and a nickel thin film that acts as the acousto-optical transducer. Two major constraints arise from this design: the mesoporous thin film is covered by other layers, which prevents efficient adsorption into the mesopores; and the leakage of the acoustic waves towards the substrate reduces the Q-factor of the resonator. To overcome these limitations, we propose a device design composed of the MTF as the outermost layer in the structure, followed by the nickel transducer and an acoustic distributed Bragg reflector (DBR) based on dense SiO\({}_{2}\)/TiO\({}_{2}\) on top of a glass substrate.
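For reference, the layer thicknesses implied by eq. 4a at the 100 GHz design frequency follow directly from the Table 1 sound velocities (a short sketch; the mesoporous velocity used for the half-wave estimate is a placeholder of ours, since Table 1 deliberately leaves those values unspecified):

```python
f0 = 100e9                                   # design frequency (Hz)
v = {"dTiO2": 6700.0, "dSiO2": 5750.0, "Nickel": 5580.0}   # m/s, from Table 1

for name, vs in v.items():
    print(f"{name}: quarter-wave thickness = {vs / (4 * f0) * 1e9:.2f} nm")

# Half-wave mesoporous cavity layer (eq. 4a with n = 1): d = v/(2 f0).
# v_meso = 4000 m/s is a placeholder; Table 1 leaves the mesoporous values open.
v_meso = 4000.0
print(f"mesoporous: half-wave thickness = {v_meso / (2 * f0) * 1e9:.2f} nm")
```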
Since mesoporous SiO\({}_{2}\) is a transparent material in the studied optical range (near infrared), the phonon generation takes place in the nickel layer upon light absorption, mainly via photoinduced thermoelasticity. [41] The acoustic DBR designed at a frequency \(f\) has an acoustic stop-band that reflects the frequency components within this band, as shown in Figure 1, for \(f=100\) GHz. On the other hand, GHz acoustic phonons are entirely reflected at the mesoporous/air interface, as they do not propagate into the air. With that, by properly engineering the thicknesses of the layers, it is possible to tune the DBR stop-band and develop a surface acoustic cavity confining phonons at the mesoporous layer.

Figure 1: (a) Response of the acoustic DBR formed by dense SiO\({}_{2}\)/TiO\({}_{2}\) layers, showing a stop band centered at 100 GHz. (b) The acoustic displacement pattern at 100 GHz (red) and the acoustic impedance (black) along the structure.

The acoustic resonators (ARs) proposed here are formed by four different elements. We engineer and numerically investigate four distinct variations of ARs, shown in Figs. 2(a-d). These variations involve different combinations of layers. Our objective is to identify the optimal configuration that exhibits the strongest correlation between the acoustic resonance and the infiltration of liquids into the mesopores. The devices are formed by a glass substrate, followed by a 3-period DBR of TiO\({}_{2}\)/SiO\({}_{2}\) (\(\lambda/4\),\(\lambda/4\)) designed to operate at 100 GHz, where \(\lambda\) corresponds to the acoustic wavelength at the design frequency. The structures are named AR1, 2, 3 and 4. ARs 1-3 have an extra \(\lambda/4\) TiO\({}_{2}\) layer before the \(\lambda/4\) Ni transducer. After the nickel, the four structures have a combination of zero, one, or two \(\lambda/4\) dense layers before finalizing with the mesoporous layer. With these configurations, the mesoporous thickness leading to phonon confinement at 100 GHz is \(\lambda/2\) for AR1, AR3, and AR4, and \(\lambda/4\) for AR2. Figure 2(e) displays the simulated phononic spectra of all the structures, considering the modulation of the refractive index of the nickel layer, with a pronounced acoustic mode at 100 GHz. This result supports the use of a reflectometric experimental setup for detecting the acoustic resonances in the mesoporous layer. The fact that the acoustic resonance detection is performed by the Ni layer indicates that its proximity to the mesoporous layer is not a critical requirement. The peaks at 75 GHz and 125 GHz indicate a stop-band spanning over \(\sim\)50 GHz. The acoustic resonance in AR2 has the weakest response. To investigate the region in which the phonons are confined in the structures, the displacement pattern at 100 GHz is plotted, overlapped with the device profiles (blue curves of Figs. 2(a-d)); the maximum displacement indicates the region of phonon confinement. In this analysis, AR1 is revealed as the only device in which the phonon confinement is not at the mesoporous layer; however, it has the strongest displacement among all. AR3 presents maximum amplitude in two different regions of the structure, which indicates the formation of both a surface and a cavity mode. Finally, AR2 and AR4 have a maximum displacement at the mesoporous layer. Usually, the fabrication of multilayered structures is susceptible to thickness fluctuations.
Figure 2: (a-d) Normalized displacement pattern (blue) and acoustic impedance (red) of the surface cavity structures 1-4. The layers of mesoporous, nickel, SiO\({}_{2}\) and TiO\({}_{2}\) are represented by the colors orange, yellow, blue, and green, respectively. (e) Reflectivity spectra for the structures presented in panels (a-d). The inset represents an enlargement of the peak at 100 GHz.

When engineering acoustic resonators, the precise definition of the layer thicknesses is essential, and thickness variations might compromise the phonon confinement. Notwithstanding, we study the photoelastic interaction as a function of the mesoporous layer thickness, in terms of the phonon wavelength \(\lambda\), ranging from \(\lambda/4\) to \(\lambda\); the results are presented in Figure 3. The mesoporous thicknesses for which the resonators exhibit the mode frequency at 100 GHz are \(\lambda/2\) for AR1, AR3 and AR4, and \(\lambda/4\) for AR2, in agreement with the designed structures. Furthermore, we can identify how sensitive the acoustic resonances are to the mesoporous thickness. By changing the layer thickness from \(\lambda/4\) to \(3\lambda/4\), the acoustic resonances of AR2 and AR4 span over \(\sim\) 20 and \(\sim\) 30 GHz, respectively. Conversely, AR1 and AR3 have minimal dependence on the mesoporous thickness, as the phonon confinement region is located between the Ni and the DBR. It is worth noting that the modes within the stop band persist, even if the resonance deviates from the specified 100 GHz.

Figure 3: Photoelastic interaction as a function of the mesoporous layer thickness and frequency for AR1, AR2, AR3, and AR4. Horizontal dashed lines represent the key layer thicknesses in steps of a quarter of the phonon wavelength. Vertical dashed lines are guides to the design frequency of 100 GHz.

## 4 Optimization of the devices

To determine the optimal responsivity of the proposed devices, we simulated the photoelastic interaction of structures with different numbers of dense SiO\({}_{2}\)/TiO\({}_{2}\) bilayers composing the DBR, ranging from 2 to 10 periods, for the 4 structures. Afterward, the mode at \(f=100\) GHz in the full pump-probe simulation is fitted with a Lorentzian function, and both the integrated intensity and the full width at half maximum (FWHM) are extracted. The results are shown in Figure 4. The integrated intensity (Fig. 4(a)) is strongly dependent on the number of DBR periods, for all the ARs, changing up to five orders of magnitude over the calculated range. Maximum intensity is observed at 6 periods for AR1 and 5 periods for AR2 and AR3, and AR4 displays the maximum value at 2 periods. The quality factor (Q-factor), shown in Fig. 4(b), is calculated by the ratio between the mode frequency (100 GHz) and the extracted FWHM. All the resonators present an increase of the Q-factor as the number of DBR periods increases, as expected, showing a tendency to stabilization at higher periods. At 10 periods, the highest Q-factor, of \(\sim\)1000, is for AR1, followed by AR3 (\(\sim\)550), AR2 (\(\sim\)180) and AR4 (\(\sim\)150).

Figure 4: (a) Integrated intensity and (b) quality factor of the acoustic mode at 100 GHz as a function of the number of DBR periods for the four proposed resonators.

## 5 Performance of the devices

Finally, we investigate the performance of the acoustic resonators under liquid infiltration into the nanopores. It is still unclear how acoustic resonators in the GHz range respond to the density and speed of sound changes in
the MTF under liquid infiltration. [36, 37] High-frequency coherent acoustic phonons are unable to penetrate into liquids or gases; therefore, they remain entirely on the solid matrix without penetrating into the nanopores, and no response would be perceived. However, recent studies indicate that in nanoscale mesoporous materials, liquids, such as water, condensate at the boundaries of the matrix and behave as solid elements. [47, 48]. This allows high-frequency phonon penetration, leading to a change in the effective density, as well as in the speed of sound. The pore-load modulus, a quantity associated with the pressure inside the pores under liquid infiltration, has also been studied in silicon mesoporous membranes as a function of the material porosity. [49] Elucidating such effects is out of the scope of this work. Hereupon, in order to verify the effect of density and speed of sound change in the material, we consider weighted averages between the solid matrix, porosity, and humidity, while keeping the other parameters (such as mesoporous layer thickness) constant, according to the equation \[X=X_{\text{solid}}\left(1-p\right)+\left[X_{\text{air}}\left(1-h\right)+X_{ \text{water}}h\right]p, \tag{5}\] where \(X\) represents either the density or the speed of sound, whereas \(p\) and \(h\) correspond to the porosity and humidity ratios. We then perform the simulation for all the acoustic resonators with a constant porosity of 40 %, and variable humidity between 0 and 100 %, in which the reference 100 GHz mode is calculated at 0 % humidity. By fitting a Lorentzian function on the acoustic resonance, we extract and analyze the frequency position, FHWM, and normalized intensity variation. The results for each of the peak properties are respectively depicted in Figs. 5(a), (b), and (c). The water adsorption on the mesoporous layer influences the three peak parameters. All the resonators undergo a blueshift in frequency at higher humidity (Figs. 5(a)), which is related to the increase in the speed of sound. As the sound speed in the water is higher, the weighted average increases and the resonator confines higher frequencies at the same region. It is worth noticing that AR1 is the least sensitive, with a frequency shift \(\Delta f\sim 0.5\) GHz from 0 to 100 % RH, whereas AR4 is the most sensitive (\(\Delta f\sim 3.5\) GHz). This can be justified by the fact that AR1 is the only resonator in which the confined mode is not at the mesoporous layer, whereas AR4 displays clear confinement at this layer, as seen in Figs. 2(a) and (d). The highest sensitivity for AR4 and lowest for AR1 is also true for the analysis of FWHM and integrated intensity. The linewidth of the resonance broadens for the resonators AR1, AR3, and AR4, whereas it narrows for AR2, as shown in Fig. 5(b). Liquid infiltration into the mesopores also leads to a decrease in the integrated intensity of the peak at 100 GHz for all the resonators (Fig. 5(c)), which reach values of \(\sim 0.68\), \(\sim 0.77\), \(\sim 0.45\), and \(\sim 0.40\) compared to the maximum intensity at 0 % RH, for each of the resonators AR1, AR2, AR3, AR4, respectively. This reduction in intensity comes from the fact that the mesoporous acoustic impedance increases under liquid infiltration, leading to a less efficient coupling of the acoustic waves to the transducer. The three peak parameters of the four resonators show a nearly linear dependence on water adsorption, which is an essential aspect of the calibration and scale of sensors. 
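To make the weighted-average picture of eq. 5 concrete, the short sketch below (our illustrative numbers: the dense-SiO\({}_{2}\) values of Table 1 for the solid matrix and \(p=0.4\) as used in the simulations) evaluates the effective sound velocity and density versus humidity, together with the fractional blueshift of a thickness-fixed resonance, \(f(h)/f(0)=v(h)/v(0)\). The full resonators shift less than this single-layer estimate, because only part of the confined mode resides in the mesoporous film:

```python
def effective(x_solid, x_air, x_water, p, h):
    """Weighted average of eq. 5: porosity p, humidity fraction h."""
    return x_solid * (1 - p) + (x_air * (1 - h) + x_water * h) * p

p = 0.40                                                  # porosity used here
vel = {"solid": 5750.0, "air": 343.0, "water": 1480.0}    # m/s (Table 1)
rho = {"solid": 2200.0, "air": 1.28, "water": 997.0}      # kg/m^3 (Table 1)

v0 = effective(vel["solid"], vel["air"], vel["water"], p, 0.0)
for h in (0.0, 0.25, 0.5, 0.75, 1.0):
    v = effective(vel["solid"], vel["air"], vel["water"], p, h)
    r = effective(rho["solid"], rho["air"], rho["water"], p, h)
    print(f"h = {h:.2f}: v = {v:6.1f} m/s, "
          f"rho = {r:6.1f} kg/m^3, f/f0 = {v / v0:.4f}")
```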
The design of AR4 is considered to provide the best sensitivity. Despite the observed differences, the four structures proposed here are potential candidates for sensing applications. Further experimental efforts should explore this potential in detail. Additionally, the scope of the study could be expanded to include other systems, such as the detection of different gases with distinct densities, which would likely yield varied responses.

Figure 5: (a) Frequency shift, (b) FWHM, and (c) normalized intensity of the mode at 100 GHz as a function of water adsorption for the four proposed structures.

## 6 Conclusion

We have introduced a new design for optoacoustic sensing devices featuring a mesoporous thin film as the active component. By infiltrating the layer with liquid, the elastic properties of the thin film can be gradually modified. As a result, an acoustic resonator employing these materials experiences an overall alteration of its resonance characteristics. We theoretically demonstrated the promising potential of these devices for environmental sensing. They can exhibit high quality factors of up to 1000, a nearly linear response of the peak parameters to changes in humidity, and a significant peak intensity variation of up to 60 %. The small size of the resonator and the short wavelength of GHz phonons offer a notable advantage over other sensors, as they enable significantly faster responsiveness. This means that a smaller volume of liquid infiltration is needed to achieve effective sensing. However, the requirement for all-optical experiments to investigate high-frequency acoustic phonons remains a significant roadblock for practical applications. This challenge could potentially be overcome by combining electrically contacted bulk acoustic wave resonators for efficient transduction. [50, 51] Nevertheless, our findings shed light on a novel framework for nanoacoustic sensing and adaptable devices, utilizing cost-effective fabrication techniques.

## 7 Acknowledgment

The authors acknowledge funding by the ECOS-Sud Program through the project PA19N03 (TUNA-Phon). NDL-K acknowledges funding by the European Research Council Consolidator Grant No. 101045089 (T-Recs). GJAASI acknowledges ANPCyT for projects PICT 2017-4651, PICT-2018-04236, and PICT 2020-03130. PV acknowledges CONICET for the Doctoral Fellowship.
2303.01319
DFT based investigation of bulk mechanical, thermophysical and optoelectronic properties of PbTaSe2 topological semimetal
PbTaSe2 is a non-centrosymmetric topological semimetal. In this work we have explored the structural, elastic, mechanical, bonding, electronic, acoustic, thermal, and optical properties of PbTaSe2. The electronic bond structure calculations confirm semi-metallic character. Fermi surface topology shows both electron and hole sheets. The single crystal elastic constants reveal that PbTaSe2 is elastically stable. The compound is soft, brittle, and highly machinable at the same time. It also possesses very high level of dry lubricity. Various anisotropy indicators suggest that PbTaSe2 is elastically anisotropic with layered character. The phonon dynamics has been investigated. Phonon dispersion plot shows that the compound is dynamically stable with a clear frequency gap between the acoustic and optical branches. The Debye temperature, phonon thermal conductivity, and melting temperature of PbTaSe2 is low. The compound has medium Gr\"uneisen parameter. The bonding character is mainly dominated by ionic bonding with some metallic contribution. The optical parameters have been studied in detail. The optical spectra reveal metallic features. The compound reflects visible light very efficiently (reflectance above 60%). It is also an efficient absorber of the ultraviolet light. The compound exhibits significant optical anisotropy with respect to the polarization directions of the incident electric field.
A. S. M. Muhasin Reza, S. H. Naqib
2023-03-02T14:50:27Z
http://arxiv.org/abs/2303.01319v1
# DFT based investigation of bulk mechanical, thermophysical and optoelectronic properties of PbTaSe\({}_{2}\) topological semimetal

###### Abstract

PbTaSe\({}_{2}\) is a non-centrosymmetric topological semimetal. In this work we have explored the structural, elastic, mechanical, bonding, electronic, acoustic, thermal, and optical properties of PbTaSe\({}_{2}\). The electronic band structure calculations confirm the semimetallic character. The Fermi surface topology shows both electron and hole sheets. The single crystal elastic constants reveal that PbTaSe\({}_{2}\) is elastically stable. The compound is soft, ductile, and highly machinable at the same time. It also possesses a very high level of dry lubricity. Various anisotropy indicators suggest that PbTaSe\({}_{2}\) is elastically anisotropic with a layered character. The phonon dynamics has been investigated. The phonon dispersion plot shows that the compound is dynamically stable, with a clear frequency gap between the acoustic and optical branches. The Debye temperature, phonon thermal conductivity, and melting temperature of PbTaSe\({}_{2}\) are low. The compound has a medium Grüneisen parameter. The bonding character is dominated mainly by ionic bonding with some metallic contribution. The optical parameters have been studied in detail. The optical spectra reveal metallic features. The compound reflects visible light very efficiently (reflectance above 60%). It is also an efficient absorber of ultraviolet light. The compound exhibits significant optical anisotropy with respect to the polarization directions of the incident electric field.

Keywords: Topological semimetal; Elastic properties; Thermal properties; Optoelectronic properties; Density functional theory

## 1 Introduction

Condensed matter physics is an important branch of materials physics. Broadly speaking, according to the band theory of solids, there are three types of materials based on their electronic structure: insulators, semiconductors and metals. Materials are also classified into various phases based on symmetry states and symmetry-breaking states [1, 2, 3, 4, 5, 6]. In topological materials, specific electronic states are topologically protected and are free from environmental perturbations. The compound PbTaSe\({}_{2}\) is a topological semimetal which becomes a non-centrosymmetric superconductor at 3.8 K. Structurally, the compound has Pb layers which are separated by the TaSe\({}_{2}\) layers [7, 8]. In this structure, the Pb atoms form a triangular lattice and are sandwiched between the hexagonal TaSe\({}_{2}\) layers in the ground state. The compound PbTaSe\({}_{2}\) undergoes a number of structural phase transitions with different crystal symmetries at high pressures. Angle-resolved photoemission spectroscopy (ARPES) and first-principles calculations have revealed the bulk nodal-line band structure and fully spin-polarized topological surface states in PbTaSe\({}_{2}\) [7, 8, 9]. The electronic properties of topological semimetals, such as the gapless band structure, can be used in photodetectors [10, 11, 12], while the Fermi arcs can be exploited for spintronics and qubits [13, 14, 15, 16] in nanoscale device applications. Previously, the electronic band structure, the electronic energy density of states, and the superconducting state of PbTaSe\({}_{2}\) have been studied in detail [7, 8, 10, 15, 17, 18].
On the other hand, various bulk physical properties, such as the elastic constants, mechanical characteristics, anisotropy, bonding features, optical properties, and thermal parameters in the ground state structure, have not yet been explored theoretically. A significant research gap thus exists as far as the ground state bulk physical properties of PbTaSe\({}_{2}\) are concerned. We wish to fill these research gaps in this study. The rest of the paper is arranged as follows. The computational methodology is described in Section 2. The results of the calculations are presented and discussed in Section 3. Section 4 consists of the conclusions of this work.

## 2 Computational scheme

In this work all the calculations are carried out using the plane wave pseudopotential density functional theory (DFT) as implemented in the CAmbridge Serial Total Energy Package (CASTEP) [19, 20, 21]. The exchange-correlation terms are incorporated in the total energy by using the generalized gradient approximation (GGA) [22] with the functional developed by Perdew, Burke, and Ernzerhof (PBEsol) [23] and the local density approximation (LDA). The ground state of a crystalline solid is found by solving the Kohn-Sham equation [20]. For reliable results, the selection of the pseudopotential is important. The pseudopotential gives the residual attractive interaction between an electron and an ion after taking into account the effective repulsion that arises from the exclusion principle, which demands that valence states be orthogonal to core electronic states. The on-the-fly generated (OTFG) ultrasoft pseudopotentials have been used in the calculations [24]. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization method has been adopted to find the ground state crystal structure. We have also used the density mixing scheme [25]. The following valence electron orbitals are considered for the Pb, Ta, and Se atoms, respectively: Pb [6s\({}^{2}\)6p\({}^{2}\)], Ta [5d\({}^{3}\)6s\({}^{2}\)], Se [4s\({}^{2}\)4p\({}^{4}\)]. Γ-centered k-points have been considered in the reciprocal space (Brillouin zone). A k-point mesh of size 19\(\times\)19\(\times\)6 in the Monkhorst-Pack grid scheme [26] has been used for the sampling of the first Brillouin zone (BZ) of the hexagonal unit cell of PbTaSe\({}_{2}\). A plane wave basis with a cut-off energy of 500 eV is used to expand the eigenfunctions of the valence and nearly-valence electrons. Geometry optimization has been performed using a self-consistent convergence limit of 10\({}^{-6}\) eV/atom for the energy, 0.02 eV/Å for the maximum force, 0.05 GPa for the maximum stress, and 10\({}^{-3}\) Å for the maximum atomic displacement. The optical properties of PbTaSe\({}_{2}\) have been evaluated using the ground state electronic band structure. The single crystal elastic constants have been calculated using the stress-strain method implemented in CASTEP. Thermal parameters have been studied with the help of the elastic constants and moduli. The phonon dynamical calculations are carried out using the linear response theory. We have not included the spin-orbit coupling (SOC) in the band structure calculations. This is because the focus of this work is the bulk physical properties of PbTaSe\({}_{2}\), which are not greatly affected by the SOC [3, 4, 6, 27]. Inclusion of the SOC affects mainly the surface electronic states.
The main effect of SOC is the splitting of surface sensitive electronic states of PbTaSe\({}_{2}\) by \(\sim\) 0.10 to 0.20 eV with an amplification of the topological semimetal characteristics of the compound. Such splitting does not affect the bulk physical behavior significantly. The chemical bonding natures of PbTaSe\({}_{2}\) have been explored via the Mulliken population analysis (MPA) and the Hirshfeld population analysis (HPA). ## 3 Results and analysis ### Structural properties The P\(\bar{6}\)m\({}_{2}\) structure of PbTaSe\({}_{2}\) is called the \(\alpha\)-phase. The schematic hexagonal [space group P\(\bar{6}\)m\({}_{2}\) (No 187)] crystal structure of PbTaSe\({}_{2}\) is shown in Figure 1. The unit cell consists of 4 atoms in which there is one Pb atom, one Ta atom and two Se atoms. The atomic positions and lattice parameters of the crystal are fully optimized starting with the experimental values found in earlier studies [28]. The calculated lattice constants a (= b) and c along with experimental and other theoretical values are given in Table 1. It is observed that the present values are very close to the experimental ones [7, 28, 29, 30]. Since optimization of the crystal geometry is one of the most crucial part in any ab-initio investigation, excellent agreement between the computed and experimental lattice constants imply that the results obtained in this study are reliable [31]. The lattice can be viewed as a Pb atomic layer is interspaced with two adjacent TaSe\({}_{2}\) layers. The Pb-Se bonds are comparatively weak compared to the Ta-Se bonds. The structure of PbTaSe\({}_{2}\) is highly sensitive to the external pressure. There are three other different structures of PbTaSe\({}_{2}\) which are found at different elevated pressures and temperatures. These are the P6\({}_{3}\)mc (\(\beta\)-phase), P6/mmm (\(\gamma\)-phase) and Pmmm (\(\delta\)-phase). In this work, we are concerned with the P\(\bar{6}\)m\({}_{2}\) structure of PbTaSe\({}_{2}\) which is stable under normal pressure and temperature. Figure 1: Schematic crystal structure of PbTaSe\({}_{2}\). The crystallographic directions are shown. ### Elastic properties #### 3.2.1 The stiffness constants All the mechanical properties of crystalline solids are determined by the elastic constants (C\({}_{\rm ij}\)). The possible applications are also limited by the elastic behavior. The elastic constants (C\({}_{\rm ij}\)) are connected with the bonding characteristics of materials. The elastic constants also determine the mechanical stability of a solid. The bulk elastic behavior is understood from the polycrystalline elastic moduli. The compound under study is hexagonal and, therefore, it has five independent elastic constants: C\({}_{11}\), C\({}_{12}\), C\({}_{13}\), C\({}_{33}\), and C\({}_{44}\). The elastic constant C\({}_{66}\) is not independent and can be expressed as C\({}_{66}\) =\(\frac{c_{11}-c_{12}}{2}\). The mechanical stability conditions of a hexagonal crystal system can be expressed as follows [32, 33]. \[\text{C}_{11}\textgreater 0;\ \text{C}_{11}\textgreater\text{C}_{12};\ \text{C}_{44} \textgreater 0;\ \text{(C}_{11}\text{+}\text{C}_{12})\text{C}_{33}\text{-}\text{2(C}_{13})^{2} \textgreater 0 \tag{1}\] Above conditions are satisfied by PbTaSe\({}_{2}\), and hence the compound is expected to be mechanically stable. The computed single crystal elastic constants are given in Table 2. 
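As a quick numerical verification of eq. (1), the minimal sketch below (Python; the C\({}_{ij}\) values are those of Table 2) checks each stability condition explicitly, together with the hexagonal constraint on C\({}_{66}\):

```python
# Elastic constants of PbTaSe2 in GPa (Table 2)
C11, C12, C13, C33, C44, C66 = 129.5, 38.36, 11.17, 144.43, 13.08, 45.57

print("C11 > 0:", C11 > 0)
print("C11 > C12:", C11 > C12)
print("C44 > 0:", C44 > 0)
print("(C11 + C12)*C33 - 2*C13^2 > 0:", (C11 + C12) * C33 - 2 * C13**2 > 0)
# C66 is not independent: it must equal (C11 - C12)/2
print("C66 == (C11 - C12)/2:", abs(C66 - (C11 - C12) / 2) < 0.01)
```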
There is no available data on the elastic constants of hexagonal PbTaSe\({}_{2}\) in the P\(\bar{6}\)m\({}_{2}\) structure, to the best of our knowledge. The three diagonal elastic constants C\({}_{11}\), C\({}_{22}\) and C\({}_{33}\) give a measure of the resistance of the crystal to uniaxial stresses along the a-, b- and c-axes, respectively. The non-diagonal elastic constants are related to the shape-deforming shearing stresses. Among these, C\({}_{44}\) is closely linked to the hardness of the compound. The very low value of C\({}_{44}\) implies that PbTaSe\({}_{2}\) is a soft material. The different values of C\({}_{11}\) (= C\({}_{22}\)) and C\({}_{33}\) are a reflection of the unequal bonding strengths in PbTaSe\({}_{2}\) in the ab-plane and along the c-direction.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Compound & a (Å) & c (Å) & c/a & Volume V\({}_{0}\) (Å\({}^{3}\)) & Ref. \\ \hline PbTaSe\({}_{2}\) & 3.40 & 9.37 & 2.75 & – & [28]\({}^{\text{exp.}}\) \\ & 3.39 & 9.29 & 2.74 & – & [28]\({}^{\text{theo.}}\) \\ & 3.44 & 9.35 & 2.71 & – & [29, 31]\({}^{\text{exp.}}\) \\ & 3.41 & 9.38 & 2.74 & – & [29]\({}^{\text{exp.}}\) \\ & 3.45 & 9.35 & 2.71 & – & [30]\({}^{\text{exp.}}\) \\ & 3.43 & 9.38 & 2.73 & – & [30]\({}^{\text{exp.}}\) \\ & 3.41 & 9.34 & 2.73 & 94.5 & This work \\ \hline \end{tabular}
\end{table} Table 1: Calculated and experimental lattice constants a = b and c, c/a ratio, and equilibrium cell volume of hexagonal PbTaSe\({}_{2}\).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Compound & C\({}_{11}\)= C\({}_{22}\) & C\({}_{12}\) & C\({}_{13}\) = C\({}_{23}\) & C\({}_{33}\) & C\({}_{44}\) = C\({}_{55}\) & C\({}_{66}\) \\ \hline PbTaSe\({}_{2}\) & 129.5 & 38.36 & 11.17 & 144.43 & 13.08 & 45.57 \\ \hline \end{tabular}
\end{table} Table 2: The calculated elastic constants, C\({}_{\rm ij}\) (GPa), of PbTaSe\({}_{2}\) in the ground state.

#### 3.2.2 Elastic moduli and parameters

The elastic moduli in the polycrystalline state can be calculated from the single crystal elastic constants using the well-known standard formulae [34, 35, 36, 37, 38, 39, 40, 41]. Some other useful bulk elastic indicators, e.g., the Pugh's ratio (k), the Poisson's ratio (\(\sigma\)), and the machinability index (\(\mu_{M}\)), can be calculated from the elastic moduli as follows [39, 40, 41]: \[k=\frac{B}{G} \tag{2}\] \[\sigma=\frac{3B-2G}{6B+2G}\] (3) \[\mu_{M}=\frac{B}{C_{44}} \tag{4}\] In Eqns. 2 - 4, B and G denote the bulk modulus and the modulus of rigidity, respectively. The macro hardness (H\({}_{macro}\)) and the micro hardness (H\({}_{micro}\)) parameters of PbTaSe\({}_{2}\) are also calculated using the following formulae [39, 42, 43]. \[H_{macro}=2\Big{[}\Big{(}\frac{G}{B}\Big{)}^{2}G\Big{]}^{0.585}-3 \tag{5}\] \[H_{micro}=\frac{(1-2\sigma)Y}{6(1+\sigma)} \tag{6}\] In Eqn. 6, Y is the Young's modulus of the solid. The Cauchy pressure, C\({}_{p}\), is another useful elastic parameter, closely related to the bonding character and the brittleness/ductility of solids. For hexagonal crystals, C\({}_{p}\) = (C\({}_{12}\) - C\({}_{44}\)). The computed elastic moduli and parameters are shown in Table 3 below. The machinability index measures the ease with which a solid can be cut into desired shapes. It also measures the level of plasticity and dry lubricity of a compound [44, 45, 46, 47]. The elastic moduli B, G and Y quantify the resistance of the bulk material to volume-, shape- and length-changing deformations, respectively. It is seen that the elastic moduli of PbTaSe\({}_{2}\) are low. The machinability index, on the other hand, is very high. This implies that the compound under consideration is highly machinable and has significant dry lubricity. Both the macro and micro hardnesses are low, suggesting that the overall bonding strength in PbTaSe\({}_{2}\) is weak. The Pugh's ratio can distinguish between the mechanical failure modes of materials. Solids having a Pugh's ratio greater than 1.75 are ductile in nature. If the Pugh's ratio is below this value, then the material is predicted to be brittle [42]. The obtained value of the Pugh's ratio of PbTaSe\({}_{2}\) is 1.91, which is greater than 1.75. Therefore, PbTaSe\({}_{2}\) should show ductility. The nature of bonding can be gauged from the Poisson's ratio. Central force interactions dominate in solids when the value of the Poisson's ratio lies between 0.25 and 0.50 [48]. For PbTaSe\({}_{2}\), the value of the Poisson's ratio is 0.27, meaning that the chemical bondings have a central force nature. Furthermore, for a purely covalent crystal the value of the Poisson's ratio is around 0.10, and for a crystal with metallic bonding it is around 0.33 [49]. Therefore, we expect a notable contribution of metallic bonding in PbTaSe\({}_{2}\). The value of the Poisson's ratio can also be used to determine brittleness/ductility. If \(\sigma\) is less than the critical value of 0.26, then a material is predicted to be brittle; it will be ductile otherwise. The Poisson's ratio is 0.27 for PbTaSe\({}_{2}\), indicating a ductile nature. The Cauchy pressure is another tool used to separate materials into the brittle or ductile group, where the critical value of C\({}_{p}\) is zero [50]. A negative value of C\({}_{p}\) indicates that the solid is brittle in nature. The obtained value of C\({}_{p}\) of PbTaSe\({}_{2}\) is positive, signifying that our material is ductile in nature. According to the Pettifor's rule [51], materials with a large positive Cauchy pressure have significant metallic bonding. On the contrary, a negative Cauchy pressure suggests that angular bonds dominate, which makes a compound brittle. The results presented in Table 3 are novel, and therefore no comparison with any other reported values can be made.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Compound & B & G & Y & \(\mu_{M}\) & H\({}_{macro}\) & H\({}_{micro}\) & k & \(\sigma\) & C\({}_{p}\) \\ \hline PbTaSe\({}_{2}\) & 58.27 & 30.43 & 77.75 & 4.45 & 4.51 & 3.49 & 1.91 & 0.27 & 25.28 \\ \hline \end{tabular}
\end{table} Table 3: The computed bulk modulus B (GPa), shear modulus G (GPa), Young’s modulus Y (GPa), machinability index \(\mu_{M}\), macro hardness H\({}_{macro}\) (GPa), micro hardness H\({}_{micro}\) (GPa), Pugh’s ratio k, Poisson’s ratio \(\sigma\) and the Cauchy pressure C\({}_{p}\) (GPa) of PbTaSe\({}_{2}\).
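Although the polycrystalline averaging formulae are only cited above [34, 35, 36, 37, 38, 39, 40, 41], the Table 3 moduli can be cross-checked with the standard Voigt-Reuss-Hill (VRH) expressions for hexagonal crystals. The sketch below uses the textbook hexagonal VRH relations (our assumption for the "standard formulae") together with eqs. (2)-(6):

```python
# VRH polycrystalline moduli for hexagonal PbTaSe2 (all in GPa), Table 2 input
C11, C12, C13, C33, C44 = 129.5, 38.36, 11.17, 144.43, 13.08
C66 = (C11 - C12) / 2

M  = C11 + C12 + 2 * C33 - 4 * C13
C2 = (C11 + C12) * C33 - 2 * C13**2

BV = (2 * (C11 + C12) + 4 * C13 + C33) / 9            # Voigt bulk modulus
GV = (M + 12 * C44 + 12 * C66) / 30                   # Voigt shear modulus
BR = C2 / M                                           # Reuss bulk modulus
GR = 2.5 * (C2 * C44 * C66) / (3 * BV * C44 * C66 + C2 * (C44 + C66))

B, G = (BV + BR) / 2, (GV + GR) / 2                   # Hill averages
Y = 9 * B * G / (3 * B + G)                           # Young's modulus
sigma = (3 * B - 2 * G) / (6 * B + 2 * G)             # Poisson's ratio, eq. (3)
k = B / G                                             # Pugh's ratio, eq. (2)
mu_M = B / C44                                        # machinability index, eq. (4)
H_macro = 2 * ((G / B)**2 * G)**0.585 - 3             # eq. (5)
H_micro = (1 - 2 * sigma) * Y / (6 * (1 + sigma))     # eq. (6)

print(f"B = {B:.2f}, G = {G:.2f}, Y = {Y:.2f}")       # ~58.3, ~30.4, ~77.8 (Table 3)
print(f"k = {k:.2f}, sigma = {sigma:.2f}, mu_M = {mu_M:.2f}")
# Hardness estimates are more sensitive to rounding and conventions; compare Table 3
print(f"H_macro = {H_macro:.2f}, H_micro = {H_micro:.2f}")
```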
It is seen that the elastic moduli of PbTaSe\({}_{2}\) are low. The machinability index, on the other hand, is very high. This implies that the compound under consideration is highly machinable and has significant dry lubricity. Both the macro and micro hardnesses are low, suggesting that the overall bonding strength in PbTaSe\({}_{2}\) is weak.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline Compound & B & G & Y & \(\mu_{M}\) & H\({}_{macro}\) & H\({}_{micro}\) & k & \(\sigma\) & C\({}_{p}\) \\
\hline PbTaSe\({}_{2}\) & 58.27 & 30.43 & 77.75 & 4.45 & 4.51 & 3.49 & 1.91 & 0.27 & 25.28 \\
\hline
\end{tabular}
\end{table}
Table 3: The computed bulk modulus B (GPa), shear modulus G (GPa), Young's modulus Y (GPa), machinability index \(\mu_{M}\), macro hardness H\({}_{macro}\) (GPa), micro hardness H\({}_{micro}\) (GPa), Pugh's ratio k, Poisson's ratio \(\sigma\) and Cauchy pressure C\({}_{p}\) (GPa) of PbTaSe\({}_{2}\).

The Pugh's ratio can distinguish between the mechanical failure modes of materials. Solids having a Pugh's ratio greater than 1.75 are ductile in nature; if the Pugh's ratio is below this value, the material is predicted to be brittle [42]. The obtained value of the Pugh's ratio of PbTaSe\({}_{2}\) is 1.91, which is greater than 1.75. Therefore, PbTaSe\({}_{2}\) should show ductility. The nature of bonding can be gauged from the Poisson's ratio. Central force interactions dominate in solids when the value of the Poisson's ratio lies between 0.25 and 0.50 [48]. For PbTaSe\({}_{2}\), the value of the Poisson's ratio is 0.27, meaning that the chemical bonds have central-force nature. Furthermore, for a purely covalent crystal the value of the Poisson's ratio is around 0.10, while for a crystal with metallic bonding it is around 0.33 [49]. Therefore, we expect a notable contribution of metallic bonding in PbTaSe\({}_{2}\). The value of the Poisson's ratio can also be used to determine brittleness/ductility. If \(\sigma\) is less than the critical value 0.26, then a material is predicted to be brittle; it will be ductile otherwise. The Poisson's ratio is 0.27 for PbTaSe\({}_{2}\), indicating a ductile nature. The Cauchy pressure is another tool used to separate materials into the brittle or ductile group, where the critical value of C\({}_{\rm p}\) is zero [50]. A negative value of C\({}_{\rm p}\) indicates that the solid is brittle in nature. The obtained value of C\({}_{\rm p}\) of PbTaSe\({}_{2}\) is positive, signifying that our material is ductile in nature. According to the Pettifor's rule [51], materials with a large positive Cauchy pressure have significant metallic bonds. On the contrary, a negative Cauchy pressure suggests that angular bonds are dominating, which makes a compound brittle. The results presented in Table 3 are novel and, therefore, no comparison with any other reported values can be made.

#### 3.2.3 Elastic anisotropy

Elastic anisotropy controls many important mechanical properties of solids related to their practical applications [14]. The elastic anisotropy of PbTaSe\({}_{2}\) has been studied by means of different anisotropy indices. The shear anisotropy factors measure the bonding strength among atoms in different planes. There are three different shear anisotropy factors for a hexagonal crystal. These factors can be calculated from the following equations [52]:

\[\mathrm{A_{1}}=\frac{(C_{11}+C_{12}+2C_{33}-4C_{13})}{6C_{44}} \tag{7}\]

for the {100} shear planes between the \(<\)011\(>\) and \(<\)010\(>\) directions,

\[\mathrm{A_{2}}=\frac{2C_{44}}{C_{11}-C_{12}} \tag{8}\]

for the {010} shear planes between the \(<\)101\(>\) and \(<\)001\(>\) directions, and

\[\mathrm{A_{3}}=\frac{C_{11}+C_{12}+2C_{33}-4C_{13}}{3(C_{11}-C_{12})} \tag{9}\]

for the {001} shear planes between the \(<\)110\(>\) and \(<\)010\(>\) directions. The shear anisotropy factors \(\mathrm{A_{i}}\) (i = 1, 2, 3) have the unit value for isotropic crystals. Departure from unity quantifies the level of anisotropy in shape changing deformation. The directional bulk moduli along the a- and c-directions can be calculated by using the following relations [53]:

\[\mathrm{B_{a}}=\frac{\Lambda}{2+\alpha} \tag{10}\]
\[\mathrm{B_{c}}=\frac{\mathrm{B_{a}}}{\alpha} \tag{11}\]

with

\[\Lambda=2(C_{11}+C_{12})+4C_{13}\alpha+C_{33}\alpha^{2} \tag{12}\]
\[\alpha=\frac{C_{11}+C_{12}-2C_{13}}{C_{33}-C_{13}} \tag{13}\]

The linear compressibility coefficients along the a- and c-directions help us understand the in-plane and out-of-plane anisotropy in the bonding strengths. The relevant expressions are [52]:

\[\mathrm{K_{c}=(C_{11}+C_{12}-2C_{13})} \tag{14}\]
\[\mathrm{K_{a}=(C_{33}-C_{13})} \tag{15}\]
\[\mathrm{K_{c}/K_{a}=\frac{(C_{11}+C_{12}-2C_{13})}{(C_{33}-C_{13})}} \tag{16}\]

All these anisotropy factors are evaluated and listed in Table 4. For isotropic crystals, \(\mathrm{B_{a}=B_{c}}\) and \(\mathrm{K_{c}/K_{a}=1}\). Table 4 reveals that the shear anisotropy is significant in PbTaSe\({}_{2}\). The anisotropies in the bulk moduli and compressibilities are moderate.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline Compound & \(\mathrm{A_{1}}\) & \(\mathrm{A_{2}}\) & \(\mathrm{A_{3}}\) & \(\mathrm{B_{a}}\) & \(\mathrm{B_{c}}\) & \(\mathrm{K_{c}/K_{a}}\) \\
\hline PbTaSe\({}_{2}\) & 5.24 & 0.28 & 1.50 & 171.65 & 183.58 & 1.09 \\
\hline
\end{tabular}
\end{table}
Table 4: Shear anisotropy factors (\(\mathrm{A_{1}}\), \(\mathrm{A_{2}}\) and \(\mathrm{A_{3}}\)), directional bulk moduli (\(\mathrm{B_{a}}\), \(\mathrm{B_{c}}\) in GPa) and the ratio of linear compressibility coefficients (\(\mathrm{K_{c}/K_{a}}\)) of PbTaSe\({}_{2}\).
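The shear anisotropy factors and the compressibility ratio follow directly from the Table 2 constants; a minimal sketch (matching Table 4 up to rounding):

```python
# Sketch: anisotropy indices of Eqs. (7)-(9) and (16) from the Table 2
# elastic constants (GPa).
C11, C12, C13, C33, C44 = 129.5, 38.36, 11.17, 144.43, 13.08

A1 = (C11 + C12 + 2 * C33 - 4 * C13) / (6 * C44)          # {100} planes, Eq. (7)
A2 = 2 * C44 / (C11 - C12)                                # {010} planes, Eq. (8)
A3 = (C11 + C12 + 2 * C33 - 4 * C13) / (3 * (C11 - C12))  # {001} planes, Eq. (9)
Kc_Ka = (C11 + C12 - 2 * C13) / (C33 - C13)               # Eq. (16)

print(f"A1 = {A1:.2f}, A2 = {A2:.2f}, A3 = {A3:.2f}, Kc/Ka = {Kc_Ka:.2f}")
```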
### Electronic Properties

#### 3.4.1 Electronic band structure

The electronic band structure is one of the most important features of a solid; it controls all the electronic and optical properties. It also gives information regarding the bonding in a crystal and the electronic stability of the system. The bulk electronic band structure as a function of energy (E-E\({}_{F}\)) along the high-symmetry directions in the reciprocal space is calculated in the ground state and is shown in Figure 2. The Fermi level (E\({}_{F}\)) is indicated by the horizontal broken line, which is set at zero eV. The bulk electronic band structure shows several interesting features. The compound is semimetallic, owing to the weak crossing of the energy bands through the Fermi level around the \(\Gamma\)-point. The conduction band is hole-like in this case. There is a clear signature of semimetallic topological behavior close to the H-point along the A-H line, where two nearly linear band segments touch each other exactly at the Fermi level. The bands close to the Fermi level originate from the Pb-6p and Pb-6s and the Se-4p and Se-4s electronic orbitals. The contribution from the electrons of the Ta atom is minute. There is significant sp-hybridization.

#### 3.4.2 Electronic energy density of states (DOS)

The total and partial densities of states (TDOS and PDOS, respectively) of PbTaSe\({}_{2}\) are calculated from the ground state electronic band structure results.
The PDOS and TDOS plots are given in Fig. 3. The vertical broken line indicates the Fermi level. The non-zero value of the TDOS at the Fermi level confirms that PbTaSe\({}_{2}\) will exhibit metallic electrical conductivity. To get the contribution of each atom to the TDOS of PbTaSe\({}_{2}\), we have shown the PDOS of the Pb, Ta and Se atoms. The TDOS value at the Fermi level is 1.86 states/eV-unit cell. The large peaks in the TDOS, centered at -3.5 eV in the valence band and at 3.0 eV in the conduction band, are principally responsible for the charge transport and optoelectronic properties of PbTaSe\({}_{2}\). These two peaks are due to the Pb-6s, Pb-6p, Se-4s and Se-4p electronic states. The overall contribution of the electronic states of the Ta atom in the energy range shown in Fig. 3 is small. The Fermi level is located to the left of a pseudogap at 0.51 eV in the bonding region. This suggests that PbTaSe\({}_{2}\) has high electronic and structural stability. The degree of electronic correlation in PbTaSe\({}_{2}\) can be estimated from the TDOS at the Fermi level, N(E\({}_{\text{F}}\)). The repulsive Coulomb pseudopotential, V\({}_{\text{c}}\), is a measure of the electronic correlation, which is related to N(E\({}_{\text{F}}\)) as follows [54]:

\[\text{V}_{\text{c}}=0.26\text{N}(\text{E}_{\text{F}})/[1+\text{N}(\text{E}_{\text{F}})] \tag{17}\]

The calculated value of V\({}_{\text{c}}\) turns out to be 0.15. This shows that the electronic correlation is not weak in PbTaSe\({}_{2}\).

Figure 2: Electronic band structure of PbTaSe\({}_{2}\) in the ground state. The dashed horizontal line marks the Fermi energy (set to 0 eV).

#### 3.4.3 Electronic charge density distribution

The charge distribution around the atoms inside the crystal is very important for understanding the bonding nature. In this section we study the electronic charge density distribution in different crystal planes of PbTaSe\({}_{2}\). The electronic charge density distribution of PbTaSe\({}_{2}\) in the (111) and (001) planes is shown in Fig. 4. The color scales beside the panels represent the total electron density. The blue color indicates high charge (electron) density and the red color indicates low charge (electron) density. The charge density maps show that the atoms in the (111) planes are mainly bound by metallic bonding. The same is true for the atoms in the (001) plane, with a slight tendency towards covalent bonding. This tendency is understood from the small deviation of the shapes of the charge density distribution around the Ta and Se atoms. From the charge density maps of PbTaSe\({}_{2}\) in both planes, we can see that the Ta and Se atoms have high electron density compared to the Pb atoms. The low charge concentration around the Pb atoms implies that the uniform background charges (the red region) probably come primarily from the Pb electrons in the conduction band.

Figure 3: Total and partial electronic densities of states (TDOS and PDOS) of PbTaSe\({}_{2}\) in the ground state.

#### 3.4.4 Fermi surface

The Fermi surface of PbTaSe\({}_{2}\) is shown in Fig. 5. From the band structure of PbTaSe\({}_{2}\), we have seen that three bands cross the Fermi level. These three bands contribute to the three sections of the Fermi surface. The weak crossing around the H point in the BZ gives rise to the six small segments of the electronic Fermi surface with hexagonal symmetry. The Fermi sheet enclosing the central ellipsoid has hole-like character, while the central ellipsoid itself is electron-like.
This central part is expected to control the charge transport and optical properties of PbTaSe\({}_{2}\). The deviation of the Fermi sheets from a spherical shape indicates that the electronic properties of PbTaSe\({}_{2}\) are anisotropic.

Figure 4: The electronic charge density distribution maps for PbTaSe\({}_{2}\) in the (a) (111) and (b) (001) planes. The color scale quantifies the amount of charge (in units of the electronic charge).

Figure 5: Fermi surface of PbTaSe\({}_{2}\) seen from different angles.

### Phonon and acoustic properties

#### 3.5.1 Phonon dispersion curves

Phonons are the quanta of lattice vibrations. A large number of electrical, thermal and lattice dynamical properties of crystalline solids depend on the phonon spectrum. The phonon spectrum is determined by the crystal symmetry, the stiffness constants and the masses of the constituent atoms [55, 56, 57]. We have calculated the phonon dispersion curves and the phonon density of states (PHDOS) of the PbTaSe\({}_{2}\) compound along the high symmetry directions of the BZ. The calculated curves are shown in Fig. 6. As can be seen from Fig. 6a, no negative energy phonon branch exists in the dispersion curves. This is a clear indication of the dynamical stability of the compound under study. The acoustic and the optical branches are separated by a frequency gap. The high PHDOS regions for the acoustic branches at low phonon frequencies should contribute significantly to the thermal transport. The highest energy phonon dispersion branches are due to the vibration of the light Se atoms in the crystal.

Figure 6: Calculated (a) phonon dispersion spectra and (b) PHDOS for the PbTaSe\({}_{2}\) compound at zero pressure.

#### 3.5.2 Acoustic properties

The sound velocity through a material is very important in relation to the thermal and electrical conduction. The average sound velocity in a solid, \(\mathrm{v_{m}}\), is related to the shear modulus (G) and the bulk modulus (B). The \(\mathrm{v_{m}}\) is obtained from the longitudinal and transverse sound velocities, \(\mathrm{v_{l}}\) and \(\mathrm{v_{t}}\), respectively, as

\[\mathrm{v_{m}}=\Big{[}\frac{1}{3}\Big{(}\frac{1}{\mathrm{v_{l}^{3}}}+\frac{2}{\mathrm{v_{t}^{3}}}\Big{)}\Big{]}^{-1/3} \tag{18}\]

The velocities \(\mathrm{v_{l}}\) and \(\mathrm{v_{t}}\) are calculated from the following equations [40]:

\[\mathrm{v_{l}}=\Big{[}\frac{3B+4G}{3\rho}\Big{]}^{1/2} \tag{19}\]
\[\mathrm{and}\ \mathrm{v_{t}}=\Big{[}\frac{G}{\rho}\Big{]}^{1/2} \tag{20}\]

where \(\rho\) is the mass density. Table 7 exhibits the calculated crystal density and the acoustic velocities of PbTaSe\({}_{2}\).
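Equations (18)-(20) are easy to evaluate. In the sketch below the density is estimated from the Table 1 cell volume and the formula mass, since Table 7 is not reproduced here; the authors' own value may differ slightly.

```python
# Sketch: sound velocities from Eqs. (18)-(20). rho is estimated from the
# Table 1 cell volume (94.5 A^3 per PbTaSe2 formula unit); the authors' own
# rho (Table 7) may differ slightly.
N_A = 6.02214e23
M_cell = (207.2 + 180.95 + 2 * 78.97) * 1e-3    # kg/mol (Pb + Ta + 2 Se)
V_cell = 94.5e-30                               # m^3, Table 1
rho = M_cell / (N_A * V_cell)                   # ~9.6e3 kg/m^3

B, G = 58.27e9, 30.43e9                         # Pa, Table 3

v_l = ((3 * B + 4 * G) / (3 * rho)) ** 0.5                 # Eq. (19)
v_t = (G / rho) ** 0.5                                     # Eq. (20)
v_m = ((1 / v_l**3 + 2 / v_t**3) / 3) ** (-1 / 3)          # Eq. (18)
print(f"rho = {rho:.0f} kg/m^3; v_l = {v_l:.0f}, v_t = {v_t:.0f}, v_m = {v_m:.0f} m/s")
```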
### Thermal properties

#### 3.6.1 Debye temperature

The Debye temperature is one of the most prominent thermophysical parameters of a material. It is connected to the phonon thermal conductivity, heat capacity, melting temperature, superconducting transition temperature, and electrical conductivity of solids. There are several methods for calculating the Debye temperature, \(\mathrm{\theta_{D}}\). Among those, the one proposed by Anderson [58] is simple and results in a reliable estimation of the Debye temperature of solids belonging to different crystal symmetries. The relevant expression is given below:

\[\mathrm{\theta_{D}}=\frac{h}{k_{B}}\Big{[}\Big{(}\frac{3n}{4\pi}\Big{)}\frac{N_{A}\rho}{M}\Big{]}^{1/3}\mathrm{v_{m}} \tag{21}\]

In the above equation, \(\mathrm{h}\) is the Planck constant, \(\mathrm{k_{B}}\) is the Boltzmann constant, \(\mathrm{n}\) refers to the number of atoms in a molecule, \(\mathrm{N_{A}}\) is the Avogadro number, \(\mathrm{\rho}\) is the mass density, \(\mathrm{M}\) is the molecular mass and \(\mathrm{v_{m}}\) is the average velocity of sound in the crystalline solid. The calculated Debye temperature is given in Table 8. The Debye temperature of PbTaSe\({}_{2}\) is low, 216.53 K. Thus the phonon thermal conductivity of this compound is expected to be low as well. This low value also points towards weak bonding among the atoms of PbTaSe\({}_{2}\).

#### 3.6.2 The melting temperature

Information on the melting temperature (T\({}_{\mathrm{m}}\)) is necessary for materials to be used at elevated temperatures. The melting temperature is related to the bonding strength and the cohesive energy of the crystal. These parameters also determine the elastic constants. Fine et al. [59] developed a formula for calculating the melting temperature of hexagonal crystals using the elastic constants, as follows:

\[\mathrm{T_{m}}=354+1.5(2C_{11}+C_{33}) \tag{22}\]

The calculated melting temperature of PbTaSe\({}_{2}\) is listed in Table 8. The relatively low melting temperature (959.16 K) of PbTaSe\({}_{2}\) is consistent with its low Debye temperature, hardness and various elastic moduli.

#### 3.6.3 Thermal conductivity

Thermal conductivity is an important thermal transport coefficient that gives the measure of the efficiency of heat transfer by a material. This parameter is temperature dependent. The minimum thermal conductivity (K\({}_{\text{min}}\)) is the limiting value of the thermal conductivity at high temperature, when the phonon contribution to the thermal conductivity (K\({}_{\text{ph}}\)) reaches its minimum value and becomes independent of temperature. Based on the Debye model, Clarke derived the formula to calculate the minimum thermal conductivity (\(K^{Clarke}_{min}\)) using the following equation [60]:

\[K^{Clarke}_{min}=\text{k}_{\text{B}}\nu_{\text{m}}\Big{[}\frac{n\rho N_{A}}{M}\Big{]}^{2/3} \tag{23}\]

The minimum thermal conductivity can also be estimated employing the Cahill formalism, where the phonon spectrum is considered within the Einstein model. The Cahill formula [61] for the minimum thermal conductivity is given below:

\[K^{Cahill}_{min}=\frac{\text{k}_{B}}{2.48}n^{\frac{2}{3}}(\nu_{l}+2\nu_{t}) \tag{24}\]

where n here denotes the number density of atoms. The calculated minimum thermal conductivities of PbTaSe\({}_{2}\) are tabulated below (Table 8). The minimum thermal conductivity of PbTaSe\({}_{2}\) is very low. The semimetallic character of PbTaSe\({}_{2}\) implies that the electronic contribution to the thermal conductivity at high temperature should also be low. Thus the overall low thermal conductivity of PbTaSe\({}_{2}\) can make it a useful thermal barrier material.
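A minimal sketch of Eqs. (21)-(24) follows. Since it reuses the density and velocities estimated above rather than the authors' Table 7 inputs, the outputs deviate from Table 8 at the few-percent level; the melting temperature of Eq. (22), which depends only on the C\({}_{\rm ij}\), is reproduced exactly.

```python
# Sketch: Eqs. (21)-(24) with inputs estimated from Tables 1-3 above.
h, k_B, N_A, pi = 6.62607e-34, 1.38065e-23, 6.02214e23, 3.141592653589793

n, M, rho = 4, 546.1e-3, 9.6e3                 # atoms/formula unit; kg/mol; kg/m^3
v_l, v_t, v_m = 3210.0, 1781.0, 1984.0         # m/s, from Eqs. (18)-(20)
C11, C33 = 129.5, 144.43                       # GPa, Table 2

theta_D = (h / k_B) * (3 * n * N_A * rho / (4 * pi * M)) ** (1 / 3) * v_m  # Eq. (21)
T_m = 354 + 1.5 * (2 * C11 + C33)                                          # Eq. (22)

n_at = n * rho * N_A / M                       # atomic number density, m^-3
K_clarke = k_B * v_m * n_at ** (2 / 3)                        # Eq. (23)
K_cahill = (k_B / 2.48) * n_at ** (2 / 3) * (v_l + 2 * v_t)   # Eq. (24)

print(f"theta_D ~ {theta_D:.0f} K; T_m = {T_m:.2f} K")
print(f"K_min (Clarke) ~ {K_clarke:.2f}, K_min (Cahill) ~ {K_cahill:.2f} W/m-K")
```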
The temperature dependent phonon thermal conductivity of PbTaSe\({}_{2}\) can be estimated using the scheme developed by Slack [62]. The Slack formula for the temperature dependent phonon thermal conductivity is as follows:

\[\text{K}_{\text{ph}}(\text{T})=\text{A}\Big{[}\frac{\text{M}_{\text{av}}\,\theta_{D}^{3}\,\delta}{\gamma^{2}N^{2/3}\text{T}}\Big{]} \tag{25}\]

In Eqn. 25, M\({}_{\text{av}}\) is the average atomic mass (kg/mol) in the molecule, \(\delta\) denotes the cubic root of the average atomic volume, N denotes the number of atoms present in the unit cell, and \(\gamma\) denotes the Grüneisen constant, calculated using the Poisson's ratio (\(\sigma\)). \(A\) is a parameter (in W-mol/kg/m\({}^{2}\)/K\({}^{3}\)) depending on \(\gamma\), which is calculated from [63]:

\[A(\gamma)=\frac{5.720\times 10^{7}\times 0.849}{2\times\big{[}1-\frac{0.514}{\gamma}+\frac{0.228}{\gamma^{2}}\big{]}} \tag{26}\]

The Grüneisen parameter gives the measure of the anharmonicity in a solid. The Grüneisen parameter \(\gamma\) is an important quantity in thermodynamics and lattice dynamics because it is related to the bulk modulus, the heat capacity at constant volume, the thermal expansion coefficient, the lattice thermal conduction and the volume of the solid. The Grüneisen parameter can be evaluated from the Poisson's ratio as follows [64]:

\[\gamma=\frac{3(1+\sigma)}{2(2-3\sigma)} \tag{27}\]

For crystalline materials the value of \(\gamma\) is usually found in the range of 0.80 - 3.50 [65, 66, 67]. The calculated value of \(\gamma\) of PbTaSe\({}_{2}\) is 2.24, which is well within the established range. The value is in the medium range, which implies a medium level of anharmonic effects in PbTaSe\({}_{2}\). All the thermophysical parameters calculated in this section are presented in Table 8 below.

### Bond population analysis

The Mulliken bond populations are calculated to understand the bonding nature (ionic, covalent and metallic) of PbTaSe\({}_{2}\). We have also performed the Hirshfeld population analysis (HPA). The calculated values of the atomic populations and the relevant parameters are presented in Table 9. It is seen from Table 9 that, in PbTaSe\({}_{2}\), electrons are transferred from Pb and Ta to Se. This is an indication of ionic bonds. The electron transfer can be attributed to the difference in the electron affinities of Se, Ta, and Pb. On the other hand, the non-zero effective valence charge implies that there are some covalent contributions as well. The effective valences in the Mulliken population analysis (MPA) and the HPA are different. This is actually expected, since the MPA depends on the basis sets used to approximate the wave functions of the orbitals, while the HPA is independent of the basis sets.

### Optical properties

We have studied the optical properties of PbTaSe\({}_{2}\) to explore the possible scope of its use in the optical sector. In this section we have calculated optical properties such as the absorption coefficient, dielectric constant, photoconductivity, refractive index, reflectivity and loss function in the photon energy range up to 20 eV, for the two different polarization directions [100] and [001]. The methodology for the optical calculations is detailed in Refs. [68, 69]. A Gaussian smearing of 0.5 eV, a Drude energy of 0.05 eV and an unscreened plasma energy of 10 eV were used to calculate the optical parameters as a function of the incident photon energy. All the computed optical parameters are shown in Fig. 7 below. The real part of the dielectric function, \(\varepsilon_{1}(\omega)\), is shown in Fig. 7a. The spectra of \(\varepsilon_{1}\) start from a negative value, show a peak around \(\sim\)3.5, and cross the zero line at around 17.5 eV. This is typical metallic behavior. Fig. 7a also shows the imaginary part of the dielectric function, \(\varepsilon_{2}(\omega)\). This is related to the photon absorption characteristics of PbTaSe\({}_{2}\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline \multirow{2}{*}{Compound} & \multirow{2}{*}{\(\theta_{\mathrm{D}}\) (K)} & \multicolumn{2}{c|}{K\({}_{\mathrm{min}}\) (W/m-K)} & \multirow{2}{*}{\(\gamma\)} & \multirow{2}{*}{K\({}_{\mathrm{ph}}\) (W/m-K)} & \multirow{2}{*}{T\({}_{\mathrm{m}}\) (K)} \\
\cline{3-4} & & \(K^{Clarke}_{min}\) & \(K^{Cahill}_{min}\) & & & \\
\hline PbTaSe\({}_{2}\) & 216.5 & 0.414 & 0.468 & 2.24 & 11.36 & 959.16 \\
\hline
\end{tabular}
\end{table}
Table 8: The Debye temperature \(\theta_{\mathrm{D}}\), minimum thermal conductivity K\({}_{\mathrm{min}}\), Grüneisen parameter \(\gamma\), lattice thermal conductivity K\({}_{\mathrm{ph}}\) at 300 K and the melting temperature T\({}_{\mathrm{m}}\) of the PbTaSe\({}_{2}\) compound.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline Charge spilling (\%) & Atom & \multicolumn{4}{c|}{Mulliken atomic population} & Total & Mulliken charge & Formal charge & Effective valence (Mulliken) & Effective valence (Hirshfeld) \\
\hline \multirow{4}{*}{3.66} & Pb & 3.33 & 8.21 & 10.0 & 0.0 & 21.54 & 0.46 & +2 & 1.54 & 1.92 \\
\cline{2-11} & Ta & 2.00 & 6.00 & 0.0 & 14 & 22.00 & 5.00 & +4 & -1.00 & 0.92 \\
\cline{2-11} & Se\({}_{1}\) & 2.73 & 5.70 & 10 & 0 & 18.44 & -2.44 & -2 & -0.44 & 1.92 \\
\cline{2-11} & Se\({}_{2}\) & 2.73 & 5.70 & 10 & 0 & 18.44 & -2.44 & -2 & -0.44 & 1.92 \\
\hline
\end{tabular}
\end{table}
Table 9: Charge spilling parameter (\%), orbital charges (electron), atomic Mulliken charge (electron), and effective valences from the Mulliken and Hirshfeld analyses (electron) of PbTaSe\({}_{2}\).

The peaks and the spectral weight in \(\varepsilon_{2}(\omega)\) are controlled by the electronic energy density of states of the energy levels involved in the optical transitions of the electrons and by the matrix elements of the transition between the two states involved. Sharp peaks are found at 4.95 eV for the [100] and at 5.03 eV for the [001] polarization direction of the incident electric field vector. For both polarizations \(\varepsilon_{2}\) gradually decreases with increasing energy and finally goes to zero at \(\sim\) 18 eV. There is significant optical anisotropy in the dielectric function with respect to the polarization state of the electric field. The real part of the refractive index, n(\(\omega\)), and the imaginary part, \(\kappa\)(\(\omega\)), are shown in Fig. 7b. The value of n(\(\omega\)) is high in the infrared and visible regions. The real part determines the phase velocity of an electromagnetic wave in the solid. The imaginary part, also known as the extinction coefficient, measures the attenuation of light as it travels through the material. From Fig. 7b it is seen that visible light is highly attenuated by PbTaSe\({}_{2}\). Both the real and imaginary parts of the refractive index decrease monotonically at high energies in the ultraviolet (UV) region of the electromagnetic spectrum. The optical anisotropy is quite pronounced up to 10 eV. The variation of the absorption coefficient, \(\alpha\)(\(\omega\)), as a function of photon energy is shown in Fig. 7c. The finite values of \(\alpha\)(\(\omega\)) for both polarizations at very low energy indicate the metallic state of PbTaSe\({}_{2}\).
The absorption coefficient is quite high in the energy range 5 eV to 15 eV in the UV region. This suggests that PbTaSe\({}_{2}\) is a good absorber of ultraviolet radiation. There is significant optical anisotropy in the absorption in the energy range 5 eV to 10 eV. The photoconductivity is an important parameter for optoelectronic device applications. The optical conductivity as a function of photon energy is depicted in Fig. 7d. The low-energy photoconductivity reaffirms the metallic character of PbTaSe\({}_{2}\). There is significant optical anisotropy in \(\sigma\)(\(\omega\)). The reflectivity, as a function of incident photon energy, is given in Fig. 7e. The reflectivity is high in the infrared and visible regions. The reflectivity initially decreases in the near-infrared region, then increases gradually and becomes almost non-selective in the energy range 5 eV to 15 eV. Thus PbTaSe\({}_{2}\) has high potential to be used as an efficient solar reflector to reduce solar heating. R(\(\omega\)) decreases sharply at around 18 eV, close to the plasma peak in the energy loss spectrum. The calculated energy loss spectrum is shown in Fig. 7f. The energy loss function helps one to understand the screened plasma excitations created by swift charges moving inside the material. The loss function, L(\(\omega\)), shows a peak at the characteristic plasma oscillation energy. The position of the peak marks the energy at which the reflectivity and the absorption coefficient fall sharply. Above the plasma energy, the material becomes transparent to the incident photons and the optical features become similar to those of insulators. For PbTaSe\({}_{2}\), the plasma peaks are located at 17.6 eV and 16.0 eV for the electric field polarizations along the [100] and [001] directions, respectively.

## 4 Conclusions

Using DFT-based first-principles calculations, we have explored a large number of elastic, bonding, lattice dynamical, electronic, thermophysical and optical properties of the hexagonal PbTaSe\({}_{2}\) topological semimetal. Most of the reported results are novel. The compound is elastically and dynamically stable, with ductile features. It is highly machinable and elastically anisotropic. There is significant metallic and ionic bonding in PbTaSe\({}_{2}\), with little covalent character. The hardness of PbTaSe\({}_{2}\) is moderate. The Debye temperature and the phonon thermal conductivity of PbTaSe\({}_{2}\) are also low. The electronic band structure shows semimetallic character with bulk topological features. We have found Weyl points in momentum space, where the valence and conduction bands touch each other, as found in an earlier study [9]. The calculated value of the repulsive Coulomb pseudopotential indicates that PbTaSe\({}_{2}\) possesses electronic correlations. The Fermi surface has both electron- and hole-like regions. The optical parameters have been explored in detail. The compound under study possesses optical anisotropy. PbTaSe\({}_{2}\) is a good absorber of ultraviolet light and reflects visible radiation very effectively. The optical properties also exhibit metallic character and are consistent with the electronic band structure calculations.
Figure 7: The (a) real and imaginary parts of the dielectric function [\(\varepsilon_{1}\)(\(\omega\)) and \(\varepsilon_{2}\)(\(\omega\))], (b) real and imaginary parts of the refractive index [\(n\)(\(\omega\)) and \(\kappa\)(\(\omega\))], (c) absorption coefficient [\(\alpha\)(\(\omega\))], (d) optical conductivity [\(\sigma\)(\(\omega\))], (e) reflectivity [R(\(\omega\))] and (f) loss function [L(\(\omega\))] of PbTaSe\({}_{2}\) for the [100] and [001] electric field polarizations.

## Acknowledgements

S. H. N. acknowledges the research grant (1151/5/52/RU/Science-07/19-20) from the Faculty of Science, University of Rajshahi, Bangladesh, which partly supported this work.

## Data availability

The data sets generated and/or analyzed in this study are available from the corresponding author on reasonable request.

## Declaration of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2310.02728
Counterdiabatic Driving for Periodically Driven Systems
Periodically driven systems have emerged as a useful technique to engineer the properties of quantum systems, and are in the process of being developed into a standard toolbox for quantum simulation. An outstanding challenge that leaves this toolbox incomplete is the manipulation of the states dressed by strong periodic drives. The state-of-the-art in Floquet control is the adiabatic change of parameters. Yet, this requires long protocols conflicting with the limited coherence times in experiments. To achieve fast control of nonequilibrium quantum matter, we generalize the notion of variational counterdiabatic driving away from equilibrium focusing on Floquet systems. We derive a nonperturbative variational principle to find local approximations to the adiabatic gauge potential for the effective Floquet Hamiltonian. It enables transitionless driving of Floquet eigenstates far away from the adiabatic regime. We discuss applications to two-level, Floquet band, and interacting periodically driven models. The developed technique allows us to capture nonperturbative photon resonances and obtain high-fidelity protocols that respect experimental limitations like the locality of the accessible control terms.
Paul Manuel Schindler, Marin Bukov
2023-10-04T11:08:19Z
http://arxiv.org/abs/2310.02728v3
# Counterdiabatic Driving for Periodically Driven Systems

###### Abstract

Periodically driven systems have emerged as a useful technique to engineer the properties of quantum systems, and are in the process of being developed into a standard toolbox for quantum simulation. An outstanding challenge that leaves this toolbox incomplete is the manipulation of the states dressed by strong periodic drives. The state-of-the-art in Floquet control is the adiabatic change of parameters. Yet, this requires long protocols conflicting with the limited coherence times in experiments. To achieve fast control of nonequilibrium quantum matter, we generalize the notion of variational counterdiabatic driving away from equilibrium, focusing on Floquet systems. We propose two approaches to find local approximations to the adiabatic gauge potential for the effective Floquet Hamiltonian, based on the inverse-frequency expansion and a non-perturbative variational principle. They enable transitionless driving of Floquet eigenstates far away from the adiabatic regime. We discuss applications to two-level, Floquet band, and interacting periodically-driven models. In particular, we demonstrate that the developed methods allow us to capture non-perturbative photon resonances and obtain high-fidelity protocols that respect experimental limitations like the locality of the accessible control terms. Our work lays the foundations for a quantum control theory of nonequilibrium systems.

## I Introduction

Over the last decade, periodic (Floquet) drives have emerged as a useful technique to engineer the properties of quantum systems [1; 2; 3; 4] in a variety of platforms, ranging from condensed matter systems [4] to quantum simulators [3; 5; 6]. The periodicity of the drive allows us to describe the dynamics of Floquet systems by an _effective_ time-independent (a.k.a. _Floquet_) Hamiltonian at times that are integer multiples of the driving period [7; 8; 2]. These effective Hamiltonians can have properties drastically different from those of the non-driven system. Such nonequilibrium behavior is exploited in a _Floquet engineering_ approach [1; 2; 3] to realize Hamiltonians otherwise hardly implementable in the lab: most notably, they can exhibit artificial gauge fields [9; 10; 5; 11] and topological lattice models [12; 13; 14; 15; 16; 17; 18; 19; 20], but can also give rise to dynamically stabilized [21; 22; 23; 24] and localized [25; 26; 27] matter. Moreover, certain periodically driven systems display nonequilibrium phases of matter without static counterparts, e.g., discrete time crystals [28; 29; 30; 31; 32; 33] or anomalous topological insulators [34; 35; 36; 37; 38; 39].

In many-body systems, the lack of energy conservation due to the periodic drive typically leads the system to heat up to a featureless infinite-temperature state at sufficiently late times [40; 41]. However, Floquet heating is exponentially suppressed in the driving frequency [42; 43; 44]. Thus, a parametrically long-lived, metastable prethermal plateau forms, whose properties can be designed using Floquet engineering.

Due to their versatility and accessibility, techniques based on strong periodic drives are being developed into a standard toolbox for quantum simulation. However, engineering a local effective Hamiltonian that governs the system within a stable long-lived prethermal plateau is necessary yet insufficient for a self-contained quantum simulation toolbox.
An essential prerequisite for investigating the physics of the effective Hamiltonian is the ability to prepare and probe its eigenstates. At present, manipulating the system's state in the presence of a strong periodic drive remains an outstanding challenge. More generally, control of nonequilibrium quantum matter remains largely unexplored. Yet, developing such a theory is imperative to complete Floquet engineering into a self-contained quantum simulation toolbox.

The existence of a time-independent Floquet Hamiltonian may suggest borrowing concepts from equilibrium quantum control. However, as we show in this work, the physics of state manipulation is richer and more intricate for genuinely nonequilibrium systems. Unlike their equilibrium counterparts, the exact effective Hamiltonian for generic Floquet systems is a non-local operator, and its exact analytic form is known only in a handful of cases. This poses considerable challenges for developing analytical control techniques. Additionally, the control terms explicitly break the time-periodicity of the Hamiltonian, which renders Floquet's theorem - the cornerstone of Floquet engineering - not directly applicable, further limiting analytical progress. Indeed, if applied carelessly, extra control terms can become resonant with the periodic drive, which will inevitably reduce the lifetime of the prethermal plateau. For all these reasons, developing theories to control Floquet systems is a difficult yet important problem, whose solution has the potential to advance quantum simulation.

The state-of-the-art approach to control periodically driven systems is the _adiabatic_ change of parameters. However, the adiabatic limit for the (local) effective Hamiltonian does not exist, since photon absorption gaps in the quasi-energy spectrum need to be passed diabatically while the co-existing conventional Landau-Zener gaps adiabatically [45; 46; 47]. In generic many-body systems, these two kinds of gaps appear indistinguishable (cf. Fig. 2); thus, manipulating state population dynamics becomes intractable. Moreover, in practice, adiabatic state preparation requires slow protocols to suppress excitations due to diabatic transitions. This stands in contrast to the limited coherence timescales in present-day quantum simulators [48; 49] that require fast processes to avoid decoherence.

In this work, we lay the foundations of a quantum control theory away from equilibrium, by focusing on periodically driven systems. We address the problems inherent to adiabatic Floquet control schemes by generalizing the concept of transitionless driving [50; 51; 52; 53; 54; 55; 56; 57; 58; 59] to Floquet systems. This theory of Floquet counterdiabatic driving (FCD) [Fig. 1] yields transitionless control protocols that transfer population between Floquet eigenstates in finite time. This is achieved by introducing additional terms which counteract diabatic transitions. In particular, we introduce two complementary approaches. The first, based on inverse frequency expansions (IFE), is tailored to Floquet engineering in the high-frequency regime. As a perturbative method, it is unable to resolve photon resonances in the quasi-energy spectrum. We demonstrate how this deficiency can be turned into a useful engineering feature: passing resonances diabatically can suppress undesired excitation channels. Second, we present a non-perturbative variational approach based on a least-action principle.
Provided with a suitable ansatz, this method suppresses diabatic excitations for all avoided crossings in the quasi-energy spectrum - including the non-perturbative photon resonances.

We illustrate our theory on three paradigmatic and genuinely nonequilibrium control problems without static counterparts. First, we compare the two approaches using the pedagogical example of a driven two-level system. We demonstrate that both methods lead to transitionless driving between quasi-energy states for conventional resonances. In addition, we show that choosing either the IFE or the variational approach allows us to toggle whether photon resonances are passed diabatically or adiabatically. This property can add extra versatility to the Floquet engineering toolbox. Second, we apply FCD driving to the experimentally highly relevant case of Floquet state manipulation in an ultracold atom quantum simulator. We demonstrate that, taking experimental constraints into account, our method leads to a significant reduction in diabatic transitions. Finally, we use our theory to enhance the many-body fidelity of state population transfer in a non-integrable Ising model. In particular, we showcase how periodic drives open up new ways to aid and improve state preparation even in interacting static systems.

## II Theory of counterdiabatic driving for Floquet systems

Consider a periodically driven Hamiltonian \(\mathcal{H}_{\lambda}(t)=\mathcal{H}_{\lambda}(t+T)\) with period \(T\) and control parameter \(\lambda\). Common examples include switching on a periodic drive by ramping up its amplitude, changing the drive frequency, or varying another parameter in the static part of the Hamiltonian, cf. Fig. 1. For every fixed value of \(\lambda\), the stroboscopic evolution (\(t=nT\), \(n\in\mathbb{Z}\)) is generated by a time-independent _Floquet Hamiltonian_ \(\mathcal{H}_{F,\lambda}\),

\[e^{-iT\mathcal{H}_{F,\lambda}}=\mathcal{T}e^{-i\int_{0}^{T}\mathcal{H}_{\lambda}(t)\mathrm{d}t}. \tag{1}\]

More generally, there exists a rotating (_Floquet_) frame in which the dynamics (both stroboscopic and non-stroboscopic) are exactly described by the Floquet Hamiltonian, i.e.,

\[\mathcal{H}_{F,\lambda}=\mathcal{P}_{\lambda}^{\dagger}(t)\mathcal{H}_{\lambda}(t)\mathcal{P}_{\lambda}(t)-i\mathcal{P}_{\lambda}^{\dagger}(t)\,\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{P}_{\lambda}(t). \tag{2}\]

This frame is defined by the micromotion operator \(\mathcal{P}_{\lambda}(t)=\mathcal{P}_{\lambda}(t+T)\equiv\exp(-i\mathcal{K}_{\lambda}(t))\), generated by the kick operator \(\mathcal{K}_{\lambda}(t)\). Our goal is to achieve transitionless driving between eigenstates of the Floquet Hamiltonian \(\mathcal{H}_{F,\lambda}\) (a.k.a. Floquet states) upon varying the control parameter \(\lambda=\lambda(t)\) from some initial value \(\lambda_{i}\) to a final value \(\lambda_{f}\) in a finite time \(T_{\mathrm{ramp}}\) [Fig. 1]. The difficulty comes from the observation that the time-dependence of the control parameter need not be periodic in general.

Figure 1: **Sketch: Floquet State Manipulation and Floquet counterdiabatic driving.** Periodic driving (blue) is used to engineer properties of the system—here exemplified by the Kapitza pendulum, where the otherwise unstable fixed point at \(\theta=\pm\pi\) (see potential left) is stabilized by the driving (see other two potentials). To prepare the target Floquet state \(\ket{\psi_{\mathrm{target}}}\) from the initial state \(\ket{\psi_{\mathrm{initial}}}\), additional control (yellow) has to be added.
Control in Floquet systems in general takes one of three forms (see box left): (_i_) amplitude ramps, (_ii_) frequency chirps, and (_iii_) a change of the static part of the Hamiltonian. State manipulation without additional counterterms (unassisted) away from the adiabatic limit fails. However, using Floquet counterdiabatic driving (FCD), we can obtain fast transitionless protocols.

Hence, formally it breaks the periodicity of the Hamiltonian, i.e., \(\mathcal{H}_{\lambda(t+T)}(t+T)\neq\mathcal{H}_{\lambda(t)}(t)\). The adiabatic theorem [60; 61] guarantees transitionless (adiabatic) driving for gapped states only in the limit of infinitely slow variations, i.e., as \(\dot{\lambda}\to 0\) (\(T_{\text{ramp}}\rightarrow\infty\)). In the absence of quasi-energy level-crossings along the protocol, this also holds for Floquet systems [45]. However, the strict adiabatic limit is inaccessible in practice. As a result, changing the control parameter faster than the inverse quasi-energy gap to nearby levels unavoidably leads to sizeable diabatic excitations.

In non-driven systems, such diabatic excitations can be exactly suppressed with the help of counterdiabatic (CD) driving. By adding additional counterterms to the Hamiltonian, exact CD driving allows us to follow the adiabatically-connected instantaneous state of \(\mathcal{H}_{\lambda}\) at all times during the application of the protocol. In this paper, we generalize the theory of CD driving to periodically driven systems. In other words, we answer the question as to which counterterms need to be added to the controlled lab-frame Hamiltonian \(\mathcal{H}_{\lambda}(t)\) to achieve transitionless driving in the eigenstates of the instantaneous effective Floquet Hamiltonian \(\mathcal{H}_{F,\lambda}\).

### Floquet Adiabaticity and Floquet Counterdiabatic Driving

To suppress diabatic excitations, we first identify the cause for diabatic transitions. This requires generalizing the concept of the adiabatic gauge potential (AGP) \(\mathcal{A}_{\lambda}(t)\) [50; 51; 58] to periodically driven systems and defining the Floquet adiabatic gauge potential (FAGP). To this end, we revise the transformation to the Floquet frame that removes the micromotion dynamics, to include the time-dependence of the protocol \(\lambda(t)\) [45]:

\[\tilde{\mathcal{H}}_{\lambda}=\mathcal{H}_{F,\lambda}-\dot{\lambda}\tilde{\mathcal{A}}_{\mathcal{P},\lambda}, \tag{3}\]

where we take into account the dependence of \(\mathcal{P}_{\lambda}(t)\) on the control parameter \(\lambda\), and \(\tilde{\mathcal{A}}_{\mathcal{P},\lambda}(t)=i\mathcal{P}_{\lambda}^{\dagger}\,\partial_{\lambda}\mathcal{P}_{\lambda}\). To make the cause for excitations explicit, we now perform a second transformation to a co-moving frame where the Floquet Hamiltonian \(\mathcal{H}_{F,\lambda}\) is diagonal. Noting the implicit dependence on time of the instantaneous change-of-basis operator \(V_{\lambda}\) via the protocol \(\lambda(t)\), we find

\[\tilde{\mathcal{H}}_{\lambda}=\tilde{\mathcal{H}}_{F,\lambda}-\dot{\lambda}\left(\tilde{\mathcal{A}}_{F,\lambda}+\tilde{\mathcal{A}}_{\mathcal{P},\lambda}\right), \tag{4}\]

where \(\tilde{\mathcal{H}}_{F,\lambda}=V_{\lambda}^{\dagger}\mathcal{H}_{F,\lambda}V_{\lambda}=\sum_{n}\ket{n}E_{F,n}(\lambda)\bra{n}\) is a diagonal matrix containing the instantaneous Floquet (quasi-)energies \(E_{F,n}\), and \(\tilde{\mathcal{A}}_{F,\lambda}=iV_{\lambda}^{\dagger}\partial_{\lambda}V_{\lambda}\).
In this second frame, the Hamiltonian \(\tilde{\mathcal{H}}_{F,\lambda}\) is diagonal at all times, and therefore all diabatic transitions are necessarily caused by the off-diagonal _Floquet adiabatic gauge potential_ (FAGP) \(\tilde{\mathcal{A}}=\tilde{\mathcal{A}}_{F,\lambda}+\tilde{\mathcal{A}}_{\mathcal{P},\lambda}\). The two contributions in the FAGP correspond to the change in the eigenbasis of the Floquet Hamiltonian (\(\mathcal{A}_{F}\)) and the change of the Floquet frame (\(\mathcal{A}_{\mathcal{P}}\)). Therefore, we can suppress all diabatic transitions in the presence of the periodic drive by adding the gauge potential term. In the original lab frame, this leads us to the counterdiabatic Hamiltonian

\[\begin{split}\mathcal{H}_{\text{CD},\lambda}(t)&=\mathcal{H}_{\lambda}(t)+\dot{\lambda}\mathcal{A}_{\lambda}(t),\\ \mathcal{A}_{\lambda}(t)&=i\partial_{\lambda}(\mathcal{P}_{\lambda}(t)V_{\lambda})V_{\lambda}^{\dagger}\mathcal{P}_{\lambda}^{\dagger}(t).\end{split} \tag{5}\]

Hence, Eq. (5) provides the sought-after Floquet counterdiabatic protocol. We emphasize that the structure of the operator \(\mathcal{A}_{\lambda}\) is independent of the specific form of the protocol \(\lambda(t)\) [62]. Therefore, in the adiabatic limit \(\dot{\lambda}\to 0\), all diabatic transitions vanish since \(\dot{\lambda}\mathcal{A}_{\lambda}\to 0\).

### Approximate Floquet Counterdiabatic Driving

Notice that the definition of the FCD protocol (5) requires (i) solving the Floquet problem, and (ii) fully diagonalizing the Floquet Hamiltonian for all parameter values \(\lambda(t)\).

Figure 2: **Sketch: Comparison of local Floquet counterdiabatic driving methods.** **(A)** Comparison of the IFE and variational procedures to obtain the local FCD (l-FCD) protocol. For the IFE a transformation to the perturbative Floquet frame and back is needed, while the variational method operates on the level of the lab frame. **(B)** Action of the two methods on avoided level crossings in the quasi-energy spectrum. (_i_) For a standard resonance—where both quasi-energies and non-driven or perturbative energies are close in energy—both l-FCD approaches follow the adiabatic path. (_ii_) For a photon resonance—where perturbative energy levels which are multiples of the driving frequency \(\omega\) apart hybridize—the variational method still follows the adiabatic path. However, the IFE method leads to a diabatic transition, violating adiabaticity but possibly suppressing heating.
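For small systems, Eq. (5) can be evaluated numerically. The sketch below computes the Floquet Hamiltonian of the driven qubit of Eq. (17) via Eq. (1), and the eigenbasis contribution \(\tilde{\mathcal{A}}_{F,\lambda}\) to the FAGP by finite differences; omitting the micromotion piece \(\mathcal{A}_{\mathcal{P}}\) and the naive gauge (phase) fixing are simplifications of ours.

```python
# Sketch (simplified): numerically "exact" Floquet Hamiltonian via Eq. (1)
# and the eigenbasis part A_F of the FAGP for the driven qubit of Eq. (17).
# The micromotion contribution A_P of Eq. (5) is omitted here for brevity.
import numpy as np
from scipy.linalg import expm, logm

Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
g, A, omega = 1.0, 2.5, 100.0
T = 2 * np.pi / omega

def H(lam, t):
    return lam * Sz + (g + A * np.cos(omega * t)) * Sx

def H_F(lam, n_steps=400):
    """Trotterized one-period evolution, then H_F = (i/T) log U(T), Eq. (1)."""
    dt = T / n_steps
    U = np.eye(2, dtype=complex)
    for k in range(n_steps):
        U = expm(-1j * H(lam, (k + 0.5) * dt) * dt) @ U
    HF = 1j * logm(U) / T
    return (HF + HF.conj().T) / 2          # symmetrize numerical residue

def A_F(lam, dlam=1e-5):
    """A_F = i V† (dV/dlam); a naive phase convention fixes the gauge."""
    Vs = []
    for l in (lam - dlam, lam + dlam):
        _, V = np.linalg.eigh(H_F(l))
        V = V * np.exp(-1j * np.angle(V[0, :]))[None, :]  # assumes V[0,:] != 0
        Vs.append(V)
    dV = (Vs[1] - Vs[0]) / (2 * dlam)
    return 1j * Vs[1].conj().T @ dV        # off-diagonal part drives transitions

print(np.round(A_F(0.5), 4))
```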
As an alternative, we then derive a least-action principle for the FAGP which allows for a direct approximation of the exact FAGP using a variational principle (variational method). #### ii.2.1 Inverse Frequency Expansion A key observation for deriving the IFE approach is the absence of the periodic time-dependence in the Floquet frame, cf. Eq. (3). Therefore, provided we have access to (some approximation to) the Floquet Hamiltonian \(\mathcal{H}_{F,\lambda}\) and its micro-motion operator \(\mathcal{P}_{\lambda}(t)\), one can apply techniques from static CD driving to find a local approximation \(\mathcal{X}_{F,\lambda}\) to the exact gauge potential \(\mathcal{A}_{F,\lambda}\). In particular, we can use local counterdiabatic driving [64]. That is, using a parametrized ansatz, \(\mathcal{X}_{F,\lambda}\), for the FAGP in the Floquet frame, the optimal parameters are obtained from the CD variational principle \(\delta_{\mathcal{X}_{F,\lambda}}S_{F}=0\) with action \[\begin{split} S_{F}[X_{F,\lambda}]&=\text{Tr}\left( \mathcal{G}_{F}^{2}\left(\mathcal{X}_{F,\lambda}\right)\right),\\ \mathcal{G}_{F}\big{(}X_{F,\lambda}\big{)}&=i\big{[} \mathcal{H}_{F,\lambda},\mathcal{X}_{F,\lambda}\big{]}-\partial_{\lambda} \mathcal{H}_{F,\lambda}\,.\end{split} \tag{6}\] In general, we do not have access to the exact Floquet Hamiltonian \(\mathcal{H}_{F,\lambda}\) and micromotion operator \(\mathcal{P}_{\lambda}\). However, in Floquet engineering one is mostly interested in large driving frequencies compared to local energy scales. In this regime, a perturbative solution in terms of an inverse-frequency expansion can be obtained to a desired order \(n\): \[\mathcal{H}_{F,\lambda} =\mathcal{H}_{F,\lambda}^{(n)}+O(\omega^{-n-1})=\sum_{j=0}^{n} \omega^{-j}\mathcal{H}_{F,\lambda}^{[j]}+O(\omega^{-n-1})\,, \tag{7}\] \[\mathcal{K}_{F,\lambda} =\mathcal{K}_{F,\lambda}^{(n)}+O(\omega^{-n-1})=\sum_{j=0}^{n} \omega^{-j}\mathcal{K}_{F,\lambda}^{[j]}+O(\omega^{-n-1})\,,\] e.g., using a van Vleck [65, 2, 66], Floquet-Magnus [67, 1, 68, 69] or Brillouin-Wigner [70, 71] expansion. The final approximate FAGP \(\mathcal{X}\) that should be implemented in the lab frame, is then obtained by transforming \(\mathcal{X}_{F}\) back to the lab frame and adding the additional Floquet-frame contribution \(\mathcal{A}_{\mathcal{P},\lambda}\): \[\mathcal{X}_{\lambda}^{(n)}(t)=\mathcal{P}_{\lambda}^{(n)}\mathcal{X}_{F, \lambda}^{(n)}\mathcal{P}_{\lambda}^{(n)\dagger}+\mathcal{A}_{\mathcal{P}, \lambda}^{(n)}\,. \tag{8}\] Here, \(\mathcal{X}_{F,\lambda}^{(n)}\) denotes the solution to Eq. (6) computed with respect to the \(n\)'th order Floquet Hamiltonian \(\mathcal{H}_{F}^{(n)}\). Using an inverse frequency expansion for Eq. (8) and recalling that \(\mathcal{K}_{F,\lambda}^{[0]}=0\)[72], we find for the lowest two orders in the inverse frequency: \[\begin{split}\mathcal{X}_{\lambda}^{(0)}(t)&= \mathcal{X}_{F,\lambda}^{(0)}\\ \mathcal{X}_{\lambda}^{(1)}(t)&=\mathcal{X}_{F, \lambda}^{(1)}+\omega^{-1}\left(i\Big{[}\mathcal{K}_{F,\lambda}^{[1]}(t), \mathcal{X}_{F,\lambda}^{(1)}\Big{]}-\partial_{\lambda}\mathcal{K}_{F, \lambda}^{[1]}(t)\right),\end{split} \tag{9}\] where \(\mathcal{X}_{F,\lambda}^{(0)}\) and \(\mathcal{X}_{F,\lambda}^{(1)}\) are obtained from minimizing the action (6) for \(\mathcal{H}_{F}^{(0)}\) and \(\mathcal{H}_{F}^{(1)}\), respectively. For convenience, we provide analytic expressions up to third order in App. B.1. 
Note that the IFE approach to the Floquet adiabatic gauge potential is bound to the validity of the inverse frequency expansion, which breaks down whenever photon resonances occur [73; 65; 74]. This is particularly problematic for interacting systems. Moreover, the IFE may also be problematic from a practical perspective. To see why, recall that the main purpose of introducing the variational ansatz in Eq. (6) is the ability to select the operator structure of \(\mathcal{X}_{F,\lambda}\), and hence to incorporate experimental constraints such as locality. However, within the IFE approach, the variational principle is applied in the Floquet frame. Thus, the extra additive contribution from the transformation \(\mathcal{P}_{\lambda}^{(n)}\) back to the lab frame - \(\mathcal{A}_{\mathcal{P},\lambda}^{(n)}\) in Eq. (8) - is fixed and cannot be shaped. Therefore, there exists no obvious way to obtain experimentally implementable local approximations to this additional contribution within the IFE approach.

Nonetheless, we emphasize that the IFE method can still be useful for many Floquet engineering studies that are designed to work in the high-frequency regime, where \(\mathcal{H}_{F,\lambda}^{(n)}\) is the object of interest. Here, both \(\mathcal{H}_{F,\lambda}^{(n)}\) and \(\mathcal{K}_{F,\lambda}^{(n)}\) are already known, making the application of the IFE straightforward and hence allowing for improved state manipulation at a low computational cost. Moreover, the incapability of the IFE to capture photon resonances can even be advantageous. In particular, in Floquet engineering applications governed by a perturbative Floquet Hamiltonian \(\mathcal{H}_{F}^{(n)}\), state preparation protocols should ideally follow the eigenstates of the approximate Floquet Hamiltonian \(\mathcal{H}_{F}^{(n)}\) rather than the exact \(\mathcal{H}_{F}\). Since photon resonances are not captured by the perturbative Floquet Hamiltonian, the resulting IFE CD protocol will ignore any photon-induced hybridization gaps and traverse them diabatically, as if they were not present. This can help avoid undesired hybridization of quasi-energy eigenstates and some forms of heating in few-particle and weakly-interacting or integrable systems. Indeed, as we show in Sec. IV.1, using the IFE protocol and a suitably short ramp duration allows us to pass through the spectral gaps of the approximate \(\mathcal{H}_{F}^{(n)}\) adiabatically, while traversing photon resonances diabatically.

#### II.2.2 Variational Procedure

Considering the shortcomings of the IFE approach, it would be advantageous to have a variational principle akin to Eq. (6) for the lab-frame FAGP, without relying on the inverse frequency expansion or any additional change-of-frame transformations. Using the definition of \(\mathcal{H}_{F}\) in Eq. (2) and the action for \(\mathcal{A}_{F}\) in Eq. (6), we now state the following variational principle for \(\mathcal{A}_{\lambda}\):

\[\begin{split} S[\mathcal{X}_{\lambda}]&=\int_{0}^{T}\mathrm{Tr}\Big{(}\mathcal{G}^{2}(\mathcal{X}_{\lambda}(t))\Big{)}\,\mathrm{d}t,\\ \mathcal{G}(\mathcal{X}_{\lambda})&=i[\mathcal{H}_{\lambda}(t),\mathcal{X}_{\lambda}(t)]+\partial_{t}\mathcal{X}_{\lambda}(t)-\partial_{\lambda}\mathcal{H}_{\lambda}(t)\,,\end{split} \tag{10}\]

where the integral over time and the partial time derivative are evaluated at fixed \(\lambda\).
To see where Eq. (10) comes from intuitively, notice that plugging the definition of \(\mathcal{H}_{F,\lambda}\) (2) into the action (6), we obtain

\[\mathcal{P}_{\lambda}^{\dagger}\mathcal{G}\mathcal{P}_{\lambda}=i[\mathcal{H}_{\lambda}(t),\mathcal{X}_{F}(t)+\mathcal{A}_{\mathcal{P}}]+\partial_{t}(\mathcal{X}_{F}(t)+\mathcal{A}_{\mathcal{P}})-\partial_{\lambda}\mathcal{H}_{\lambda}(t)\,,\]

where the explicit time dependence corresponds to the periodic time dependence only. Replacing \(\mathcal{X}_{F}(t)+\mathcal{A}_{\mathcal{P}}\) by an ansatz operator \(\mathcal{X}\), taking the trace, and integrating the action over one period \(T\), we arrive at Eq. (10). A rigorous detailed derivation using the extended Hilbert space is given in App. B.

Equation (10) is the desired Floquet variational principle, allowing for the determination of an approximate FAGP using only lab-frame quantities. Notably, the ansatz \(\mathcal{X}_{\lambda}(t)\) for the variational FAGP carries an explicit periodic time dependence. Therefore, a complete basis to expand \(\mathcal{X}_{\lambda}(t)\) in must, in addition to all operators acting on the Hilbert space, also include a complete set of periodic functions. Hence, in general, \(\mathcal{X}_{\lambda}(t)\) contains infinitely many terms arising from the Fourier harmonics of the periodic time dependence. However, in practical applications, it is often sufficient to truncate the infinite sum to a finite-dimensional subset including only a finite number \(N_{h}\) of Fourier harmonics. Then, we can choose an ansatz with \(N_{h}\) harmonics and \(N_{O}\) local operators

\[\mathcal{X}_{\lambda}(t)=\sum_{m=1}^{N_{O}}\sum_{\ell=-N_{h}}^{N_{h}}\chi_{\ell m}e^{i\ell\,\omega t}\mathcal{O}_{m}\,, \tag{11}\]

where \(\chi_{\ell m}\) are the variational parameters. The operators \(\mathcal{O}_{m}\) are chosen to reflect any external constraints, e.g., locality or accessibility in the lab. In order to close the truncated algebra under multiplication, we ignore any Fourier harmonics \(e^{i\ell\,\omega t}\) with \(|\ell|>N_{h}\) resulting from a product of two time-dependent functions. For the variational principle, we have to truncate \(\mathcal{G}\) to the given number of harmonics \(N_{h}\) before computing the action \(S\). This leads to the truncated variational principle

\[\begin{split} S^{(N_{h})}[\mathcal{X}_{\lambda}]&=\int_{0}^{T}\mathrm{Tr}\Big{(}\mathcal{G}^{(N_{h})}(\mathcal{X}_{\lambda}(t))^{2}\Big{)}\mathrm{d}t,\\ \mathcal{G}^{(N_{h})}(\mathcal{X}_{\lambda})&=i[\mathcal{H}_{\lambda}(t),\mathcal{X}_{\lambda}(t)]+\partial_{t}\mathcal{X}_{\lambda}(t)-\partial_{\lambda}\mathcal{H}_{\lambda}(t)\big{|}_{N_{h}}\,,\end{split} \tag{12}\]

which is equivalent to Eq. (10) if a complete ansatz (\(N_{h}\to\infty\)) is considered. A detailed algorithm to compute an approximate variational FAGP numerically is found in App. E. Note that the action (10) is quadratic in the gauge potential, such that the minimization is convex and guaranteed to converge to a global optimum.

Let us re-iterate that the variational principle, Eq. (10), is a fully non-perturbative method that allows for the direct determination of a local approximate FAGP in the lab frame. As such, it overcomes all of the shortcomings of the IFE approach, at the expense of dealing with an infinite-dimensional variational space. However, as we will see below, in practical applications, a truncation to a finite number of harmonics proves sufficient, see Eqs. (11) and (12).
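A brute-force numerical sketch of the truncated variational principle, Eqs. (11)-(12), is given below for the driven qubit of Eq. (17). Restricting the ansatz operators \(\mathcal{O}_m\) to \(\sigma^y\) is our illustrative assumption; App. E of the paper gives the general algorithm.

```python
# Sketch: truncated variational principle of Eq. (12) for the driven qubit
# of Eq. (17), with a sigma_y-only harmonic ansatz, Eq. (11).
import numpy as np
from scipy.optimize import minimize

SX = np.array([[0, 1], [1, 0]], dtype=complex) / 2
SY = np.array([[0, -1j], [1j, 0]]) / 2
SZ = np.array([[1, 0], [0, -1]], dtype=complex) / 2
g, A, omega, Nh = 1.0, 2.5, 100.0, 2
T = 2 * np.pi / omega
ts = np.linspace(0, T, 200, endpoint=False)

def X(chi, t):
    # chi[0]: static term; chi[2l-1], chi[2l]: cos/sin amplitudes of harmonic l
    c = chi[0] + sum(chi[2 * l - 1] * np.cos(l * omega * t)
                     + chi[2 * l] * np.sin(l * omega * t)
                     for l in range(1, Nh + 1))
    return c * SY

def action(chi, lam):
    dt, S = ts[1] - ts[0], 0.0
    for t in ts:
        Ht = lam * SZ + (g + A * np.cos(omega * t)) * SX
        dXdt = (X(chi, t + 1e-6) - X(chi, t - 1e-6)) / 2e-6
        G = 1j * (Ht @ X(chi, t) - X(chi, t) @ Ht) + dXdt - SZ  # dH/dlam = SZ
        S += np.trace(G @ G.conj().T).real * dt
    return S

res = minimize(action, np.zeros(2 * Nh + 1), args=(0.5,), method="BFGS")
print(np.round(res.x, 5))   # variational coefficients chi_{l m} of Eq. (11)
```

Since the action is quadratic in the coefficients, the minimization is convex and the optimizer converges to the global optimum, as noted above.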
## III Counter-diabatic frequency modulation

The frequency modulation (\(\lambda=\omega\)), often referred to as _chirping_, is commonly considered for state manipulation in experimental setups [75, 38, 76], as it can lead to drastic changes in the system's response. However, we stress that the Floquet adiabaticity of chirps requires extra caution, as a chirp leads to a change in the driving period: \(T=T(t)\). This becomes evident when considering the micromotion contribution to the adiabatic gauge potential, \(\mathcal{A}_{\mathcal{P}}=-i\mathcal{P}_{\omega}\partial_{\omega}\mathcal{P} _{\omega}^{\dagger}\). In general, the micromotion operator \(\mathcal{P}_{\omega}(t)\) need not reduce to the identity, \(\mathcal{P}_{\omega}(t)\not\equiv\mathds{1}\), at any time \(t\neq nT\) (\(n\in\mathbb{Z}\)), and we can write \[\mathcal{P}_{\omega}(t)=\sum_{\ell}\mathcal{P}_{\ell,\omega}e^{i\ell\,\omega t }\,.\] Hence, \(\|\mathcal{A}_{\mathcal{P}}\|\sim t\) as \(t\to\infty\), since \(\partial_{\omega}e^{i\ell\,\omega t}=t\times i\ell e^{i\ell\,\omega t}\). This leads to a non-vanishing contribution \(\dot{\omega}\,\mathcal{A}_{\mathcal{P}}\not\to 0\) in the FAGP even in the adiabatic limit, where \(\dot{\lambda}\to 0\) and \(T_{\mathrm{ramp}}\to\infty\). Therefore, even in the adiabatic limit, the evolution does not follow the instantaneous eigenstates of the Floquet Hamiltonian (3). This suggests that the infinitely slow evolution follows the eigenstates of a different effective Hamiltonian. In fact, one can convince oneself that including the explicitly time-dependent term \(-\mathcal{P}_{\omega}^{\dagger}(\sum_{\ell}\mathcal{P}_{\ell,\omega}\partial_{\omega}e^{i\ell\,\omega t})\) amounts to evaluating the Floquet Hamiltonian at the _instantaneous frequency_ \[\nu(t)=\partial_{t}\big{(}\omega(t)\,t\big{)}=\omega(t)+\dot{\omega}(t)\,t\,, \tag{13}\] rather than at the modulated frequency \(\omega(t)\). The two can differ substantially; for a linear chirp, \(\omega(t)=\omega_{0}+(\omega_{1}-\omega_{0})t/T_{\rm ramp}\) [note the missing factor of 2 compared to \(\nu(t)=\omega_{0}+2(\omega_{1}-\omega_{0})t/T_{\rm ramp}\)], see Sec. IV.2 for more details. As a consequence, the variational principle (10) also needs to be adjusted accordingly (see App. D.1): \[\mathcal{G}_{\omega}(\mathcal{X}_{\omega})=i[\mathcal{H}_{\omega}(t),\mathcal{X}_{\omega }(t)]+\partial_{t}\mathcal{X}_{\omega}(t)-\frac{\dot{\nu}}{\nu}\mathcal{H}_{ \omega}(t)-i\dot{\omega}\partial_{\omega}\mathcal{H}_{\omega}(t)\,, \tag{14}\] where the partial derivative \(\partial_{\omega}\) acts only on explicitly \(\omega\)-dependent terms and does not act on the Fourier harmonics \(e^{i\ell\,\omega t}\) [77].
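The factor-of-2 mismatch between \(\omega(t)\) and \(\nu(t)\) for a linear chirp is easy to verify numerically; a short sketch with illustrative frequency values:

```python
import numpy as np

T_ramp, w0, w1 = 1.0, 5.0, 7.5             # illustrative ramp duration and frequencies
t = np.linspace(0.0, T_ramp, 100001)

omega = w0 + (w1 - w0) * t / T_ramp         # linear frequency modulation omega(t)
nu = np.gradient(omega * t, t)              # instantaneous frequency nu(t) = d/dt [omega(t) t], Eq. (13)

# nu overshoots: nu(T_ramp) = w0 + 2*(w1 - w0) = 10.0, while omega(T_ramp) = w1 = 7.5
print(nu[-1], omega[-1])
```

This overshoot is precisely why the cubic chirp of Sec. IV.2 is preferable to the linear one.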
The l-FCD Hamiltonian for chirps then reads \[\mathcal{H}_{\rm l-FCD}=\mathcal{H}+\mathcal{X}_{\omega}\,, \tag{15}\] where any \(\dot{\omega}\) and \(\dot{\nu}\) dependence is already implicitly included in \(\mathcal{X}_{\omega}\).

## IV Applications

To demonstrate the usefulness of Floquet CD driving, we now apply the techniques we developed in Sec. II to three examples of increasing complexity. The control setups we consider include resonant drives and frequency chirps, chosen to emphasize the out-of-equilibrium character of the controlled system. Their lack of static counterparts brings about genuinely nonequilibrium effects absent in static control problems. We first discuss a periodically-driven two-level system exposed to an additional external control field. While the IFE approach gives a good approximation to the FAGP in the off-resonant high-frequency regime, it breaks down for resonant control drives. The variational approach, on the other hand, is capable of finding accurate Floquet gauge potentials even on resonance. We then move on to apply the variational technique to prepare the state of a Floquet-engineered fermionic band model by speeding up a frequency chirp. We place a particular emphasis on experimental constraints and discuss how these can be incorporated to compute a sufficiently local approximation to the exact FAGP. Finally, we show how Floquet control can be deployed to open up nonequilibrium control channels in a static non-integrable many-body spin chain, thus enhancing equilibrium control techniques. In particular, we find that l-FCD driving can aid the preparation of many-body ground states belonging to different quantum phases of matter, even when all accessible equilibrium control channels require crossing the critical point. In all examples, we quantify the advantage brought in by l-FCD driving by computing the _fidelity_. We use either the instantaneous fidelity \[F(t)=\left|\left\langle\psi_{0,\lambda(t)}\middle|\psi(t)\right\rangle\right|^{2} \tag{16}\] during the protocol, between the time-evolved state \(\left|\psi(t)\right\rangle\) and the instantaneous Floquet eigenstate \(\left|\psi_{0,\lambda(t)}\right\rangle\) [cf. App. A], or the final fidelity at the end of the ramp, \(F(T_{\rm ramp})\). In general, the closer the fidelity to unity, the better the FAGP approximation.

### Linearly polarized Two-Level System

A hallmark feature of Floquet systems is the existence of photon resonances in the quasi-energy spectrum. To build intuition about the methods introduced in Sec. II, we investigate a pedagogical example of a linearly driven two-level system [78]. We are primarily interested in comparing the behavior of the IFE and variational approaches to the FAGP in the regime of photon resonances, where the IFE method is expected to break down. Consider the Hamiltonian of the linearly polarized two-level system \[\mathcal{H}_{\lambda}(t)=\lambda\,S^{z}+\left[g+A\cos(\omega t)\right]S^{x}\,, \tag{17}\] with level-splitting \(\lambda\), level-hybridization \(g\), drive amplitude \(A\) (\(=2.5\,g\)) and drive frequency \(\omega=2\pi/T\) (\(=100\,g\)). Despite its simple Hilbert-space structure, there exists no known closed-form expression for the Floquet Hamiltonian.

Figure 3: **FCD for linearly driven two-level system** (17) in high-frequency (**A**) and one-photon resonance (**B**) regime. The level-splitting \(\lambda\) is varied linearly within \(\lambda\in[-5g,\,5g]\) and \(\lambda\in[0.9\omega,\,1.1\omega]\), respectively.
(_i_) Numerically exact instantaneous Floquet energies \(E_{F}\) as a function of time \(t\). (_ii_) Instantaneous fidelity \(F(t)\) for unassisted (gray), IFE (red circles) and variational protocol (blue boxes). _Inset_: Deviation of fidelity from unity; note the logarithmic scale. (_iii_) \(x\) (blue, solid) and \(y\) (orange, dashed) component of the variational gauge potential \(\mathcal{X}\), see Eqs. (20) and (22) (the \(z\) component always vanishes). In the high-frequency regime (**A**) the IFE method successfully suppresses diabatic transitions but fails in the presence of photon resonances (**B**). The variational method suppresses diabatic transitions in all scenarios. We use a linear ramp \(\lambda(t)=\lambda_{\rm i}+(\lambda_{\rm f}-\lambda_{\rm i})t/T_{\rm ramp}\) for the control parameter \(\lambda\in[\lambda_{\rm i},\lambda_{\rm f}]\). Other parameters are \(\omega/g=100\), \(A/g=2.5\), \(T_{\rm ramp}g=0.5\).

**High-frequency Regime.** Let us first focus on the high-frequency regime. In particular, we consider a linear ramp of the level-splitting \(\lambda\), \[\lambda(t)=(\lambda_{\rm f}-\lambda_{\rm i})\frac{t}{T_{\rm ramp}}+\lambda_{\rm i}\,, \tag{18}\] from \(\lambda_{\rm i}=-5g\) to \(\lambda_{\rm f}=5g\) with a duration \(T_{\rm ramp}=1/(2g)\). Such control setups arise naturally when trying to manipulate the behavior of Floquet-engineered systems using external controls, e.g., to demonstrate the presence of a non-zero Berry curvature in a Floquet topological band [14, 79], or to probe the metastability of dynamically stabilized matter [80]. Like in these situations, we assume that the system is initiated in a Floquet eigenstate before the ramp \(\lambda(t)\) is turned on. Note that, due to the invariance of the quasi-energies with respect to a shift by \(n\omega\) (\(n\in\mathbb{Z}\)), the Floquet Hamiltonian does _not_ have a well-defined ground state. Therefore, we use as an initial state the Floquet eigenstate which has the larger overlap with the ground state of the non-driven Hamiltonian \(\lambda_{\rm i}\,S^{z}\). Since the frequency is large compared to all other parameters, \(\omega\gg g,A,\lambda\), the system is well described by the lowest order Floquet Hamiltonian (see App. F.1) \[\mathcal{H}_{F}^{(0)}=\lambda S^{z}+gS^{x},\qquad\mathcal{K}_{F}^{(0)}=0\,. \tag{19}\] The Floquet energies exhibit an avoided crossing around \(\lambda=0\), see Fig. 3A(i). In fact, in the absence of any counterterms (referred to henceforth as _unassisted_ control, cf. Fig. 1), the protocol leads to an almost complete loss of fidelity, see gray line in Fig. 3A(ii). Note that the lowest-order Floquet Hamiltonian (19) describes a standard Landau-Zener crossing [81, 82, 83, 84] as a function of \(\lambda\). Hence, the corresponding approximate FAGP for (19) takes the well-known form [58] \[\mathcal{A}_{\lambda}^{(0)}(t)=-\frac{g}{\lambda^{2}+g^{2}}S^{y}\,, \tag{20}\] where, in addition, we also used that to lowest order \(\mathcal{A}_{\lambda}^{(0)}(t)=\mathcal{A}_{F,\lambda}^{(0)}\), following Eq. (9). In contrast to the unassisted protocol, the IFE FCD protocol significantly suppresses diabatic excitations, leading to high fidelity throughout the entire protocol duration, see red circles in Fig. 3A(ii). For comparison, we can also consider a variational FAGP. To this end, we use the ansatz \[\mathcal{X}(t)=[y_{0}+y_{1}\cos(\omega t)]S^{y}+x_{1}\sin(\omega t)S^{x}+z_{1} \sin(\omega t)S^{z}\,, \tag{21}\] with variational parameters \(\{y_{0},x_{1},y_{1},z_{1}\}\).
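As a brief aside before discussing the ansatz (21) further: Eq. (20) is easy to check numerically from the standard matrix-element representation of a static gauge potential, \(\langle n|\mathcal{A}_{\lambda}|m\rangle=i\langle n|\partial_{\lambda}\mathcal{H}|m\rangle/(E_{m}-E_{n})\) for \(n\neq m\). A minimal numpy sketch with illustrative parameter values:

```python
import numpy as np

Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sy = np.array([[0, -1j], [1j, 0]]) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

g, lam = 1.0, 0.7                      # illustrative values
H, dH = lam * Sz + g * Sx, Sz           # H_F^(0) of Eq. (19) and its lambda-derivative
E, V = np.linalg.eigh(H)

# <n|A|m> = i <n|dH|m> / (E_m - E_n) for n != m, in the instantaneous eigenbasis
dH_eig = V.conj().T @ dH @ V
A_eig = np.zeros((2, 2), dtype=complex)
for n in range(2):
    for m in range(2):
        if n != m:
            A_eig[n, m] = 1j * dH_eig[n, m] / (E[m] - E[n])

A_lab = V @ A_eig @ V.conj().T          # rotate back to the lab basis
print(np.allclose(A_lab, -g / (lam**2 + g**2) * Sy))   # True: matches Eq. (20)
```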
Since real symmetric Hamiltonians are diagonalized by orthogonal transformations, the generating AGP can be chosen imaginary-valued [58]. To generalize this observation to Floquet systems for the ansatz above, note that \(\cos(\omega t)\propto e^{i\omega t}+e^{-i\omega t}\) can be considered a real-valued, and \(\sin(\omega t)\propto i(e^{i\omega t}-e^{-i\omega t})\) an imaginary-valued function [see App. B.2 for details]. Similar to the IFE-FAGP, the variational FAGP fully counteracts the diabatic transitions, see blue squares in Fig. 3A(ii). To reconcile the IFE and variational approaches, we apply a high-frequency expansion within the variational principle (12). This allows us to analytically derive the form of the variational FAGP shown in Fig. 3A(iii). As expected, the latter coincides with Eq. (20) up to \(O(\omega^{-1})\) corrections (see App. F.1).

**Photon Resonance Regime.** Let us now turn to the one-photon resonance regime at \(\lambda\approx\omega\). In particular, we consider the linear ramp from Eq. (18) with \(\lambda_{\rm i}=0.9\omega\) and \(\lambda_{\rm f}=1.1\omega\). In this regime, the energy levels appear far separated [Fig. 3B(i)], such that identifying an avoided level crossing is not immediately obvious. However, the energy separation is almost resonant with the external drive, \(E_{F,1}-E_{F,2}\approx\omega\), leading to a hybridization of the states and an avoided crossing at around \(\lambda\approx\omega\), see Fig. 3B(i). Since the ramp duration is small compared to the inverse photon absorption gap, the Landau-Zener condition for adiabatic passage is not satisfied, and the unassisted drive creates excitations. However, this Floquet resonance is not captured by the high-frequency expansion [85], and, in addition, the IFE CD protocol fails as well, see Fig. 3B(ii). The variational approach, on the other hand, fully captures this non-perturbative resonance and allows for transitionless driving through the avoided crossing, see Fig. 3B(ii). In fact, using the ansatz (21) and taking a large frequency limit in the variational minimization (\(\omega\approx\lambda\gg A,g\)), we find (see App. F.1) \[\mathcal{X}_{\lambda}(t)=\frac{2A}{A^{2}+4(\lambda-\omega)^{2}}\left[\cos( \omega t)S^{y}-\sin(\omega t)S^{x}\right]+O(\omega^{-1})\,. \tag{22}\] The structure of the variational FAGP is shown in Fig. 3B(iii). This simple model already illustrates the potential advantages of the variational over the IFE method. Indeed, using a suitable ansatz, the variational method allows us to capture both perturbative and non-perturbative effects in the quasi-energy spectrum and the Floquet eigenstates. We expect this technique to be particularly useful in atomic physics, where high-precision few-level control can be sped up using FCD drives. For the purposes of Floquet engineering governed by the _approximate_ Floquet Hamiltonian, however, following the adiabatically connected quasi-energy manifold in the presence of photon resonances can sometimes be undesirable [see discussion of Fig. 2]. In such cases, one can use the IFE protocol and a short \(T_{\rm ramp}\) to traverse standard quasi-energy gaps adiabatically, while passing through photon resonances diabatically. Therefore, depending on the control problem at hand, either the IFE or the variational approach is preferable.

### Fermionic Band Model

The experimental realization of topological band models [18] represents a milestone achievement of ultracold atom quantum simulators.
Since atoms are electrically neutral, implementing topological Hamiltonians in ultracold atom platforms, such as the Harper-Hofstadter [12, 79], Haldane [14], or Shockley [75] model, would be unthinkable without the help of Floquet engineering. However, studying a topologically non-trivial state in an experiment requires a high-fidelity state preparation procedure. Therefore, (l-)FCD can play an important role in enhancing those state preparation schemes at a moderate-to-low cost. Our methods are straightforward to apply to generic Floquet-engineered (topological) band systems. We illustrate this using the setup from a recent experiment demonstrating Floquet topological pumping in a system of one-dimensional non-interacting fermions [75]. The system is effectively described by the free fermion Hamiltonian in momentum space \[\mathcal{H}_{\lambda}(t)=\sum_{q=1}^{N_{f}}\mathbf{\Psi}_{q}^{\dagger}\cdot\mathbf{h}_{ \lambda}(q,t)\cdot\mathbf{\Psi}_{q}\,, \tag{23}\] where \(N_{f}\) is the number of fermions and \(\mathbf{\Psi}_{q}^{\dagger}=(c_{p,q}^{\dagger},\,c_{s,q}^{\dagger})\), with \(c_{x,q}^{\dagger}\) the fermionic creation operator at quasi-momentum \(q\) in the \(x=s,\,p\) Bloch band. The Bloch Hamiltonian is given by \[\begin{split}\mathbf{h}_{\lambda}(q,t)=&\left(\epsilon _{+}+\sum_{j=1}^{3}J_{+}^{j}\cos(jqa-jA)\right)\mathbbm{1}\\ &-aF(t)\left(\eta_{sp}^{0}+2\eta_{sp}^{1}\cos(qa-A)\right)\sigma^ {x}\\ &+\left(\epsilon_{-}-2\sum_{j=1}^{3}J_{-}^{j}\cos(jqa-jA)\right) \sigma^{z}\,,\end{split} \tag{24}\] with the external time-periodic driving force \(F(t)=\omega K_{1}\cos(\omega t)+2\omega K_{2}\cos(2\omega t+\varphi)\), and the Peierls phase \(A(t)=K_{1}\sin(\omega t)+2K_{2}\sin(2\omega t+\varphi)\); the lattice spacing is \(a\); the mean and half-difference of the \(s\)- and \(p\)-band energies are given by \(2\epsilon_{\pm}=\epsilon_{p}\pm\epsilon_{s}\); the intra-band hoppings are \(2J_{\pm}^{j}=J_{p}^{j}\pm J_{s}^{j}\), and the inter-band hoppings are denoted by \(\eta_{sp}^{0,1}\) [86], see sketch in Fig. 4A. Depending on the values of the control parameters (\(K_{1},K_{2},\omega,\varphi\)), the Floquet-Bloch bands of Eq. (23) can be either topological or trivial [75]. The two-tone drive breaks the symmetry protecting the topological phase, thus opening up an adiabatic path between the trivial and topological phases. In the experiment, the initial state is the completely filled \(s\)-band of the static system, \(K_{1}=K_{2}=0\), and belongs to the trivial phase. During the state preparation protocol, first the amplitude \(K_{2}\), then the frequency \(\omega\), and eventually the second amplitude \(K_{1}\) are linearly ramped to prepare the initial state of the Floquet pump. We note in passing that only during the topological Floquet pump is the symmetry restored and the state becomes topologically non-trivial. The theory of FCD driving is applicable to all steps of the state preparation protocol, and can even speed up the subsequent Floquet pump itself. However, the overall bottleneck for the entire state preparation sequence is set by the smallest gap encountered during the frequency ramp of the three-step state preparation protocol. Moreover, for frequency ramps, the size of the Floquet zone varies with time, which may lead to the appearance of photon absorption resonances during the drive. Unlike amplitude (i.e., coupling) ramps, frequency chirps have no equilibrium analogue.
Therefore, from a theoretical standpoint, they represent the most challenging and interesting of the three protocol steps. For these reasons, we set \(\lambda(t)=\omega(t)\) and focus exclusively on the frequency chirp. In the experiment, a linear frequency ramp was considered [75]. However, since this linear frequency chirp overshoots the instantaneous frequency, \(\nu_{\text{f}}\neq\omega_{\text{f}}\), see Fig. 4B, and hence leads to a potentially wrong final state in the adiabatic limit [see Sec. III], we consider instead a cubic chirp. The cubic ramp with initial value \(\omega(t_{\text{i}})=\omega_{\text{i}}\) and final value \(\omega(t_{\text{f}})=\omega_{\text{f}}\) is defined as \[\omega(t)=\omega_{\text{i}}+(\omega_{\text{f}}-\omega_{\text{i}})\left(\frac{t -t_{\text{i}}}{t_{\text{f}}-t_{\text{i}}}\right)^{3}. \tag{25}\] Since \(\omega(t_{\text{f}})=\nu(t_{\text{f}})\), using the cubic chirp guarantees that the final state corresponds to an eigenstate of the Floquet Hamiltonian at the correct drive frequency, cf. Fig. 4B. During the state preparation protocol, the quasi-energy bands hybridize such that both Floquet bands attain admixtures of the original \(s\) and \(p\) bands, see Fig. 4. The hybridization of the two bands results in avoided level crossings for the different quasi-momentum modes, with gaps \(\Delta_{q}\). Therefore, performing the unassisted state preparation protocol in a finite duration \(T_{\text{ramp}}\) leads to a significant loss in fidelity \(F(q,t)\) for all quasi-momenta where the ramp duration is small compared to the inverse gap, \(T_{\text{ramp}}\ll 1/\Delta_{q}\), see Fig. 5A(i). Hence, in the following, we consider FCD-assisted state preparations. As the Floquet engineering effect in this example relies on photon resonances, we deploy the variational approach.

Figure 4: **Fermionic Floquet band model: (A)** Schematic of setup with \(s\) (blue) and \(p\) (red) band energies \(\epsilon_{s,p}\), intra-band hoppings \(J_{s,p}^{1,2,3}\) and drive-induced inter-band hoppings \(\eta_{sp}^{0,1}\). **(B)** In this work we replace the linear drive (left panel) by a cubic drive (right panel), see Eq. (25), to mitigate the error when considering the time-modulated frequency \(\omega(t)\) (gray dashed) instead of the instantaneous frequency \(\nu(t)\) (black solid). **(C)** Quasi-energy bands at the beginning (\(i\)) and end (\(ii\)) of the frequency ramp as a function of quasi-momentum; color indicates overlap with the \(s\) (blue) and \(p\) (red) band of the non-driven model. During the state preparation the two bands acquire mixed \(s\) and \(p\) band character.

Note that, in order for the system to evolve according to the dynamics of the Hamiltonian (23) once the state preparation procedure is over, the FAGP must vanish at the end of the protocol. This property is also guaranteed by using the cubic ramp, since \(\dot{\nu}(t_{\mathrm{f}})=\dot{\omega}(t_{\mathrm{f}})=0\). To compute the FAGP, recall that the Hamiltonian in momentum space reduces to a collection of independent two-level systems, one for each quasi-momentum \(q\), cf. Eq. (24). Therefore, we can leverage the techniques developed in the two-level system example, Sec. IV.1, to compute the FAGP for each fixed momentum \(q\) individually.
To be precise, we choose an ansatz for the adiabatic gauge potential of the form \[\mathcal{X}_{q;\lambda}(t)=\sum_{\alpha=x,y,z}\sum_{\ell=-N_{h}}^{N_{h}}\left( \chi_{q;\lambda}^{\alpha,\ell}e^{i\ell\omega t}\right)\,\sigma^{\alpha}\,, \tag{26}\] with variational parameters \(\chi_{q;\lambda}^{\alpha,\ell}\), which are smooth functions of \((q,\,\lambda)\). The variational parameters are obtained numerically using the action (14). The amplitude, \(\|\mathcal{X}(q,t)\|\), of the resulting variational FAGP is shown in Fig. 5B(i). Note that the ansatz (26) cannot reproduce the Floquet gauge potential exactly, due to the finite number \(N_{h}\) of Fourier harmonics kept. Still, as seen in Fig. 5A(ii), the associated FCD protocol leads to almost perfect transitionless driving. The FAGP is localized in quasi-momentum space [see Fig. 5B(i)], involving many _lattice harmonics_, \(e^{ijqa}\), \(j=0,\ldots,N_{f}-1\). Since the \(j\)'th lattice harmonic \(e^{ijqa}\) corresponds to a \(j\)'th-neighbor coupling in real space, the exact Floquet adiabatic gauge potential is delocalized in real space. Therefore, to implement the exact FCD protocol in an experiment, precise control over all individual \(j\)'th neighbor couplings is required. At the moment, to the best of our knowledge, engineering such long-range couplings is out of reach for most cold atomic experimental platforms [although arbitrary-range hopping is feasible using the techniques developed for synthetic dimensions [87, 88, 89, 90]]. Fortunately, the versatile character of the variational FAGP approach allows us to build any experimental restrictions directly into the ansatz, so long as one is willing to tolerate a certain fidelity degradation. Frequent experimental constraints reflect the accessibility of control channels or the locality of available control terms. For example, depending on the experimental platform, the imaginary-valued terms \(\propto\sigma^{y}\) required for the gauge potential may be difficult to implement. However, rotating to a moving frame, we can replace any \(\propto\sigma^{y}\) term using \(\sigma^{x}\) and \(\sigma^{z}\) terms [64]. Another interesting way to generate the gauge potential terms is using Floquet engineering [91]. To alleviate the problem of locality in our system, we consider a local approximation (l-FCD) to the exact FCD protocol, restricting the control to a limited number, \(N_{q}\) (\(j=0,\ldots,N_{q}\)), of lattice harmonics. This approximation may be implemented by considering a real space ansatz for the variational FAGP and then solving the variational principle in Eq. (10) for the many-particle Hamiltonian (23), instead of considering each momentum individually. However, since we already have access to a nonlocal gauge potential that does the job, we can simply truncate the exact result to the given number of lattice harmonics \(N_{q}\), as sketched below. As anticipated, while the l-FCD counterterms take a simpler form in quasi-momentum space, they cannot suppress all excitations, leading to finite infidelity, see Fig. 5B(ii) and Fig. 5A(iii). Nevertheless, compared to the unassisted protocol, the local FCD still leads to a notable increase in fidelity, see Fig. 5C. In particular, considering a 99 % fidelity threshold, the l-FCD protocols lead to a 5-20 % reduction in ramp time, depending on the number of added lattice harmonics \(N_{q}=1\)-3, respectively.
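A minimal numpy sketch of this harmonic truncation; the coefficient profile `chi_exact` is a hypothetical placeholder standing in for the numerically computed variational coefficients at fixed time and \(\lambda\):

```python
import numpy as np

N_f, N_q = 500, 2                           # number of momenta, kept lattice harmonics
q = 2 * np.pi * np.arange(N_f) / N_f        # quasi-momenta (lattice constant a = 1)

# Placeholder for the exact variational coefficients chi(q): peaked near an avoided crossing
chi_exact = np.exp(-10 * (q - np.pi) ** 2)

# Keep only lattice harmonics e^{i j q a} with |j| <= N_q  ->  local (l-FCD) counterterm
c = np.fft.fft(chi_exact) / N_f             # Fourier coefficients over the Brillouin zone
keep = np.zeros_like(c)
keep[: N_q + 1] = c[: N_q + 1]              # j = 0, ..., N_q
if N_q:
    keep[-N_q:] = c[-N_q:]                  # j = -N_q, ..., -1
chi_local = np.fft.ifft(keep * N_f).real    # smooth, few-harmonic approximation to chi(q)
```

Each retained harmonic \(e^{ijqa}\) translates into a \(j\)'th-neighbor coupling in real space, so \(N_{q}\) directly controls the experimental locality of the counterterm.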
These numbers are, however, strongly model-dependent and should not be considered a generic reference point for the speedup offered by FCD driving, as we will see in Sec. IV.3. In general, Floquet CD driving, like conventional static CD driving, may require the implementation of experimentally infeasible long-range counterterms. However, as demonstrated in the experimentally relevant example above, local approximations can still lead to a notable and experimentally desirable increase in fidelity compared to unassisted protocols, with only moderate control requirements.

Figure 5: **Fermionic Floquet band model, FCD-assisted state preparation: (A)** Momentum-resolved instantaneous infidelity \(1-F(q,t)\) for the unassisted \((i)\), exact FCD \((ii)\) and l-FCD \((iii)\) protocol (\(N_{q}=2\)) for \(T_{\mathrm{ramp}}=1\,\mathrm{ms}\). **(B)** Amplitude \(\|\mathcal{X}(q,t)\|\) of the corresponding FAGP for FCD \((i)\) and l-FCD \((ii)\). **(C)** Final infidelity \(1-F\) as a function of \(T_{\mathrm{ramp}}\) for unassisted (cyan diamonds) and l-FCD protocols (purple squares) for \(N_{q}=1,2,3\) (light to dark). Inset shows a zoom into the region around the 1 % infidelity threshold (gray line) and the ramp times needed for the different protocols to pass the threshold. The variational method suppresses all diabatic transitions but requires an experimentally infeasible, highly non-local counterterm. Approximating the counterterms with local terms still yields a boost in fidelity compared to using no counterterm. _Parameters:_ We use the cubic chirp Eq. (25) with \(\omega_{\rm i}=2\pi\times 5\,\mathrm{kHz}\) and \(\omega_{\rm f}=2\pi\times 7.5\,\mathrm{kHz}\), \(N_{f}=500\) fermions and \(N_{h}=32\) Fourier harmonics.

### Periodically Driven Many-Body System

An important problem in quantum many-body control is the preparation of ground states of an ordered phase starting from a ground state in the disordered phase. This is a notoriously hard task, since by definition there exists no adiabatic path in the thermodynamic limit connecting the two phases [92, 93, 94, 95] as long as the protecting symmetry is left intact. Therefore, in practice, one either makes use of the finite-size gap, which vanishes in the thermodynamic limit and thus leads to divergent unassisted protocol durations, or one needs to introduce additional control channels to break the protecting symmetry explicitly. However, in some cases such symmetry-breaking controls cannot be directly implemented, for example if the system is intrinsically symmetric. As a way out, nonequilibrium periodic drives have been proposed as a useful tool to engineer the implementation of additional terms assisting the state preparation protocol [96, 97, 98, 99, 91, 90, 92, 93, 94, 95]. Here, we show that FCD driving belongs to this list, providing a proof-of-principle demonstration of how periodic driving assisted by l-FCD can be used to improve state preparation protocols for static many-body systems. We consider as a toy model the _transverse field Ising_ (TFI) chain of \(L\) spins, \[\mathcal{H}_{\text{TFI}}=\sum_{\ell=1}^{L}\left[J\sigma_{\ell}^{z}\sigma_{ \ell+1}^{z}+h_{x}\sigma_{\ell}^{x}+h_{y}\sigma_{\ell}^{y}\right], \tag{27}\] with ferromagnetic interactions \(J{<}0\) and two transverse fields \(h_{x}\) and \(h_{y}\); we use periodic boundary conditions.
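For concreteness, the TFI chain (27) can be assembled in a few lines with the QuSpin package used for the simulations in this work [110, 111]; the parameter values below are illustrative:

```python
import numpy as np
from quspin.basis import spin_basis_1d
from quspin.operators import hamiltonian

L, J, hx, hy = 10, -1.0, 0.3, 0.2               # illustrative parameters, J < 0
basis = spin_basis_1d(L, pauli=True)             # Pauli-matrix convention

J_zz = [[J, i, (i + 1) % L] for i in range(L)]   # periodic boundary conditions
hx_l = [[hx, i] for i in range(L)]
hy_l = [[hy, i] for i in range(L)]

static = [["zz", J_zz], ["x", hx_l], ["y", hy_l]]
H_TFI = hamiltonian(static, [], basis=basis, dtype=np.complex128)

# ground-state energy of the static chain
E0 = H_TFI.eigsh(k=1, which="SA", return_eigenvectors=False)
print(E0)
```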
Independent of the ratio \(h_{x}/h_{y}\), the system undergoes a quantum phase transition at \(h{=}\sqrt{h_{x}^{2}+h_{y}^{2}}{=}|J|\) from a disordered paramagnetic (PM) phase (\(h{>}|J|\)) to an ordered ferromagnetic (FM) phase (\(h{<}|J|\)), where the energy gap closes. Therefore, in the static system and in the thermodynamic limit, there exists no adiabatic path connecting the two phases. In order to explicitly break the underlying \(\mathbb{Z}_{2}\) symmetry, access to, e.g., a parallel \(z\)-component field (\(\propto\sum_{\ell}\sigma_{\ell}^{z}\)) is needed. Notice that the instantaneous Hamiltonian preserves the \(\mathbb{Z}_{2}\) symmetry exactly for all parameter values. Hence, the PM and FM phases are strictly disconnected in equilibrium. In the following, we make the plausible assumption that, for control purposes, we can only access terms already present in the Hamiltonian (27). We will now demonstrate how one can use nonequilibrium periodic driving to engineer an adiabatic path connecting the two phases using only the \(h_{x}\) and \(h_{y}\) field components. Consider the _circularly driven_ TFI model \[\mathcal{H}(t;A,\omega)=\sum_{j=1}^{L}\left[J\sigma_{j}^{z}\sigma_{j+1}^{z}+A \left(\cos(\omega t)\sigma_{j}^{x}+\sin(\omega t)\sigma_{j}^{y}\right)\right], \tag{28}\] where we periodically modulate the \(h_{x}\) and \(h_{y}\) components with amplitude \(A{=}A(t)\) and frequency \(\omega{=}\omega(t)\), see also the sketch in Fig. 6A. We stress that the time dependence explicitly breaks the integrability of the system, and brings out its intrinsically many-body character. Indeed, using the reference frame transformation \(V(t)=\exp\!\left(-i\omega(t)t\sum_{n}\sigma_{n}^{z}/2\right)\), Eq. (28) is mapped to the _non-integrable_ mixed field Ising model (MFI) \[\mathcal{H}_{\text{MFI}}(t;A,\omega)=\sum_{j=1}^{L}\left[J\sigma_{j}^{z} \sigma_{j+1}^{z}+\tilde{h}_{x}\sigma_{j}^{x}+\tilde{h}_{z}\sigma_{j}^{z} \right], \tag{29}\] with transverse field \(\tilde{h}_{x}(t){=}A(t)\), longitudinal field \(\tilde{h}_{z}(t){=}-\nu(t)/2\), and instantaneous drive frequency \(\nu(t)=\partial_{t}(\omega(t)t)\) (see Eq. (13)). We deliberately choose the circular drive in order to avoid potential heating processes, present ubiquitously in generic periodically driven many-body systems. The key insight behind applying the circular drive is that it leads to an effective \(z\)-field, which opens up a new parameter-space direction to circumvent the quantum critical point, see Fig. 6B. However, we stress that having access to a \(z\)-field control is only necessary but not sufficient for high-fidelity many-body state manipulation.

Figure 6: **Local Floquet counterdiabatic driving for circularly driven spin chain.** **(A)** Sketch of the circularly driven spin chain, see Eq. (28). **(B)** Rotating-frame fields in the \(\tilde{h}_{x}\)-\(\tilde{h}_{z}\) plane. The non-driven protocol (gray dashed) ramps through the quantum critical point (QCP, red) at \((\tilde{h}_{x},\tilde{h}_{z})=(J,0)\). The driven protocol (blue solid) yields an effective non-zero \(\tilde{h}_{z}\) component, allowing to circumvent the QCP. **(C)** Time dependence of the state-preparation schemes. (i) Variational parameter \(\chi(t)\) (30) computed using the algorithm detailed in App. E. (ii) Instantaneous fidelities \(F(t)\) for the non-driven (gray circles), driven (blue squares) and l-FCD assisted driven protocols (turquoise triangles).
**(D)** Final fidelity for the protocols from C, for different ramp times \(T_{\text{ramp}}\) (i) and different system sizes \(L\) (ii). The Floquet-engineered finite \(\tilde{h}_{z}\) component leads to a notable increase in fidelity and enhanced scaling with ramp time and system size. Around \(JT_{\text{ramp}}=5\) the l-FCD protocol enables state preparation close to unit fidelity, almost independent of system size. _Parameters:_ We use \(A(t)/J=[10-9.5\times\lambda(t)]\) and \(\omega(t)=[\omega_{\text{max}}\times\sin(\pi\lambda(t))]\), where \(\omega_{\text{max}}=0.2J\) and \(\lambda(t)\) is a cubic ramp, see Eq. (25), from \(\lambda_{\text{i}}=0\) to \(\lambda_{\text{f}}=1\). Further, \(L=24\) and \(JT_{\text{ramp}}=5\), if not stated otherwise. We add a small symmetry-breaking field \(h_{z}/J{=}10^{-3}\) to single out one of the degenerate states in the FM phase.

Indeed, a naive application of the circular protocol does not directly lead to a significant increase in fidelity or enhanced scaling in the ramp time \(T_{\text{ramp}}\) and system size \(L\), compared to the \(\omega(t)\equiv 0\) protocol, see Fig. 6C and D, respectively. In order to improve the performance of the protocol, we now apply FCD driving. Notice that the exact AGP in a generic many-body system requires the implementation of highly non-local, both long-range and multi-body, interactions, including Pauli strings \(\sigma_{j_{1}}\sigma_{j_{2}}\ldots\sigma_{j_{s}}\) of arbitrary length \(s=1,\ldots,L\) [58, 64]. The multi-body terms appear as a direct consequence of interactions and are also featured in the FAGP. As a result, obtaining the exact FAGP is computationally expensive. In addition, implementing such highly non-local multi-body terms is experimentally infeasible [100]. To comply with these constraints, we exploit again the variational character of l-FCD driving: we focus on CD protocols that require only local single-particle counter-drives, built from the accessible \(h_{x}\) and \(h_{y}\) fields already present in Eq. (27). In particular, we choose the variational ansatz \[\mathcal{X}(t)=\chi(A,\omega)\sum_{j=1}^{L}\left[\cos(\omega t)\sigma_{j}^{y} -\sin(\omega t)\sigma_{j}^{x}\right]\,, \tag{30}\] with a single variational parameter \(\chi\), see Fig. 6C(i). The ansatz (30) gives rise to a \(y\)-field \(\tilde{h}_{y}=\chi\) in the rotating-frame CD Hamiltonian (29). Since we are only interested in transitionless driving of the ground state, we can further replace the trace \(\text{Tr}(\cdot)\) in the variational principle (10) by the expectation value \(\langle\cdot\rangle_{\psi_{0}(t)}\) [58] with respect to the instantaneous Floquet ground state \(|\psi_{0}(t)\rangle\). In general, the notion of Floquet ground states is ill-defined, as quasi-energies \(E_{F}\) are only defined up to integer multiples of the driving frequency, \(E_{F}\,\hat{=}\,E_{F}+n\omega\), \(n\in\mathbb{Z}\). Usually, in such cases \(|\psi_{0}(t)\rangle\) may represent the manifold of adiabatically connected target Floquet eigenstates. Here, however, we can use the exact mapping to the MFI model to define the instantaneous Floquet ground states using the instantaneous ground state of the static MFI model (29). We emphasize that using the ground-state expectation value instead of the trace leads to an enhanced state preparation protocol, see App. F.3, but is computationally more demanding, as it requires the computation of the instantaneous Floquet ground state at each step of the protocol.
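As a sketch of how the counter-drive (30) enters in practice, the circularly driven chain (28) plus the l-FCD term can be assembled in QuSpin as follows. The protocol functions `chi`, `A_t` and `w_t` below are hypothetical placeholders (constants for simplicity); in the actual protocol they follow the ramps of Fig. 6 and the variationally computed \(\chi(A,\omega)\):

```python
import numpy as np
from quspin.basis import spin_basis_1d
from quspin.operators import hamiltonian

L, J = 10, -1.0
basis = spin_basis_1d(L, pauli=True)
site = [[1.0, i] for i in range(L)]
J_zz = [[J, i, (i + 1) % L] for i in range(L)]

# placeholder protocol functions (constants here; time-dependent ramps in practice)
chi = lambda t: 0.1                 # variational parameter of Eq. (30)
A_t = lambda t: 2.0                 # drive amplitude A(t)
w_t = lambda t: 0.2 * abs(J)        # drive frequency omega(t)

# circularly driven TFI chain (28) plus the l-FCD counter-drive (30)
dynamic = [
    ["x", site, lambda t: A_t(t) * np.cos(w_t(t) * t), []],
    ["y", site, lambda t: A_t(t) * np.sin(w_t(t) * t), []],
    ["y", site, lambda t: chi(t) * np.cos(w_t(t) * t), []],   # + chi cos(wt) sigma^y
    ["x", site, lambda t: -chi(t) * np.sin(w_t(t) * t), []],  # - chi sin(wt) sigma^x
]
H_lFCD = hamiltonian([["zz", J_zz]], dynamic, basis=basis, dtype=np.complex128)
# H_lFCD.evolve(psi0, 0.0, times) would then propagate an initial state psi0
```

Note that the counter-drive only re-uses the \(\sigma^{x}\) and \(\sigma^{y}\) control channels already present in the original Hamiltonian, which is the point of the local ansatz.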
Remarkably, using the numerically computed l-FCD assisted protocol [see Fig. 6C(i)] leads to an increase of fidelity by almost one order of magnitude compared to the unassisted driven protocol, see Fig. 6C. Notably, for protocols as short as \(JT_{\text{ramp}}=5\), the l-FCD assisted protocol leads to almost unit fidelity for a wide range of system sizes, \(L=8\)-\(24\), see Fig. 6D(ii). Let us note that already a few Floquet cycles are sufficient to yield a nonequilibrium-based enhancement. This emphasizes the advantage of following the Floquet states even in the absence of periodic drives in the original control setup. In conclusion, we demonstrated that combining l-FCD driving with ideas from Floquet theory enables the engineering of more efficient state preparation schemes without any additional control requirements. More generally, using nonequilibrium drives allows us to open up new effective control directions via suitably engineered change-of-reference-frame transformations. In particular, we have shown that l-FCD is also applicable in nonintegrable quantum many-body systems.

## V Discussion & Outlook

### Summary

Periodic driving has emerged as a powerful tool to engineer complex Hamiltonians in quantum simulation. However, completing this set of ideas into a self-contained toolbox requires a technique for high-precision control of the engineered Hamiltonians that allows, among others, for high-fidelity state manipulation. The current state of the art in Floquet control is the adiabatic manipulation of parameters. Still, adiabatic control requires long protocols, which is at odds with the limited coherence times of experiments, which benefit from short preparation protocols. To achieve fast control of nonequilibrium quantum matter, we consider a conceptually different approach that aims at suppressing diabatic transitions by adding counterterms to the lab-frame Hamiltonian. We introduced two complementary approaches to (local) Floquet counterdiabatic driving. They enable the computation of controlled approximations to the exact Floquet adiabatic gauge potential used to generate optimal transitionless driving protocols. Both methods rely only on the structure of the Hamiltonian, i.e., they do not require access to eigenvalues or eigenvectors. The first, based on an inverse frequency expansion, is tailored to Floquet engineering applications in the high-frequency regime. However, as it inherits issues from the inverse frequency expansion, this method breaks down in the presence of photon resonances. Therefore, we developed a second, non-perturbative, variational procedure based on a least-action principle in the lab frame, which overcomes the problems of the perturbative high-frequency approach. Using the example of a linearly driven two-level system, we demonstrated that the variational method works in the genuinely nonequilibrium regime of resonant drives. Moreover, by choosing which of the inverse frequency expansion or variational methods to use, we can engineer whether the system should pass through photon resonances diabatically or adiabatically. The control framework we developed in this example can be directly extended to yield fast and efficient control and population transfer between isolated states in few-level systems, such as atoms and molecules, exposed to a periodic drive. In a second application, we sped up a frequency-modulation drive to prepare the ground state of a Floquet fermionic band insulator. Such frequency chirps represent an example of nonequilibrium protocols without equilibrium control counterparts.
We used this example to illustrate the versatility of our method in obtaining short high-fidelity protocols that respect experimental limitations. In particular, we showed how to avoid the implementation of experimentally inaccessible non-local terms present in the exact Floquet adiabatic gauge potential. Finally, we applied local Floquet counterdiabatic driving to transfer the population from a disordered to an ordered ground state in a non-integrable Ising spin chain. This example illustrates a conceptually new approach to enhancing state preparation in static systems. Using periodic drives, we can construct a nonequilibrium adiabatic path that connects two states of matter, even when these states are not connected by adiabatic paths in the original non-driven Hamiltonian. This can be useful, e.g., in quantum annealing, where phase transitions in the parameter landscape pose significant limitations to the performance of solving optimization problems [101, 102]. In summary, Floquet counterdiabatic driving can significantly enhance state preparation without additional hardware requirements.

### Outlook

Besides having immediate applications in quantum simulation and quantum computing, our work opens up several exciting new directions. First, there are certain straightforward extensions of our nonequilibrium control methods to more complex systems. In particular, the algebraic structure underlying the exactly solvable Floquet toy model in App. C can be generalized to discover new classes of analytically solvable Floquet models. For realistic models, where this dynamical integrability is broken, expressive matrix-product operator ansätze may be useful to approximate the adiabatic gauge potential beyond system sizes amenable to exact diagonalization. Second, our work paves the way for a more general control theory for nonequilibrium systems. Specifically, the variational procedure can be used as a starting point to generalize more sophisticated shortcuts-to-adiabaticity techniques [59, 103] to periodically driven systems, such as counterdiabatic-optimized local driving [104]; a second periodic drive may even be implemented to help engineer the Floquet adiabatic gauge potential [91]. In fact, our theory also applies to classical periodically driven systems [58, 105], and can be extended to other nonequilibrium systems described by local effective Hamiltonians, such as random multipolar [106] and quasi-periodic driving [107]. In addition, we expect our work to provide new insights into the study of thermalization in nonequilibrium systems. On the one hand, the adiabatic gauge potential can be used to detect chaos and integrability breaking [108, 109]. On the other hand, an important open problem is the manipulation of prethermal states of matter, where additional control fields may interfere with and compromise the stability of the prethermal plateau. Finally, let us re-emphasize that the non-perturbative variational principle enables the study of Floquet systems beyond the restrictions inherent to high-frequency expansions. For example, the resulting non-perturbative variational adiabatic gauge potential allows for computing linear response functions [58] and detecting photon resonances, without needing to compute the exact non-local Floquet Hamiltonian (see also App. F.2.2). Furthermore, the relation between the gauge potential and the quantum geometric tensor opens up new ways to investigate quantum geometry [58] away from equilibrium.
In addition, as the variational gauge potential contains information about the entire non-perturbative quasi-energy spectrum, the Floquet variational principle we developed could even provide a way to obtain a variational Floquet Hamiltonian.

## Acknowledgements

We thank P. W. Claeys, M. Kolodrubetz, and A. Polkovnikov for insightful discussions. Funded by the European Union (ERC, QuSimCtrl, 101113633). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. The simulations make use of the QuSpin Python package [110, 111]. Simulations were performed on the MPI PKS HPC cluster.

## Appendix A Instantaneous Floquet eigenstates

Notice that, to evaluate the instantaneous fidelity (16), \(F(t)=\left|\left\langle\psi_{0,\lambda(t)}\middle|\psi(t)\right\rangle\right|^{2}\), we have to compute the 'instantaneous' Floquet eigenstate \(\left|\psi_{0,\lambda(t)}\right\rangle\). However, the notion of an instantaneous Floquet eigenstate is ill-defined. Instead, there exists an entire family of Floquet eigenstates \(\left\{\left|\psi_{0,\lambda}(\phi)\right\rangle\right\}_{\phi\in[0,\,2\pi]}\) connected by the micromotion operator \(\mathcal{P}_{\lambda}(\phi)\). Therefore, the question arises which state, or which phase \(\phi\), yields the correct representative. Considering a trivial protocol, where all parameters are fixed, it becomes evident from the Floquet theorem (3) that \(\phi(t)=\omega t+\phi_{0}\) is the correct choice for the phase. In fact, the same relation holds true for frequency modulations as well, which can be motivated by considering the integrated phase \(\phi(t)-\phi_{0}=\int_{0}^{t}\nu(s)\mathrm{d}s=\omega(t)t\) with respect to the instantaneous frequency \(\nu(t)\). To compute the instantaneous Floquet eigenstates for a phase \(\phi=\omega t\), we first evaluate the Floquet Hamiltonian \(\mathcal{H}_{F}[t]\) via \[e^{-i\mathcal{H}_{F}[t]\,T}=\mathcal{T}e^{-i\int_{t}^{t+T}\mathcal{H}_{\lambda}(s)\,\mathrm{d}s}\,, \tag{17}\] and then diagonalizing it.
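A minimal numpy/scipy sketch of this prescription, discretizing the time-ordered exponential over one period and taking a matrix logarithm; the step count and the example Hamiltonian are illustrative:

```python
import numpy as np
from scipy.linalg import expm, logm

def floquet_hamiltonian(H_of_t, t0, T, n_steps=1000):
    """H_F[t0] from the one-period propagator, cf. Eq. (A1)."""
    dim = H_of_t(t0).shape[0]
    U = np.eye(dim, dtype=complex)
    dt = T / n_steps
    for k in range(n_steps):
        t = t0 + (k + 0.5) * dt               # midpoint rule for the time-ordered product
        U = expm(-1j * H_of_t(t) * dt) @ U
    return 1j * logm(U) / T                    # quasi-energies folded into (-pi/T, pi/T]

# example: the linearly driven two-level system of Eq. (17)
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
g, A, w, lam = 1.0, 2.5, 100.0, 0.5
H = lambda t: lam * Sz + (g + A * np.cos(w * t)) * Sx

HF = floquet_hamiltonian(H, t0=0.0, T=2 * np.pi / w)
print(np.linalg.eigvalsh(HF))                  # instantaneous Floquet energies
```

The principal branch of the matrix logarithm automatically folds the quasi-energies into the first Floquet zone.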
## Appendix B Additional Theoretical Results

### Explicit Expressions for Higher Order Inverse Frequency Expansion Approach

In this section, we report on higher-order expressions for the inverse frequency expansion (IFE) (9). Let us recall the inverse frequency expansion for the kick operator and Floquet Hamiltonian (7), \[\mathcal{H}_{F,\lambda}^{(n)} =\sum_{j=0}^{n}\omega^{-j}\mathcal{H}_{F,\lambda}^{[j]}=\mathcal{ H}_{F,\lambda}+O\left(\omega^{-n-1}\right), \tag{18}\] \[\mathcal{K}_{\lambda}^{(n)} =\sum_{j=0}^{n}\omega^{-j}\mathcal{K}_{\lambda}^{[j]}=\mathcal{ K}_{\lambda}+O\left(\omega^{-n-1}\right),\] as well as the general structure of the transformation \[\mathcal{X}_{\lambda}(t)=\mathcal{P}_{\lambda}\mathcal{X}_{F,\lambda}\mathcal{P}_{ \lambda}^{\dagger}+i(\partial_{\lambda}\mathcal{P}_{\lambda})\mathcal{P}_{ \lambda}^{\dagger}\,. \tag{10}\] Using again \(\mathcal{K}_{\lambda}^{[0]}=0\) and that \(\mathcal{X}_{\lambda}^{(n)}\) is the (approximate) AGP for \(\mathcal{H}_{F,\lambda}^{(n)}\), the IFE terms up to the third order read \[\mathcal{X}_{\lambda}^{(0)}(t)=\mathcal{X}_{\lambda}^{(0)}\,, \tag{11a}\] \[\mathcal{X}_{\lambda}^{(1)}(t)=\mathcal{X}_{\lambda}^{(1)}+\omega^{-1}\left( i\Big{[}\mathcal{K}_{\lambda}^{[1]}(t),\mathcal{X}_{\lambda}^{(1)}\Big{]}- \partial_{\lambda}\mathcal{K}_{\lambda}^{[1]}(t)\right),\] (11b) \[\mathcal{X}_{\lambda}^{(2)}(t) =\mathcal{X}_{\lambda}^{(2)}-\partial_{\lambda}\mathcal{K}_{ \lambda}^{(2)}(t)\] (11c) \[+\Big{[}\mathcal{K}_{\lambda}^{(2)}(t),\mathcal{X}_{\lambda}^{(2 )}\Big{]}-\frac{i}{2}\Big{[}\mathcal{K}_{\lambda}^{(1)}(t),\partial_{\lambda }\mathcal{K}_{\lambda}^{(1)}(t)\Big{]}\] \[+\Big{[}\Big{[}\mathcal{K}_{\lambda}^{(1)}(t),\mathcal{X}_{ \lambda}^{(2)}\Big{]},\mathcal{K}_{\lambda}^{(1)}(t)\Big{]}\,,\] and \[\mathcal{X}_{\lambda}^{(3)}(t) =\mathcal{X}_{\lambda}^{(3)}-\partial_{\lambda}\mathcal{K}_{ \lambda}^{(3)}(t) \tag{11d}\] \[+\Big{[}\mathcal{K}_{\lambda}^{(3)}(t),\mathcal{X}_{\lambda}^{(3) }\Big{]}\] \[-\frac{i}{2}\Big{[}\mathcal{K}_{\lambda}^{(2)}(t),\partial_{ \lambda}\mathcal{K}_{\lambda}^{(2)}(t)\Big{]}\] \[+\Big{[}\Big{[}\mathcal{K}_{\lambda}^{(2)}(t),\mathcal{X}_{ \lambda}^{(3)}\Big{]},\mathcal{K}_{\lambda}^{(2)}(t)\Big{]}\] \[-\frac{1}{6}\Big{[}\Big{[}\mathcal{K}_{\lambda}^{(1)}(t), \partial_{\lambda}\mathcal{K}_{\lambda}^{(1)}\Big{]},\mathcal{K}_{\lambda}^{( 1)}(t)\Big{]}\] \[-\frac{i}{6}\Big{[}\Big{[}\Big{[}\mathcal{K}_{\lambda}^{(1)}(t), \mathcal{X}_{\lambda}^{(3)}\Big{]},\mathcal{K}_{\lambda}^{(1)}(t)\Big{]}, \mathcal{K}_{\lambda}^{(1)}(t)\Big{]}\,,\] where, for brevity, the third-order expression also includes higher-order terms in \(\Big{[}\mathcal{K}_{\lambda}^{(2)}(t),\partial_{\lambda}\mathcal{K}_{\lambda }^{(2)}(t)\Big{]}\) and \(\Big{[}\Big{[}\mathcal{K}_{\lambda}^{(2)}(t),\mathcal{X}_{\lambda}^{(3)} \Big{]},\mathcal{K}_{\lambda}^{(2)}(t)\Big{]}\). Let us emphasize that a similar expansion of \(\mathcal{X}_{\lambda}^{(n)}\) in powers of \(\omega^{-1}\) does not exist in general, i.e., the difference between \(\mathcal{X}_{\lambda}^{(n)}\) and \(\mathcal{X}_{\lambda}^{(n-1)}\) might be of higher order than \(O(\omega^{-n})\). Indeed, if the \(n\)'th order correction opens a gap, then the FAGP generally scales as \(\omega^{n}\), as it is proportional to \(1/\text{gap}\) close to an avoided gap closing. A common scenario where this can occur is the presence of quasi-conservation laws. There, the low-order Floquet Hamiltonians may preserve a symmetry, but higher-order terms explicitly break the symmetry, possibly lifting degenerate energy levels.

### Alternative Derivation of Variational Method using extended Floquet Hilbert Space

In this section we provide an additional derivation of the variational method using the so-called extended Floquet Hilbert space approach, also referred to as the two-time treatment [66].

_Floquet Theorem for States._ To this end, we will first recap the derivation of the extended Floquet Hilbert space approach. In the following, we denote the periodic time dependence by \(\phi=\omega t\) in order to avoid confusion between the periodic time dependence and the time dependence associated with the change of parameter \(\lambda=\lambda(t)\). Using this convention, Eq.
(3) reads \[\mathcal{H}_{F,\lambda}=\mathcal{P}_{\lambda}^{\dagger}(\phi)\mathcal{H}_{ \lambda}(\phi)\mathcal{P}_{\lambda}(\phi)-i\dot{\phi}\,\mathcal{P}_{\lambda}^{ \dagger}(\phi)\frac{\mathrm{d}}{\mathrm{d}\phi}\mathcal{P}_{\lambda}(\phi)\,. \tag{12}\] In the following, we will suppress the \(\lambda\) index for all operators. As \(\mathcal{H}_{F}\) is time-independent and hermitian, there exists a complete set of orthonormal eigenstates \(\left|\epsilon_{n}^{F}\right\rangle\) such that \(\mathcal{H}_{F}\left|\epsilon_{n}^{F}\right\rangle=\epsilon_{n}^{F}\left| \epsilon_{n}^{F}\right\rangle\). Transforming back to the lab frame, the time-dependent Schrödinger equation (TDSE) \[\left(i\frac{\mathrm{d}}{\mathrm{d}t}-\mathcal{H}(\phi)\right)\left|\psi(t) \right\rangle=0\,, \tag{13}\] is therefore solved by the so-called Floquet states \[\left|\psi_{n}(t)\right\rangle=e^{-i\epsilon_{n}^{F}t}\left|u_{n}(\phi)\right\rangle\,, \tag{14}\] with the periodic Floquet functions \(\left|u_{n}(\phi)\right\rangle=\mathcal{P}(\phi)\left|\epsilon_{n}^{F}\right\rangle\). Therefore, any solution of the TDSE can be written as \[\left|\psi(t)\right\rangle=\sum_{n}c_{n}e^{-i\epsilon_{n}^{F}t}\left|u_{n}(\phi) \right\rangle\,, \tag{15}\] with complex expansion coefficients \(c_{n}=\left\langle\psi(0)|u_{n}(0)\right\rangle\). Note that there are two distinct time dependences in \(\left|\psi_{n}(t)\right\rangle\): a periodic time dependence inherited from the micromotion, and the phase accumulation \(e^{-i\epsilon_{n}^{F}t}\) known from usual static systems. Therefore, it is natural to split the time derivative \(\frac{\mathrm{d}}{\mathrm{d}t}\) in the TDSE into two contributions, \(\frac{\mathrm{d}}{\mathrm{d}t}=\partial_{t}+\frac{\mathrm{d}\phi}{\mathrm{d}t} \partial_{\phi}\), such that \[i\partial_{t}\left|\psi(t)\right\rangle=\left(-i\nu\partial_{\phi}+\mathcal{H}(t )\right)\left|\psi(t)\right\rangle\,, \tag{16}\] where we used the instantaneous frequency \(\nu=\dot{\phi}=\omega+\dot{\omega}t\) and \(\left|\psi(t)\right\rangle=\sum_{n,m}c_{nm}(t)e^{-in\phi}\left|m\right\rangle\) is considered to be expanded in Fourier modes \(\left\{e^{-in\phi}\right\}_{n\in\mathbb{Z}}\) as well as some physical basis \(\{\left|m\right\rangle\}\).

_Extended Floquet Hilbert Space._ This Fourier expansion of the state motivates the introduction of an extended Floquet Hilbert space, where the periodic time dependence is lifted to its own degree of freedom. Then, the periodic functions \(\left\{e^{in\phi}\right\}_{n\in\mathbb{Z}}\) form an orthogonal basis of the Hilbert space of smooth periodic functions \(\mathcal{L}_{\circ}\) with inner product \[(n|m)=\int_{0}^{2\pi}e^{-in\phi}e^{im\phi}\frac{\mathrm{d}\phi}{2\pi}\,, \tag{17}\] where we introduced the bra-ket notation \(e^{in\phi}\rightarrow|n)\). Likewise, expanding the Hamiltonian in Fourier modes, \(\mathcal{H}(t)=\sum_{n}\mathcal{H}_{n}e^{-in\phi}\), we can promote the multiplication by \(e^{-in\phi}\) to an operator acting on \(\mathcal{L}_{\circ}\), i.e., \(\mathcal{E}_{n}\doteq e^{-in\phi}\,\cdot:\mathcal{L}_{\circ}\rightarrow\mathcal{L}_{\circ}\) with \(\mathcal{E}_{n}|m)=|m-n)\).
Eventually, we can also express the derivative \(\partial_{\phi}\) as an operator on the Hilbert space \(\mathcal{L}_{\circ}\): in fact, from \(-i\partial_{\phi}e^{in\phi}=ne^{in\phi}\) it follows that the derivative acts as a number operator on the space of all harmonics, \(\mathcal{N}\doteq-i\partial_{\phi}:\mathcal{L}_{\circ}\rightarrow\mathcal{L}_{\circ}\) with \(\mathcal{N}|n)=n|n)\). The extended Floquet Hilbert space \(\mathcal{F}=\mathcal{H}\otimes\mathcal{L}_{\circ}\) is the combination of the physical Hilbert space \(\mathcal{H}\) and \(\mathcal{L}_{\circ}\). Elements of this Hilbert space are denoted by \(\ket{\Xi}\rangle\in\mathcal{F}\) and can be expanded as \(\ket{\Xi}\rangle=\sum_{n,m}c_{nm}\ket{m}\otimes|n)\). Therefore, we can rewrite Eq. (16) as \[i\partial_{t}\ket{\psi}\rangle=\mathcal{Q}\ket{\psi}\rangle\;, \tag{111}\] with the time-independent quasi-energy operator \(\mathcal{Q}=\nu\mathcal{N}+\sum_{n}\mathcal{H}_{n}\mathcal{E}_{n}\). Then, Eq. (111) can be solved by a separation ansatz, \(\ket{\psi(t)}\rangle=e^{-iqt}\ket{\Phi}\rangle\), with eigenfunctions \(\ket{\Phi}\rangle\) which satisfy \(\mathcal{Q}\ket{\Phi}\rangle=q\ket{\Phi}\rangle\) with quasi-energies \(q\). Therefore, the quasi-energy operator \(\mathcal{Q}\) is a time-independent object and contains all information about the Floquet problem, i.e., diagonalizing it amounts to computing the Floquet functions \(\ket{u_{n}}\) and quasi-energies \(\epsilon_{n}^{F}\) in Eq. (14). Note that, using the extended Floquet Hilbert space, the instantaneous frequency \(\nu=\dot{\phi}\) appears naturally as the frequency of interest. Before we continue deriving the variational ansatz, a short comment about the extended Floquet Hilbert space is in order. Note that, since \(\ket{\Phi}\rangle\) is an eigenfunction of \(\mathcal{Q}\) with eigenvalue \(q\), also \(\mathcal{E}_{n}\ket{\Phi}\rangle\) is an eigenfunction of \(\mathcal{Q}\), with eigenvalue \(q-n\omega\). However, projecting \(\ket{\psi(t)}\rangle=e^{-iqt}\ket{\Phi}\rangle\) and \(\ket{\psi^{\prime}(t)}\rangle=e^{-i(q-n\omega)t}\mathcal{E}_{n}\ket{\Phi}\rangle\) back to the physical Hilbert space, the two states are in fact the same, \(\ket{\psi(t)}=\ket{\psi^{\prime}(t)}\). Hence, \(\mathcal{F}\) contains infinitely many copies of the physical problem, each separated by \(n\omega\).

_Floquet Counterdiabatic Driving._ Reintroducing the control parameter \(\lambda\) in Eq. (111), we have \[i\partial_{t}\ket{\psi(t)}\rangle=\mathcal{Q}_{\lambda}\ket{\psi(t)}\rangle\] with \[\mathcal{Q}_{\lambda}=\nu(\lambda)\mathcal{N}+\sum_{n}\mathcal{H}_{n}(\lambda )\mathcal{E}_{n}\;. \tag{112}\] Note that formally, Floquet counterdiabatic driving is equivalent to usual static counterdiabatic driving if one replaces the Hamiltonian by the quasi-energy operator \(\mathcal{Q}_{\lambda}\). In fact, as \(\mathcal{Q}_{\lambda}\) is hermitian, there exists for each fixed \(\lambda\) a unitary transformation \(\mathcal{V}_{\lambda}\) such that \(\tilde{\mathcal{Q}}_{\lambda}{=}\mathcal{V}_{\lambda}^{\dagger}\mathcal{Q}_{ \lambda}\mathcal{V}_{\lambda}=\nu\mathcal{N}+\tilde{\mathcal{H}}\) is diagonal.
Thus, in the moving frame \(\ket{\tilde{\psi}(t)}\rangle=\mathcal{V}_{\lambda(t)}^{\dagger}\ket{\psi(t)}\rangle\), the TDSE reads \[\left(i\partial_{t}-\tilde{\mathcal{Q}}_{\lambda}+\dot{\lambda}\tilde{ \mathcal{A}}_{\lambda}\right)\ket{\tilde{\psi}(t)}\rangle=0\;, \tag{113}\] with the _Floquet adiabatic gauge potential_ (FAGP) \(\tilde{\mathcal{A}}_{\lambda}=i\mathcal{V}_{\lambda}^{\dagger}\partial_{ \lambda}\mathcal{V}_{\lambda}\). As \(\tilde{\mathcal{Q}}_{\lambda}\) is diagonal, all transitions are driven by the FAGP \(\tilde{\mathcal{A}}_{\lambda}\), such that the Floquet counterdiabatic quasi-energy operator reads \[\mathcal{Q}_{\text{CD}}=\mathcal{Q}_{\lambda}+\dot{\lambda}\mathcal{A}_{\lambda }\;. \tag{114}\] The Floquet theorem in the extended Hilbert space guarantees that, for fixed \(\lambda\), \(\mathcal{Q}_{\lambda}\) can be diagonalized by a unitary \(\mathcal{V}_{\lambda}\) which corresponds to a time-periodic operator [65]. Therefore, \(\mathcal{A}_{\lambda}(\phi)\) is also a time-periodic operator for fixed \(\lambda\). The FAGP satisfies the usual identities obeyed by standard AGPs, see e.g. Ref. [58], upon using the appropriate extended Floquet Hilbert space analoga, \[(\mathcal{A}_{\lambda})_{nm} =i\left\langle\langle n_{\lambda}|\partial_{\lambda}m_{\lambda} \rangle\right\rangle\,, \tag{115a}\] \[(\mathcal{A}_{\lambda})_{nm} =i\frac{\left\langle\langle n_{\lambda}|\ \partial_{\lambda}\mathcal{Q}_{\lambda}\ket{m_{\lambda}}\rangle\right\rangle}{q _{m}(\lambda)-q_{n}(\lambda)}\;,\quad n\neq m\,,\] (115b) \[[\mathcal{A}_{\lambda},\mathcal{Q}_{\lambda}] =i\partial_{\lambda}\mathcal{Q}_{\lambda}+i\mathcal{M}_{\lambda}\;,\] (115c) \[0 =[\mathcal{Q}_{\lambda},i\partial_{\lambda}\mathcal{Q}_{\lambda}-[ \mathcal{A}_{\lambda},\mathcal{Q}_{\lambda}]]\;, \tag{115d}\] where \(\ket{n_{\lambda}}\rangle\) is the eigenstate of \(\mathcal{Q}_{\lambda}\) with eigenvalue \(q_{n}(\lambda)\) and \(\mathcal{M}_{\lambda}=-\sum_{n}\ket{n_{\lambda}}\rangle\,\partial_{\lambda}q_{ n}\,\langle\langle n_{\lambda}|\) is the generalized force operator. Let us briefly sketch the proofs of Eqs. (115), which are equivalent to the usual AGP case (cf. [58]). Equation (115a) is a direct result of the definition \(\mathcal{A}_{\lambda}=i(\partial_{\lambda}\mathcal{V}_{\lambda})\mathcal{V}_{\lambda}^{\dagger}\). Likewise, (115b) follows directly from (115a) using \(\mathcal{Q}_{\lambda}\ket{n_{\lambda}}\rangle=q_{n}(\lambda)\ket{n_{\lambda}}\rangle\), with (115c) being the operator expression of (115b). Eventually, Eq. (115d) follows from Eq. (115c) using that \([\mathcal{Q}_{\lambda},\mathcal{M}_{\lambda}]=0\), which in turn is a direct result of the two operators sharing the same eigenstates.

_Variational Principle._ Equations (115) can be used to derive the variational principle in complete analogy to the static case (see e.g. [58]). For completeness, we provide a brief derivation. To this end, let us introduce \[\mathcal{G}_{\lambda}(\mathcal{X})=i[\mathcal{Q}_{\lambda},\mathcal{X}_{\lambda} ]-\partial_{\lambda}\mathcal{Q}_{\lambda}\,, \tag{116}\] where \(\mathcal{X}_{\lambda}\) is a hermitian test operator. Then, Eq. (115c) can be written as \(\mathcal{G}_{\lambda}(\mathcal{A}_{\lambda})=\mathcal{M}_{\lambda}\).
Therefore, the FAGP is the unique operator minimizing the Frobenius distance \[D_{\lambda}^{2}(\mathcal{X}_{\lambda}) =\operatorname{Tr}_{\mathcal{F}}\big(\,[\mathcal{G}_{\lambda}(\mathcal{X}_{\lambda})-\mathcal{M}_{\lambda}]^{2}\big) \tag{117}\] \[=\operatorname{Tr}_{\mathcal{F}}\big(\,\mathcal{G}_{\lambda}^{2}(\mathcal{X}_{\lambda})\big)-2\operatorname{Tr}_{\mathcal{F}}\big(\,\mathcal{G}_{\lambda}(\mathcal{X}_{\lambda})\,\mathcal{M}_{\lambda}\big)+\operatorname{Tr}_{\mathcal{F}}\big(\,\mathcal{M}_{\lambda}^{2}\big)\;.\] Note that, for the periodic operators that we consider, the trace over the extended Floquet Hilbert space can be written as \(\operatorname{Tr}_{\mathcal{F}}(\,\cdot\,)=\operatorname{Tr}_{\mathcal{H}}(\bra{0}\cdot\ket{0})\), i.e., as the trace over the physical Hilbert space of the zero-harmonic block. Using the cyclicity of the trace, it is easy to show that \(\operatorname{Tr}_{\mathcal{F}}(\,[\mathcal{X},\mathcal{Q}]\,\mathcal{M})=0\) and thus \[D_{\lambda}^{2}(\mathcal{X})=\operatorname{Tr}_{\mathcal{F}}(\,\mathcal{G}_{\lambda}^{2}(\mathcal{X}))+\operatorname{Tr}_{\mathcal{F}}(\,\mathcal{M}_{\lambda}^{2})+2\operatorname{Tr}_{\mathcal{F}}(\,\partial_{\lambda}\mathcal{Q}_{\lambda}\mathcal{M}_{\lambda})\;. \tag{118}\] As the two last terms in Eq. (118) are independent of \(\mathcal{X}\), minimizing \(D_{\lambda}^{2}(\mathcal{X})\) is equivalent to minimizing the action \[S[\mathcal{X}]=\operatorname{Tr}_{\mathcal{F}}(\,\mathcal{G}_{\lambda}^{2}(\mathcal{X}))\;. \tag{119}\] Equation (119) can be projected to the physical Hilbert space by using \(\operatorname{Tr}_{\mathcal{F}}(\,\cdot\,)\rightarrow\int_{0}^{2\pi}\operatorname{Tr}_{\mathcal{H}}(\,\cdot\,)\,\mathrm{d}\phi/(2\pi)\) and \(i[\mathcal{N},\mathcal{X}_{\lambda}]\rightarrow\partial_{\phi}\mathcal{X}_{\lambda}\), which yields \[S[\mathcal{X}_{\lambda}]=\frac{1}{2\pi}\int_{0}^{2\pi}\operatorname{Tr}_{\mathcal{H}}\big(\mathcal{G}_{\lambda}^{2}(\mathcal{X}_{\lambda})\big)\,\mathrm{d}\phi\,,\qquad\mathcal{G}_{\lambda}(\mathcal{X}_{\lambda})=i[\mathcal{H}_{\lambda}(\phi),\mathcal{X}_{\lambda}]+\nu\,\partial_{\phi}\mathcal{X}_{\lambda}-\partial_{\lambda}\mathcal{H}_{\lambda}(\phi)\;.\]

## Appendix C Exactly Solvable Model

In this section, we report on the analytically solvable circularly driven two-level system \[\mathcal{H}_{\rm c}(t)=\Delta S^{z}+A\left[\cos(\omega t)S^{x}+\sin(\omega t)S^{y}\right], \tag{108}\] with level splitting \(\Delta\), driving amplitude \(A\) and driving frequency \(\omega\). This model is the non-interacting limiting case of the circularly driven Ising model (28) considered in the main text. Likewise, it can be mapped to a static problem using the frame transformation \(V=\exp(-i\phi(t)S^{z})\) with \(\phi(t)=\omega t\). This leads to \[\widetilde{\mathcal{H}}=(\Delta-\omega)S^{z}+AS^{x}\,, \tag{109}\] where we used that \(\dot{\phi}=\omega\) for fixed frequency. Note that Eq. (109) is not yet the Floquet Hamiltonian, as its spectrum is not folded into an energy window of width \(\omega\) [2]. However, since this folding is trivial, i.e., no photon resonances occur, we can skip this step in the following. So far, the analysis is general and applies to any parameter in Eq. (108). In the following, we focus on a frequency that is constant in time, \(\frac{\mathrm{d}}{\mathrm{d}t}\omega=0\). The case of a time-dependent frequency, a so-called chirp, is studied in detail in Sec. D.3. In the rotating frame, the Hamiltonian (109) can be diagonalized via \(U(\alpha)=\exp(-i\alpha S^{y})\), where \(\tan(\alpha)=A/(\Delta-\omega)\).
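A quick numerical check of this mapping (a sketch; the parameter values and the Trotter step are assumptions): the eigenphases of the one-period propagator of \(\mathcal{H}_{\rm c}(t)\) should reproduce the spectrum of the static Hamiltonian (109), up to the trivial frame factor \(V(T)=\exp(-i2\pi S^{z})=-\mathbb{1}\) for spin-\(\frac{1}{2}\).

```python
import numpy as np
from scipy.linalg import expm

Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.diag([1.0, -1.0]).astype(complex)
Delta, A, w = 1.0, 0.5, 3.0
T = 2 * np.pi / w

U = np.eye(2, dtype=complex)
steps = 4000
dt = T / steps
for k in range(steps):
    t = (k + 0.5) * dt                          # midpoint rule
    Hc = Delta * Sz + A * (np.cos(w * t) * Sx + np.sin(w * t) * Sy)
    U = expm(-1j * Hc * dt) @ U

# U(T) = V(T) exp(-i Htilde T) with V(T) = -1, so diagonalize -U
eps = np.sort(-np.angle(np.linalg.eigvals(-U)) / T)
E = 0.5 * np.sqrt((Delta - w) ** 2 + A ** 2)
print(eps, [-E, E])                             # should agree (mod w)
```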
Therefore, the FAGPs \(\mathcal{A}_{\Delta}\) and \(\mathcal{A}_{A}\) for \(\mathcal{H}_{\rm c}\) upon changing \(\Delta\) and \(A\), respectively, read \[\mathcal{A}_{\Delta}(t) =\frac{-A}{(\Delta-\omega)^{2}+A^{2}}(\cos(\omega t)S^{y}-\sin(\omega t)S^{x})\,, \tag{110}\] \[\mathcal{A}_{A}(t) =\frac{\Delta-\omega}{(\Delta-\omega)^{2}+A^{2}}(\cos(\omega t)S^{y}-\sin(\omega t)S^{x})\,.\] Notice that there is no additional contribution due to \(V^{\dagger}\partial_{A}V\), as \(V\) is independent of both \(A\) and \(\Delta\). We can also derive the FAGP in the extended Floquet Hilbert space. This allows us to gain more insight into the system and the mechanism allowing for an exact solution. To this end, let us consider the quasi-energy operator \[\mathcal{Q}_{\rm c}=\omega\mathcal{N}+\Delta S^{z}+\frac{A}{2}S^{+}\mathcal{E}^{-}+\frac{A}{2}S^{-}\mathcal{E}^{+}\,, \tag{111}\] where \(\mathcal{E}^{\pm}\,\dot{=}\,e^{\pm i\omega t}\). Additionally, we introduced the spin-\(\frac{1}{2}\) raising and lowering operators \(S^{\pm}=S^{x}\pm iS^{y}\), which fulfill \([S^{z},S^{\pm}]=\pm S^{\pm}\). Note that we can define generalized raising and lowering operators \(\Sigma^{\pm}=S^{\pm}\mathcal{E}^{\mp}\) with associated pseudo-spin operators \(\Sigma^{x}=(\Sigma^{+}+\Sigma^{-})/2\), \(\Sigma^{y}=(\Sigma^{+}-\Sigma^{-})/(2i)\) and \(\Sigma^{z}=S^{z}\). These operators also fulfill the usual spin-\(\frac{1}{2}\) algebra \[\begin{split}\left[\Sigma^{z},\Sigma^{\pm}\right]&=\pm\Sigma^{\pm}\,,\\ \left[\Sigma^{+},\Sigma^{-}\right]&=2\Sigma^{z}\,,\\ \left[\Sigma^{\alpha},\Sigma^{\beta}\right]&=i\epsilon_{\alpha\beta\gamma}\Sigma^{\gamma}\,,\end{split} \tag{112}\] with \(\alpha,\beta,\gamma\in\{x,y,z\}\) and the fully anti-symmetric Levi-Civita symbol \(\epsilon\). Additionally, we find \[\begin{split}\left[\mathcal{N},\Sigma^{\pm}\right]&=\mp\Sigma^{\pm}\,,\\ \left[\mathcal{N},\Sigma^{\alpha}\right]&=-i\epsilon_{z\alpha\gamma}\Sigma^{\gamma}\,.\end{split} \tag{113}\] Therefore, the set of operators \(\{\mathcal{N},\,\Sigma^{x},\,\Sigma^{y},\,\Sigma^{z}\}\) forms a closed algebra akin to the standard Pauli algebra, with the number operator \(\mathcal{N}\) taking a role similar to \(\Sigma^{z}\) (up to a sign). Finally, let us consider the action of the adjoint representation of \(\exp(-i\alpha\Sigma^{y})\), \[\begin{split}\exp(i\alpha\Sigma^{y})\Sigma^{z}\exp(-i\alpha\Sigma^{y})&=\cos(\alpha)\Sigma^{z}-\sin(\alpha)\Sigma^{x}\,,\\ \exp(i\alpha\Sigma^{y})\Sigma^{x}\exp(-i\alpha\Sigma^{y})&=\cos(\alpha)\Sigma^{x}+\sin(\alpha)\Sigma^{z}\,,\\ \exp(i\alpha\Sigma^{y})\mathcal{N}\exp(-i\alpha\Sigma^{y})&=\mathcal{N}+\sin(\alpha)\Sigma^{x}+(1-\cos(\alpha))\Sigma^{z}\,,\end{split} \tag{114}\] which follows immediately from Eqs. (112) and (113). Using the generalized raising and lowering operators, we can write the quasi-energy operator (111) as \[\mathcal{Q}_{\rm c}=\omega\mathcal{N}+\Delta\Sigma^{z}+A\Sigma^{x}\,. \tag{115}\] Then, Eq. (114) implies that \(\mathcal{W}=\exp(-i\alpha\Sigma^{y})\) diagonalizes the quasi-energy operator for \(\tan\alpha=A/(\Delta-\omega)\). Hence, the adiabatic gauge potentials in the extended Floquet Hilbert space read \[\begin{split}\mathcal{A}_{\Delta}&=\frac{-A}{(\Delta-\omega)^{2}+A^{2}}\Sigma^{y}\,,\\ \mathcal{A}_{A}&=\frac{\Delta-\omega}{(\Delta-\omega)^{2}+A^{2}}\Sigma^{y}\,,\end{split} \tag{116}\] which is equivalent to Eqs. (110) upon projecting to the physical Hilbert space.
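The transitionless property of the counterdiabatic term built from \(\mathcal{A}_{\Delta}\) in Eq. (110) can be checked numerically. The following sketch ramps \(\Delta\) at fixed \(\omega\); the ramp shape, duration, and all parameter values are assumptions, and the final fidelity should be unity up to integration error.

```python
import numpy as np
from scipy.integrate import solve_ivp

Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.diag([1.0, -1.0]).astype(complex)
A, w, T = 1.0, 3.0, 2.0
D_i, D_f = 4.0, 0.5

def D(t):  return D_i + (D_f - D_i) * np.sin(0.5 * np.pi * t / T) ** 2
def dD(t): return (D_f - D_i) * 0.5 * np.pi / T * np.sin(np.pi * t / T)

def H_cd(t):
    c, s = np.cos(w * t), np.sin(w * t)
    H0 = D(t) * Sz + A * (c * Sx + s * Sy)
    pref = -A / ((D(t) - w) ** 2 + A ** 2)      # prefactor of Eq. (110)
    return H0 + dD(t) * pref * (c * Sy - s * Sx)

def eigstate_lab(t, Delta):
    """Ground state of (Delta - w)Sz + A Sx, rotated back to the lab frame."""
    _, vecs = np.linalg.eigh((Delta - w) * Sz + A * Sx)
    V = np.diag(np.exp(-1j * w * t * np.array([0.5, -0.5])))
    return V @ vecs[:, 0]

sol = solve_ivp(lambda t, p: -1j * (H_cd(t) @ p), (0.0, T),
                eigstate_lab(0.0, D_i), rtol=1e-10, atol=1e-12)
F = abs(np.vdot(eigstate_lab(T, D_f), sol.y[:, -1])) ** 2
print("fidelity:", F)                            # ~1: transitionless driving
```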
In summary, the circularly driven two-level system is analytically solvable due to the closed algebra generated by the quasi-energy operator in the extended Floquet Hilbert space. This insight might be generalized to obtain more complex, analytically tractable Floquet systems, e.g., by ensuring a closure relation among the operators appearing in the quasi-energy operator.

## Appendix D Details on Frequency Ramps

In this section, we use the extended Floquet Hilbert space approach to rigorously derive the variational action (14) for frequency ramps. Moreover, we illustrate our findings on the exactly solvable circularly driven two-level system.

### Floquet Counterdiabatic Driving for Chirps

First, we give a rigorous derivation of the variational action (14) for chirps using the extended Floquet Hilbert space approach. In Sec. B.2, we used the instantaneous frequency throughout, such that the quasi-energy operator (109), \[\mathcal{Q}_{\lambda}=\nu_{\lambda}\mathcal{N}+\sum_{n}\mathcal{H}_{n,\lambda}\mathcal{E}_{n}\,,\] forms a perfect starting point. Note that, using the quasi-energy operator \(\mathcal{Q}_{\lambda}\) (109), we find for a chirp, \(\lambda=\nu\), that \(\partial_{\nu}\mathcal{Q}_{\lambda}=\mathcal{N}\) appears in the variational principle (110). The appearance of the number operator in the variational principle is not a fundamental problem. However, the practical computation of the variational FAGP becomes more difficult due to the non-periodicity and unboundedness of \(\mathcal{N}\). An important observation is that changing the quasi-energy operator by a multiplicative (possibly time-dependent) factor, \(\mathcal{Q}\to\mathcal{Q}/f(\lambda)\), leaves the eigenstates unchanged and, hence, does not change the FAGP. Therefore, we can instead consider the re-scaled (dimensionless) quasi-energy operator \[q_{\lambda}=\frac{\mathcal{Q}_{\lambda}}{\nu_{\lambda}}=\mathcal{N}+\nu_{\lambda}^{-1}\sum_{n}\mathcal{H}_{n,\lambda}\mathcal{E}_{n}\,. \tag{10}\] Then, the number operator no longer appears in the derivative \(\partial_{\nu}q_{\lambda}=-\nu_{\lambda}^{-2}\sum_{n}\mathcal{H}_{n,\lambda}\mathcal{E}_{n}\), making the variational principle easier to solve. However, note that since the instantaneous frequency \(\nu\) explicitly depends on \(\dot{\omega}\) and time \(t\), the FAGP attains an explicit time dependence. This is in contrast to the merely implicit time dependence it carries via \(\lambda=\lambda(t)\) if the frequency is fixed. Therefore, it is convenient to choose \(\lambda=t\) as the fundamental protocol parameter. Then, combining everything, we arrive at an action of the form of Eq. (100) with \[\begin{split} S[\mathcal{X}_{\lambda}]&=\int_{0}^{2\pi}\operatorname{Tr}(\mathcal{G}_{\lambda}^{2}(\mathcal{X}_{\lambda}))\,\mathrm{d}\phi\,,\\ \mathcal{G}_{\lambda}(\mathcal{X}_{\lambda})&=i[\mathcal{H},\mathcal{X}_{\lambda}]+\nu\,\partial_{\phi}\mathcal{X}_{\lambda}-\frac{\dot{\nu}}{\nu}\mathcal{H}-\dot{\lambda}\,\partial_{\lambda}\mathcal{H}\,,\end{split} \tag{11}\] where \(\partial_{\lambda}\) acts on any protocol parameter that is not related to the periodicity of the drive \(\phi\). A common scenario where the frequency appears both to describe the periodicity and as a non-periodic parameter is strong driving, where the amplitude \(A\) of a drive scales with the frequency, \(A\propto\omega\).
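A small numerical illustration of the distinction between \(\omega(t)\) and the instantaneous frequency \(\nu(t)=\mathrm{d}[\omega(t)t]/\mathrm{d}t=\omega+t\dot{\omega}\) may be useful here; the phase convention \(\phi(t)=\omega(t)t\) is assumed, and the ramp parameters are illustrative.

```python
import numpy as np

T, wi, wf = 1.0, 20.0, 10.0
t = np.linspace(0.0, T, 2001)

omega_lin = wi + (wf - wi) * t / T                                # linear ramp
omega_smooth = wi + (wf - wi) * np.sin(0.5 * np.pi * t / T) ** 2  # smooth ramp

def nu(omega):
    return omega + t * np.gradient(omega, t)

# For the smooth ramp nu(T) = omega(T), because d(omega)/dt vanishes at the end
# point; for the linear ramp nu(T) = 2*wf - wi differs from omega(T) = wf.
print(nu(omega_lin)[-1], omega_lin[-1])
print(nu(omega_smooth)[-1], omega_smooth[-1])
```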
### Adjusted Floquet Counterdiabatic Driving for Chirps

The action (14) describes the FAGP which leads to transitionless driving along the adiabatic path. Let us recall that for a chirp this adiabatic path is described by the instantaneous frequency \(\nu(t)\) instead of the time-dependent frequency \(\omega(t)\), see Sec. III. Here, we demonstrate that using an adjusted action we can obtain transitionless driving along the instantaneous Floquet states described by the change in the time-dependent frequency \(\omega(t)\), not the instantaneous frequency \(\nu(t)\). To this end, let us recall that in Sec. III we concluded that parts of the 'naive' FAGP contribution \(\mathcal{A}_{\mathcal{P}}=-i\mathcal{P}_{\omega}\partial_{\omega}\mathcal{P}_{\omega}^{\dagger}\) must be included in the Floquet Hamiltonian. This procedure restores adiabaticity, i.e., \(\dot{\lambda}\|\mathcal{A}_{\mathcal{P}}\|\to 0\) in the adiabatic limit, but requires the change \(\nu(t)\to\omega(t)\). However, if we insist on using the entire expression \(\mathcal{A}_{\mathcal{P}}=-i\mathcal{P}_{\omega}\partial_{\omega}\mathcal{P}_{\omega}^{\dagger}\) as part of the FAGP, the state protocol will follow the time-dependent frequency \(\omega(t)\). This comes at the expense that the counter term will not vanish in the adiabatic limit, see also the example in Sec. D.3. In order to derive the variational principle for this case, one simply retraces the steps from Eq. (101) up to Eq. (100), setting explicitly \(\nu=\omega\), i.e., assuming \(\frac{\mathrm{d}}{\mathrm{d}t}=\omega\,\frac{\mathrm{d}}{\mathrm{d}\phi}\). However, in order to correct for the fact that the phase \(\phi=\omega t\) depends on \(\omega\), one eventually has to use \(\frac{\mathrm{d}}{\mathrm{d}\omega}=\partial_{\omega}+t\partial_{\phi}\), such that the variational principle reads \[\begin{split} S^{\prime}[\mathcal{X}_{\omega}]&=\int_{0}^{2\pi}\operatorname{Tr}(\mathcal{G}_{\omega}^{\prime\,2}(\mathcal{X}_{\omega}))\,\mathrm{d}\phi\,,\\ \mathcal{G}_{\omega}^{\prime}(\mathcal{X}_{\omega})&=i[\mathcal{H},\mathcal{X}_{\omega}]+\omega\,\partial_{\phi}\mathcal{X}_{\omega}-\partial_{\omega}\mathcal{H}-t\,\partial_{\phi}\mathcal{H}\,.\end{split} \tag{12}\]

### Example: Circularly Driven Two-level System

To illustrate the theoretical considerations above, let us consider again the exactly solvable circularly driven two-level system (108), c.f. Sec. C, \[\mathcal{H}_{\rm c}(t)=\Delta S^{z}+A\left(\cos(\omega t)S^{x}+\sin(\omega t)S^{y}\right)\,.\] As detailed in Sec. C, we can use a unitary transformation, \(V=\exp(-i\phi(t)S^{z})\), to map Eq. (108) to the static system \[\widetilde{\mathcal{H}}=(\Delta-\omega)S^{z}+AS^{x}\,.\]

Figure 7: **Protocol Dependence for Chirp.** Unassisted state preparation for smooth **(A)** and linear **(B)** frequency ramps in the circularly driven two-level system (108). _(i)_ Driving protocol for the time-dependent frequency \(\omega(t)\) (gray solid) and the instantaneous frequency \(\nu(t)\) (blue dashed). _(ii)_ \(z\)-polarization of the time-evolved state (blue) as a function of the ramp time \(T_{\text{ramp}}\). The constant black line corresponds to the \(z\)-polarization of the target state. _(iii)_ Fidelity of the time-evolved state at the end of the protocol with respect to the Floquet eigenstate at the instantaneous frequency, \(\nu=\omega_{\text{f}}\); for **(B)** also with respect to \(\omega\) (inset).
The adiabatic path does not follow the time-dependent frequency \(\omega\)-ramp but rather the instantaneous frequency \(\nu\), which differs from it for \(t\neq 0\) for the linear ramp but coincides with the expected result for the smooth ramp at the beginning and end of the protocol, since \(\dot{\omega}(t=T_{\text{ramp}})=0\). We used \(A=1\) and \(\Delta=10\), \(\omega_{\text{i}}=20\) and \(\omega_{\text{f}}=10\); the smooth ramp is given by \(\omega(t)=\omega_{\text{i}}+(\omega_{\text{f}}-\omega_{\text{i}})\sin^{2}(x\pi/2)\) with \(x=(t-t_{\text{i}})/T_{\text{ramp}}\).

Let us consider ramping the frequency from some initial value \(\omega(0)=\omega_{\rm i}\) to a final value \(\omega(T_{\rm ramp})=\omega_{\rm f}\). Then, we would naively expect that the \(\dot{\phi}\) contribution which deviates from \(\omega\) should be part of the adiabatic gauge potential, i.e., \(\dot{\omega}\mathcal{A}\supset t\dot{\omega}S^{z}\). However, as shown in Sec. III, the \(t\dot{\omega}S^{z}\) contribution does not vanish in the adiabatic limit and therefore cannot be part of the adiabatic gauge potential.

_Adiabatic Limit._ To restore the adiabatic limit, \(t\dot{\omega}S^{z}\) should be considered a relevant perturbation regardless of the protocol and hence should be included in the Floquet Hamiltonian \[\widetilde{\mathcal{H}}=(\Delta-\nu(t))S^{z}+AS^{x}\,. \tag{10}\] Note that, while in the adiabatic limit \(\dot{\omega}\to 0\) the effective frequency does not approach the actual frequency at all times, \(\nu\not\to\omega\), we do have \(\dot{\nu}\to 0\). Let us further emphasize that the frequency ramp leads to a change in the frequency appearing in the quasi-energy operator (10), but does not affect the period of the oscillations, i.e., the micromotion operator \(V=\exp(i\omega(t)tS^{z})\) remains unchanged. For sufficiently smooth protocols that approach their final value with a vanishing slope, \(\dot{\omega}(T_{\rm ramp})=0\), the two frequencies agree at the final point, \(\omega(T_{\rm ramp})=\nu(T_{\rm ramp})\); thus, the final states also agree. However, this may still lead to an increase in the adiabaticity time scale: in general, the maximal slope of \(\nu\), \(\max_{t}|\dot{\nu}|\), will be larger than the maximal slope of \(\omega\), \(\max_{t}|\dot{\omega}|\). This is a direct result of \(\nu\) being a non-convex function in time whenever \(\dot{\omega}(T_{\rm ramp})=0\). If, however, \(\dot{\omega}(T_{\rm ramp})\neq 0\), as is the case for a linear frequency ramp, then also \(\omega(T_{\rm ramp})\neq\nu(T_{\rm ramp})\), and the prepared state might differ from the targeted state.

In Fig. 7, we demonstrate the problem for a linear frequency ramp (B) in contrast to a smooth ramp (A). Let us consider that we want to prepare the \(x\)-polarized state--or, more precisely, the Floquet state which circles around the \(z\)-axis in the \(xy\)-plane--starting from the (almost) \(z\)-polarized state. To this end, we choose a frequency chirp with \(|\Delta-\omega_{\rm i}|\gg A\) and \((\Delta-\omega_{\rm f})=0\). This leads to \(\widetilde{\mathcal{H}}(t=t_{\rm f})=AS^{x}\) if the state follows the time-dependent frequency \(\omega(t)\). In Fig. 7(i), we display the corresponding frequency ramps \(\omega=\omega(t)\) and the effective frequency \(\nu(t)\) experienced by the system. We first estimate the performance of the protocol via a physical observable. Therefore, we consider the \(z\)-magnetization, \(\langle\sigma^{z}\rangle\), which should vanish for the target \(x\)-polarized state.
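The qualitative behavior of Fig. 7 can be reproduced with a short simulation; the following sketch uses the caption parameters, while the ramp time and precision settings are assumptions. The smooth ramp should end near \(\langle\sigma^{z}\rangle\approx 0\), whereas the linear ramp retains a finite \(z\)-polarization.

```python
import numpy as np
from scipy.integrate import solve_ivp

Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.diag([1.0, -1.0]).astype(complex)
Delta, A, wi, wf, T = 10.0, 1.0, 20.0, 10.0, 400.0

ramps = {
    "smooth": lambda t: wi + (wf - wi) * np.sin(0.5 * np.pi * t / T) ** 2,
    "linear": lambda t: wi + (wf - wi) * t / T,
}

def rhs(t, psi, omega):
    phi = omega(t) * t                     # phase convention phi = omega(t) t
    H = Delta * Sz + A * (np.cos(phi) * Sx + np.sin(phi) * Sy)
    return -1j * (H @ psi)

psi0 = np.array([1.0, 0.0], dtype=complex)  # (almost) z-polarized initial state
for name, omega in ramps.items():
    sol = solve_ivp(rhs, (0.0, T), psi0, args=(omega,), rtol=1e-8, atol=1e-10)
    psi = sol.y[:, -1]
    print(name, np.real(np.vdot(psi, 2 * Sz @ psi)))   # final <sigma_z>
```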
As the \(z\)-magnetization is unaffected by the micromotion operator \(\mathcal{P}\propto e^{-i\phi S^{z}}\), a finite \(z\)-polarization indicates failure of the state preparation procedure. With increasing ramp time \(T_{\rm ramp}{\rightarrow}\infty\), the \(z\)-magnetization saturates for both ramps, indicating that the adiabatic limit has been reached, see Fig. 7(ii). However, for the linear ramp, Fig. 7B(ii), the \(z\)-magnetization attains a non-zero value (\(\langle\sigma^{z}\rangle\to 1\)), showing that the protocol failed to prepare the target state. In addition, we consider the fidelity \(F\) with respect to the Floquet eigenstate following the instantaneous frequency \(\nu\), see Fig. 7(iii). We find for both ramps, linear and smooth, that the fidelity approaches unity for large ramp times \(T_{\rm ramp}\), confirming that the adiabatic state follows \(\nu(t)\) and not \(\omega(t)\). For the smooth ramp, we find that the adiabatic limit is reached only at much larger ramp times compared to the linear ramp. However, the final state in the adiabatic limit agrees with the target state.

_Counterdiabatic Driving._ As discussed in Secs. D.1 and D.2, there are two ways to perform counterdiabatic driving in the case of chirps. One can either follow the adiabatic path described by instantaneous eigenstates following \(\nu(t)\), i.e., eigenstates of (10). Alternatively, one can follow the naively expected path which corresponds to instantaneous eigenstates with respect to \(\omega(t)\), i.e., eigenstates of Eq. (11). For both cases, we can readily write down the counterdiabatic term using the results from Sec. C: \[\dot{\nu}\mathcal{A}_{\omega}(\nu) =\frac{A\dot{\nu}}{(\Delta-\nu)^{2}+A^{2}}(\cos(\phi)S^{y}-\sin(\phi)S^{x})\,, \tag{11a}\] \[\dot{\omega}\mathcal{A}_{\omega}^{\prime}(\omega) =\dot{\omega}\mathcal{A}_{\omega}(\omega)+\dot{\omega}tS^{z}\,, \tag{11b}\] where \(\mathcal{A}_{\omega}^{\prime}\) denotes the alternative case. From Eq. (11b) it is clear that the contribution \(\dot{\omega}\mathcal{A}_{\omega}^{\prime}\) will remain finite in the adiabatic limit \(\dot{\omega}\to 0\). In Fig. 8, the effect of the two FCD protocols is depicted for the linear ramp. As expected, the FCD protocols lead to unit fidelity regardless of the ramp time; the fidelity is measured with respect to the corresponding targeted 'adiabatic' path. However, in agreement with the quantum speed limit, the strength of the counter term strongly depends on the ramp time \(T_{\rm ramp}\), see Fig. 8(i). While the usual counter term, \(\mathcal{A}_{\omega}\), follows the adiabatic curve described in Fig. 7, using the adjusted counter term \(\mathcal{A}_{\omega}^{\prime}\) allows for state preparation following the naive path \(\omega\), see Fig. 8(ii). However, following \(\omega(t)\) instead of \(\nu(t)\) comes at the expense that the alternative FAGP does not vanish in the adiabatic limit.

Figure 8: **Floquet Counterdiabatic Driving for Chirp in Circularly Driven Two-level System.** _Main Panel:_ Fidelity as a function of ramp time \(T_{\rm ramp}\) for the unassisted protocol (blue dots) and the CD drive (black line), c.f. Eq. (11). _Inset (i):_ Amplitude of the counterdiabatic term measured as the operator norm \(\|\dot{\lambda}\mathcal{A}_{\lambda}\|^{2}=\dot{\lambda}^{2}\operatorname{Tr}(\mathcal{A}_{\lambda}^{2})\), where \(\dot{\lambda}=\dot{\nu}\) or \(\dot{\omega}\) for \(\mathcal{A}\) (black solid) or \(\mathcal{A}^{\prime}\) (green dashed), respectively.
_Inset (ii):_ \(z\)-polarization as a function of time for \(T_{\rm ramp}=A\) for the CD protocols; colors as in _(i)_. Floquet counterdiabatic driving leads to transitionless driving along the adiabatic path. The adjusted FAGP \(\mathcal{A}^{\prime}\) allows for following the instantaneous states along \(\omega\), at the expense of additional finite terms for all ramp times. Parameters are as in Fig. 7.

## Appendix E Details of the Numerical Algorithm

Computing the (approximate) variational adiabatic gauge potential \(\mathcal{X}\), cf. Eq. (10), analytically quickly becomes infeasible for a large number of variational parameters. Therefore, most variational FAGPs presented in the main text are computed numerically. In this section, we describe the algorithm that we use to compute the variational Floquet adiabatic gauge potential numerically and comment on some straightforward extensions of our algorithm.

_General framework for solving the variational principle._ Let us start by recalling the equations that need to be solved. The variational FAGP minimizes the action \[\begin{split} S[\mathcal{X}_{\lambda}]&=\int_{0}^{2\pi}\operatorname{Tr}\Big(\mathcal{G}^{2}(\mathcal{X}_{\lambda}(\phi))\Big)\mathrm{d}\phi\,,\\ \mathcal{G}&=i\big[\mathcal{H}_{\lambda}(\phi),\mathcal{X}_{\lambda}(\phi)\big]+\nu\,\partial_{\phi}\mathcal{X}_{\lambda}(\phi)-\partial_{\lambda}\mathcal{H}_{\lambda}(\phi)\,,\end{split}\] for each parameter value \(\lambda\), see also Eq. (10) in the main text. In a numerical approach, we cannot compute the variational FAGP for all continuous values of \(\lambda\) in the interval \(\lambda\in\big[\lambda_{i},\,\lambda_{f}\big]\) but rather need to restrict to a discrete subset. Moreover, it would be computationally too demanding to compute the variational FAGP on the fly while computing the time evolution, i.e., while solving the time-dependent Schrödinger equation (TDSE). Therefore, we pre-compute the variational FAGP \(\mathcal{X}(\lambda_{n})\) for a given set of discrete parameters \(\lambda_{n}\in\big[\lambda_{i},\,\lambda_{f}\big]\), \(n=1,\ldots,N\). Then, we interpolate the results, \(\mathcal{X}(\lambda_{n})\to\mathcal{X}_{\lambda}^{[N]}\), to obtain a continuous function, which is then used to solve the TDSE. To ensure that our results are sufficiently converged, we assert that the error between the interpolation \(\mathcal{X}_{\lambda}^{[N]}(\lambda_{n^{\prime}})\) and the exact result, evaluated on a set of unseen points \(\lambda_{n^{\prime}}{\neq}\lambda_{n}\), is below some threshold value \(\epsilon\), \(\left\|\left(\mathcal{X}_{\lambda}^{[N]}-\mathcal{X}_{\lambda}\right)(\lambda_{n^{\prime}})\right\|\leq\epsilon\). If the error exceeds the threshold value, we increase the number of points used for interpolation and repeat the procedure until the error is below threshold. For simplicity, we draw points for the evaluation equidistantly in the interval \(\big[\lambda_{i},\,\lambda_{f}\big]\). More sophisticated sampling techniques, like importance sampling or Gaussian process regression, may lead to a significant reduction in computational resources.

Let us now consider how to obtain the variational FAGP for a fixed value \(\lambda\). To this end, we first need to make an ansatz for the variational FAGP. Throughout, we will consider an ansatz given by an operator expansion of the form \[\hat{\mathcal{X}}_{\lambda}=\sum_{n}x_{n,\lambda}\hat{O}_{n}(\phi)\,, \tag{12}\] with variational parameters \(x_{n,\lambda}\) and linearly independent operators \(\hat{O}_{n}(\phi)\) which may carry a periodic time dependence.
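The precompute-and-interpolate loop described above can be sketched as follows. The expensive per-\(\lambda\) solver enters as a callable returning the variational parameters at fixed \(\lambda\); the grid-doubling strategy and the cubic interpolation are assumptions of this sketch.

```python
import numpy as np
from scipy.interpolate import interp1d

def interpolated_fagp(x_of_lambda, lam_i, lam_f, eps=1e-4, N0=8, N_max=1024):
    """Refine an equidistant lambda grid until the interpolation error,
    evaluated on unseen midpoints, drops below eps."""
    N = N0
    while N <= N_max:
        grid = np.linspace(lam_i, lam_f, N)
        vals = np.array([x_of_lambda(l) for l in grid])
        interp = interp1d(grid, vals, kind='cubic', axis=0)
        test = 0.5 * (grid[:-1] + grid[1:])        # unseen midpoints
        err = max(np.linalg.norm(interp(t) - x_of_lambda(t)) for t in test)
        if err <= eps:
            return interp
        N *= 2
    raise RuntimeError("did not converge to requested tolerance")
```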
Later, we consider more concrete examples of operators, but in general one may consider operators that satisfy experimental constraints. In particular, in most experimental setups there are strict constraints on the physical operators which can be implemented, but hardly any constraints on their time dependence. Therefore, a separation into time and physical operators might be useful, i.e., \(\hat{O}_{n}(\phi)\equiv\hat{O}_{n=(kl)}(\phi)=\hat{O}_{k}f_{l}(\phi)\). Note that, since \(\mathcal{G}\) is linear in \(\mathcal{X}_{\lambda}\) and \(S\) is quadratic in \(\mathcal{G}\), the action is quadratic in the variational parameters, leading to a convex optimization problem. Moreover, plugging Eq. (12) into \(\nabla_{\mathbf{x}}S=0\), we find that the optimal parameters are given by \[\mathbf{M}\mathbf{x}=\mathbf{b}\,, \tag{13}\] with \[\begin{split} M_{jk}(\lambda)&=\int\!\!\mathrm{d}\phi\,\mathrm{Tr}\big(g_{j,\lambda}(\phi)g_{k,\lambda}(\phi)\big)\,,\\ b_{j}(\lambda)&=\int\!\!\mathrm{d}\phi\,\mathrm{Tr}\big(g_{j,\lambda}(\phi)\,\partial_{\lambda}\mathcal{H}_{\lambda}(\phi)\big)\,,\\ g_{j,\lambda}(\phi)&=i\big[\mathcal{H}_{\lambda}(\phi),O_{j}(\phi)\big]+\nu\,\partial_{\phi}O_{j}(\phi)\,.\end{split} \tag{14}\] Notice that, for a given Hamiltonian, computing the matrix \(\mathbf{M}\) and vector \(\mathbf{b}\) can be done easily using an appropriate ansatz and exploiting the underlying Hilbert space algebra. Computing the variational FAGP at a fixed parameter \(\lambda\) then only amounts to solving the linear system of equations, Eq. (13), and thus scales only with the number of variational parameters, not with the Hilbert space dimension. However, for a generic system the exact FAGP is expected to be non-local. Thus, to obtain the exact FAGP variationally, the number of variational parameters must be comparable to the Hilbert space dimension.
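A direct numerical implementation of Eqs. (13)-(14) is straightforward; the sketch below assembles \(\mathbf{M}\) and \(\mathbf{b}\) on a phase grid and solves the linear system. The finite-difference \(\phi\)-derivative and the grid size are assumptions, and the factor \(\nu\) in front of the derivative stems from projecting \(i[\nu\mathcal{N},\,\cdot\,]\) to the physical Hilbert space. For the one-operator ansatz of the circularly driven two-level system, the result reproduces the exact FAGP of Eq. (110).

```python
import numpy as np

def variational_fagp(H, dH, ops, nu, n_phi=128):
    """Solve M x = b of Eqs. (13)-(14) for an ansatz given by callables ops."""
    phis = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    def g(j, phi, d=1e-6):
        dO = (ops[j](phi + d) - ops[j](phi - d)) / (2 * d)   # d/dphi O_j
        return 1j * (H(phi) @ ops[j](phi) - ops[j](phi) @ H(phi)) + nu * dO
    K = len(ops)
    M = np.zeros((K, K))
    b = np.zeros(K)
    for phi in phis:
        gs = [g(j, phi) for j in range(K)]
        for j in range(K):
            for k in range(K):
                M[j, k] += np.real(np.trace(gs[j] @ gs[k]))
            b[j] += np.real(np.trace(gs[j] @ dH(phi)))
    return np.linalg.solve(M, b)

# Example: circularly driven two-level system, ansatz O = cos(phi)Sy - sin(phi)Sx;
# the exact coefficient from Eq. (110) is -A/((Delta - w)^2 + A^2).
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])
Sz = 0.5 * np.diag([1.0, -1.0]).astype(complex)
Delta, A, w = 1.0, 0.5, 3.0
H = lambda phi: Delta * Sz + A * (np.cos(phi) * Sx + np.sin(phi) * Sy)
dH = lambda phi: Sz                       # derivative with respect to Delta
ops = [lambda phi: np.cos(phi) * Sy - np.sin(phi) * Sx]
print(variational_fagp(H, dH, ops, nu=w), -A / ((Delta - w) ** 2 + A ** 2))
```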
_Basis Representation._ If one is interested in only one particular Hamiltonian and a single variational ansatz with few parameters, we suggest computing the quantities in Eq. (14) (semi-)analytically for the specific operators and solving the corresponding system of linear equations analytically or numerically. However, if multiple Hamiltonians or ansätze with the same underlying Hilbert space need to be solved, a different approach might be beneficial. As we show below, one can avoid computing the linear system (13) for each individual case by exploiting its linear structure. To this end, let us consider an operator basis \(\{\mathcal{B}_{n}\}_{n}\) of the space of operators on the extended Floquet Hilbert space, \(\mathcal{F}^{2}=\mathcal{H}^{2}\otimes\mathcal{L}_{\circ}^{2}\), with the Hilbert space of physical operators \(\mathcal{H}^{2}\) and the Hilbert space of operators acting on periodic smooth functions \(\mathcal{L}_{\circ}^{2}\). For example, for a spin system of \(L\) spins a complete basis is given by the operators \(\sigma^{\boldsymbol{\mu}}\,e^{in\omega t}\), where \(n\in\mathbb{Z}\) and \(\boldsymbol{\mu}=(\mu_{1},\ldots,\mu_{L})\) with \(\mu_{j}=0,x,y,z\) and Pauli strings \(\sigma^{\boldsymbol{\mu}}=\sigma^{\mu_{1}}\otimes\ldots\otimes\sigma^{\mu_{L}}\). A key insight which we exploit for our derivation is that the commutator \({}_{n}\hat{\mathcal{C}}\equiv i\big[\mathcal{B}_{n},\,\cdot\,\big]:\mathcal{F}^{2}\to\mathcal{F}^{2}\) and the derivative \(\hat{\mathcal{N}}\equiv\partial_{\phi}(\cdot)=i[\mathcal{N},\,\cdot\,]:\mathcal{F}^{2}\to\mathcal{F}^{2}\) are linear (super-)operators acting on the Hilbert space of operators. Thus, we can expand them in the operator basis, \({}_{n}\hat{\mathcal{C}}(\mathcal{B}_{k})=i\big[\mathcal{B}_{n},\mathcal{B}_{k}\big]=\sum_{l}{}_{n}\hat{\mathcal{C}}_{kl}\mathcal{B}_{l}\) and \(\hat{\mathcal{N}}(\mathcal{B}_{k})=\partial_{\phi}\mathcal{B}_{k}=\sum_{l}\hat{\mathcal{N}}_{kl}\mathcal{B}_{l}\), with matrix entries \({}_{n}\hat{\mathcal{C}}_{kl}\) and \(\hat{\mathcal{N}}_{kl}\). Note that computing these matrix elements may seem like a significant overhead. However, depending on the choice of basis, they are directly related to the structure constants of the operator Lie algebra and hence involve little to no computational overhead.

Using the operator basis \(\{\mathcal{B}_{n}\}_{n}\), we can express any Hamiltonian \(\mathcal{H}\) and its derivative \(\partial_{\lambda}\mathcal{H}\) as linear combinations of basis elements, \(\mathcal{H}=\sum_{n}h_{n}\mathcal{B}_{n}\) and \(\partial_{\lambda}\mathcal{H}=\sum_{n}\partial_{\lambda}h_{n}\mathcal{B}_{n}\), with sets of real numbers \(h_{n}\) and \(\partial_{\lambda}h_{n}\), respectively. Likewise, we can choose a variational ansatz of the form \(\mathcal{X}=\sum_{n\in\mathbb{Y}}x_{n}\mathcal{B}_{n}\), which in general only includes a subset of the operator basis, \(\{\mathcal{B}_{n}\}_{n\in\mathbb{Y}}\subset\{\mathcal{B}_{n}\}_{n}\). Then, we can express the operator \(\mathcal{G}\) (10) in the same basis, \[\mathcal{G}(\mathcal{X})=\sum_{j\in\mathbb{Y}}x_{j}\Big(\hat{\mathcal{N}}(\mathcal{B}_{j})+\sum_{m}h_{m}\,{}_{m}\hat{\mathcal{C}}(\mathcal{B}_{j})\Big)-\sum_{n}\partial_{\lambda}h_{n}\mathcal{B}_{n}=\sum_{n}\Big[\sum_{j\in\mathbb{Y}}S_{nj}\,x_{j}-\partial_{\lambda}h_{n}\Big]\mathcal{B}_{n}\,,\] with the matrix \(\mathbf{S}\) with entries \(S_{nj}\equiv\hat{\mathcal{N}}_{jn}+\sum_{m}h_{m}\,{}_{m}\hat{\mathcal{C}}_{jn}\). Thus, the action, Eq. (10), can be expressed as \[S[\mathbf{x}]=\mathbf{x}^{T}\mathbf{S}^{T}\boldsymbol{\eta}\,\mathbf{S}\mathbf{x}-2\,\mathbf{x}^{T}\mathbf{S}^{T}\boldsymbol{\eta}\,\partial_{\lambda}\mathbf{h}+\text{const.}\,, \tag{10}\] where we defined the metric \(\eta_{nm}{=}\int\!\mathrm{d}\phi\operatorname{Tr}(\mathcal{B}_{n}\mathcal{B}_{m})\). Let us emphasize that, in general, \(\mathbf{S}\) is a rectangular matrix: considering the matrix entries \(S_{nj}\), the index \(n\) runs over all indices of the basis \(\{\mathcal{B}_{n}\}_{n}\), but \(j\in\mathbb{Y}\) only. In addition, if the basis is orthonormal, the metric is diagonal, \(\eta_{nm}=\delta_{nm}\). Using the superoperator expressions has the advantage that all model and ansatz dependence enters the variational action (10) simply through the dimension and linear coefficients of the matrix \(\mathbf{S}\) and the vector \(\partial_{\lambda}\mathbf{h}\). Therefore, considering a multitude of ansätze or models can be done at low cost, justifying the possible overhead caused by computing the superoperators \({}_{n}\hat{\mathcal{C}}\) and \(\hat{\mathcal{N}}\). Note that, if \(\mathbf{S}\) is invertible, the minimization simplifies to \[\mathbf{S}\mathbf{x}=\partial_{\lambda}\mathbf{h}\,. \tag{11}\] In general, however, the variational ansatz \(\mathcal{X}\) does not contain all elements of the basis \(\{\mathcal{B}_{n}\}_{n}\), such that \(\mathbf{S}\) is rectangular and hence non-invertible.
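As a minimal illustration of how the superoperator matrix elements follow from structure constants, the sketch below tabulates the commutator superoperators \({}_{a}\hat{\mathcal{C}}\) for a single spin in the Pauli basis, using \([\sigma_{a},\sigma_{b}]=2i\epsilon_{abc}\sigma_{c}\), i.e., \(i[\sigma_{a},\sigma_{b}]=-2\epsilon_{abc}\sigma_{c}\).

```python
import numpy as np

# Levi-Civita tensor for su(2)
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

C = -2.0 * eps      # C[a] is the matrix of i[sigma_a, . ] in the basis (x, y, z)
print(C[0, 1])      # i[sigma_x, sigma_y] = -2 sigma_z  ->  [0, 0, -2]
```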
Therefore, an obvious approximation scheme is to truncate \(\mathbf{S}\) to a square matrix, i.e., to neglect all operators \(\mathcal{B}_{m}\) in \(\mathcal{G}=\sum_{m}\mathcal{G}_{m}\mathcal{B}_{m}\) which do not appear in \(\mathcal{X}\). This also naturally leads to a truncation in the number of harmonics, as discussed in the main text. Such a truncation is, however, not strictly necessary, and in general the matrix \(\mathbf{M}\) and vector \(\mathbf{b}\) in Eq. (13) are obtained from \[\begin{split}\mathbf{M}&=\mathbf{S}^{T}\boldsymbol{\eta}\,\mathbf{S}\,,\\ \mathbf{b}&=\mathbf{S}^{T}\boldsymbol{\eta}\,\partial_{\lambda}\mathbf{h}\,.\end{split} \tag{12}\]

## Appendix F Additional Material for Examples

In this section, we provide additional material for the examples studied in Sec. IV of the main text.

### Linearly Driven Two-Level System

#### F.1.1 High-frequency Expansion

In this section, we briefly support the claim that the lowest-order high-frequency expansion (19), \[H_{F}^{(0)}=\lambda S^{z}+gS^{x}\,,\] is sufficient to describe the linearly driven two-level system (17) in the high-frequency regime. To this end, we compare the numerically computed spectrum of the exact Floquet Hamiltonian with the spectrum obtained from the lowest-order high-frequency expansion (19), see Fig. 9. We find that the exact and high-frequency Floquet quasi-energies agree up to the fourth digit, suggesting that the lowest-order Floquet Hamiltonian describes the system to high accuracy.

#### F.1.2 Analytical Derivation of the Variational FAGP

In this subsection, we derive in detail the analytical variational FAGP reported in the main text, Sec. IV.1. To this end, let us consider the ansatz Eq. (21) and plug it into the action Eq. (12), resulting in \[S[\mathbf{X}]=\mathbf{X}\cdot\mathbf{M}\cdot\mathbf{X}-2\,\mathbf{X}^{T}\mathbf{b}+\text{const.}\,. \tag{13}\] Hence, minimizing the action, \(\delta S=0\), amounts to solving the linear system of equations \[\mathbf{M}\cdot\mathbf{X}=\mathbf{b}\,, \tag{14}\]

Figure 9: **Performance of the high-frequency expansion in the linearly driven two-level system** in the large-frequency regime. _Left Panel:_ Exact (solid line) and lowest-order perturbative (circles) Floquet quasi-energies. _Right Panel:_ Error of the lowest-order perturbative Floquet energies compared to the exact energies. The lowest-order perturbative Floquet Hamiltonian accurately describes the exact Floquet quasi-energies. Parameters are as in Fig. 3.

where \(\mathbf{X}^{T}=(y_{0},\,x_{1},\,y_{1},\,z_{1})\), \(\mathbf{b}^{T}=(2g,\,0,\,A,\,0)\) and \[\mathbf{M}=\begin{pmatrix}2\lambda^{2}+2g^{2}+4A^{2}&0&3Ag&-2A\omega\\ 0&\omega^{2}+\lambda^{2}&2\lambda\omega&-\lambda g\\ 3Ag&2\lambda\omega&g^{2}+\omega^{2}+\lambda^{2}+A^{2}/2&-2g\omega\\ -2A\omega&-\lambda g&-2g\omega&\omega^{2}+\lambda^{2}\end{pmatrix}. \tag{111}\] While the \(4\times 4\) matrix may be inverted analytically using, e.g., Gaussian elimination, the general expression is complicated and not much insight can be gained from it. Also, in general a larger number of variational parameters may be considered, which makes an analytical treatment infeasible. Therefore, we focus on deriving expressions for the variational parameters in the two regimes considered in the main text, i.e., the high-frequency (I) and the one-photon resonance (II) regime.
_(I) High-frequency regime._ In the high-frequency regime, \(\omega\) is the largest energy scale, \(\omega\gg\lambda,\,g,\,A\), such that it is favourable to remove the \(\propto\omega,\,\omega^{2}\) contributions in \(\mathbf{M}\). This is achieved by considering the re-scaling \(\tilde{\mathbf{X}}^{T}=(y_{0},\,x_{1}\omega,\,y_{1}\omega,\,z_{1}\omega)\) and \(\tilde{\mathbf{b}}^{T}=(2g,\,0,\,A/\omega,\,0)\), leading to \[\tilde{\mathbf{M}}=\begin{pmatrix}2\lambda^{2}+2g^{2}+4A^{2}&0&3\frac{Ag}{\omega}&-2A\\ 0&1+\frac{\lambda^{2}}{\omega^{2}}&2\frac{\lambda}{\omega}&-\frac{\lambda g}{\omega^{2}}\\ 3\frac{Ag}{\omega}&2\frac{\lambda}{\omega}&1+\frac{g^{2}+\lambda^{2}+A^{2}/2}{\omega^{2}}&-2\frac{g}{\omega}\\ -2A&-\frac{\lambda g}{\omega^{2}}&-2\frac{g}{\omega}&1+\frac{\lambda^{2}}{\omega^{2}}\end{pmatrix}. \tag{112}\] Taking the limit \(\frac{A}{\omega},\,\frac{g}{\omega},\,\frac{\lambda}{\omega}\to 0\), the linear system Eq. (110) reduces to \[\begin{pmatrix}2\lambda^{2}+2g^{2}+4A^{2}&0&0&-2A\\ 0&1&0&0\\ 0&0&1&0\\ -2A&0&0&1\end{pmatrix}\cdot\tilde{\mathbf{X}}=\begin{pmatrix}2g\\ 0\\ 0\\ 0\end{pmatrix},\] which is solved by \[y_{0}=\frac{g}{\lambda^{2}+g^{2}}\,,\quad x_{1}=0\,,\quad y_{1}=0\,,\quad z_{1}=\frac{2A}{\omega}\frac{g}{\lambda^{2}+g^{2}}\,,\] which is equivalent to the IFE approach Eq. (20) up to \(O(\omega^{0})\).

_(II) One-photon resonance regime._ In the one-photon resonance regime we have \(\lambda\approx\omega\), such that both \(\lambda\) and \(\omega\) are large scales. Therefore, in order to remove the large scales, a re-scaling of the parameters, \(\tilde{\mathbf{X}}^{T}=(y_{0}\omega,\,x_{1}\omega,\,y_{1}\omega,\,z_{1}\omega)\) and \(\tilde{\mathbf{b}}^{T}=(2g/\omega,\,0,\,A/\omega,\,0)\), is needed. Performing the re-scaling and a large-frequency expansion, \(\frac{\lambda}{\omega}\to 1\) and \(\frac{A}{\omega},\,\frac{g}{\omega}\to 0\), in Eq. (112), however, leads to the linear system \[\begin{pmatrix}1&0&0&0\\ 0&1&1&0\\ 0&1&1&0\\ 0&0&0&1\end{pmatrix}\cdot\tilde{\mathbf{X}}=\begin{pmatrix}0\\ 0\\ 0\\ 0\end{pmatrix}.\] Naively, one might assume that the ideal parameters are just given by the trivial choice, i.e., \(\tilde{\mathbf{X}}_{0}=0\). However, as the matrix is singular, also \(\tilde{\mathbf{X}}_{1}=c\,(0,1,-1,0)\) with arbitrary \(c\) is a solution. Thus, the solution to the variational principle is ill-defined. To understand whether \(\tilde{\mathbf{X}}_{0}\) or \(\tilde{\mathbf{X}}_{1}\) is the correct choice of parameters, we consider expanding around these solutions. To this end, we represent the matrix (111) in the basis \(\left\{\tilde{\mathbf{e}}_{1},(\tilde{\mathbf{e}}_{2}+\tilde{\mathbf{e}}_{3})/\sqrt{2},(\tilde{\mathbf{e}}_{2}-\tilde{\mathbf{e}}_{3})/\sqrt{2},\tilde{\mathbf{e}}_{4}\right\}\), where \(\tilde{\mathbf{e}}_{i}\) is the \(i\)'th unit vector. This leads to the modified linear system of equations \[\mathbf{M}^{\prime}\cdot\mathbf{X}^{\prime}=\mathbf{b}^{\prime}\,, \tag{113}\] with \(\mathbf{X}^{\prime}=(y_{0},\chi,\eta,\,z_{1})\), \(\mathbf{b}^{\prime}=(g,-A,A,0)\), and \[\mathbf{M}^{\prime}=\begin{pmatrix}2\lambda^{2}+2g^{2}+4A^{2}&3Ag&3Ag&-2A\omega\\ 3Ag&2\Omega_{+}^{2}+G^{2}&G^{2}&-g\omega-g\Omega_{+}\\ 3Ag&G^{2}&2\Omega_{-}^{2}+G^{2}&-g\omega-g\Omega_{-}\\ -2A\omega&-g\omega-g\Omega_{+}&-g\omega-g\Omega_{-}&\omega^{2}+\lambda^{2}\end{pmatrix}, \tag{114}\] where we defined \(\Omega_{\pm}=\lambda\pm\omega\) and \(G^{2}=g^{2}+A^{2}/2\).
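The full \(4\times 4\) system can of course also be solved numerically and compared with the high-frequency limit; the following sketch does so with illustrative (assumed) parameter values.

```python
import numpy as np

lam, g, A, w = 1.0, 0.8, 0.3, 50.0
M = np.array([
    [2*lam**2 + 2*g**2 + 4*A**2, 0.0,            3*A*g,                          -2*A*w],
    [0.0,                        w**2 + lam**2,  2*lam*w,                        -lam*g],
    [3*A*g,                      2*lam*w,        g**2 + w**2 + lam**2 + A**2/2,  -2*g*w],
    [-2*A*w,                     -lam*g,         -2*g*w,                         w**2 + lam**2],
])
b = np.array([2*g, 0.0, A, 0.0])
y0, x1, y1, z1 = np.linalg.solve(M, b)
# at large w, y0 approaches the high-frequency result g/(lambda^2 + g^2)
print(y0, g / (lam**2 + g**2))
```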
Note that, by definition of the one-photon regime, \(\omega\approx\lambda\), such that \(\Omega_{-}=(\lambda-\omega)\) scales with neither \(\omega\) nor \(\lambda\). Thus, the \(M^{\prime}_{\eta\eta}\) entry in Eq. (114) is a finite, non-diverging contribution even when \(\lambda,\omega\to\infty\) is considered. Therefore, to remove the diverging energy scales when taking the infinite-frequency limit \(\lambda\approx\omega\to\infty\), we should choose the re-scaling \(\mathbf{X}^{\prime}\to\tilde{\mathbf{X}}^{\prime}=(y_{0}\omega,\chi\omega,\eta,\,z_{1}\omega)\). Performing this re-scaling and taking the large-frequency limit in Eq. (114), we eventually arrive at \[\begin{pmatrix}2&0&0&0\\ 0&2\frac{\Omega_{+}^{2}}{\omega^{2}}&0&0\\ 0&0&2\Omega_{-}^{2}+G^{2}&-g\\ 0&0&-g&1\end{pmatrix}\cdot\tilde{\mathbf{X}}^{\prime}=\begin{pmatrix}0\\ 0\\ A\\ 0\end{pmatrix},\] which is solved by \[\mathbf{X}^{\prime}=\left(0,0,\eta,\,\frac{g}{\omega}\eta\right)^{T},\qquad\eta=\frac{2A}{4(\lambda-\omega)^{2}+A^{2}}\,.\] Transforming back to the original basis leads to \[y_{0}=0\,,\quad x_{1}=-\frac{2A}{4(\lambda-\omega)^{2}+A^{2}}\,,\quad y_{1}=+\frac{2A}{4(\lambda-\omega)^{2}+A^{2}}\,,\quad z_{1}=0+O(\omega^{-1})\,,\] which is exactly Eq. (22) in the main text up to \(O(\omega^{-1})\) corrections.

### Floquet Topological Pump Experiment

#### F.2.1 Derivation of the Bloch Hamiltonian

In this section, for completeness, we derive the Bloch Hamiltonian, Eq. (24), from the real-space lab-frame Hamiltonian. Following Ref. [75], the fermionic Hamiltonian in the lab frame and real space is described by \[H_{\text{lab}}(t)=\frac{\hat{p}^{2}}{2m}+V(\hat{x}-x_{0}(t))\,, \tag{101}\] with potential energy \(V(x)=V_{0}\cos^{2}(\pi x/a)\) and lattice constant \(a\). The lattice position is driven by the two-tone frequency drive \[x_{0}(t)=\frac{c_{\text{exp}}}{\omega}\left(K_{1}\cos(\phi)+\frac{1}{2}K_{2}\cos(2\phi+\varphi)\right)\,, \tag{102}\] with phase shift \(\varphi\) and time-dependent phase \(\phi=\omega t\). Transforming to the co-moving frame of the shaken lattice, the Hamiltonian reads \[\hat{H}(t)=\frac{\hat{p}^{2}}{2m}+V(\hat{x})-F(t)\hat{x}\,, \tag{103}\] with inertial force \(F(t)=-m\ddot{x}_{0}(t)\). Using the tight-binding picture, the second-quantized operators read \[\frac{\hat{p}^{2}}{2m}+V(\hat{x})\,\hat{=}\,\sum_{j,\alpha}\left[\epsilon_{\alpha}c_{j,\alpha}^{\dagger}c_{j,\alpha}-\sum_{k}\left(J_{\alpha}^{k}c_{j,\alpha}^{\dagger}c_{j+k,\alpha}+\text{h.c.}\right)\right]\] and \[\frac{\hat{x}}{a}\,\hat{=}\,\sum_{j\alpha}\left\{j\,c_{j,\alpha}^{\dagger}c_{j,\alpha}+\sum_{\beta\neq\alpha}\left[\eta_{sp}^{0}c_{j,\alpha}^{\dagger}c_{j,\beta}+\left(\eta_{sp}^{1}c_{j,\alpha}^{\dagger}c_{j+1,\beta}+\text{h.c.}\right)\right]\right\}\,,\] with \(c_{j,\alpha}^{\dagger}\) (\(c_{j,\alpha}\)) creating (annihilating) a fermion on site \(j\) in the \(\alpha(=s,\,p)\) band.
Transforming into a moving frame with respect to the unitary transformation \(V(t)=\exp\left(iA(t)\sum_{j\alpha}j\,c_{j,\alpha}^{\dagger}c_{j,\alpha}\right)\), with Peierls phase \(A(t)=-\frac{a}{\hbar}\int_{0}^{t}F(s)\,\mathrm{d}s\), the on-site term \(\sum_{j\alpha}j\,c_{j,\alpha}^{\dagger}c_{j,\alpha}\) is rotated away and the inter-site terms transform as \[J_{\alpha}^{k}c_{j,\alpha}^{\dagger}c_{j+k,\alpha} \to e^{-ikA(t)}J_{\alpha}^{k}c_{j,\alpha}^{\dagger}c_{j+k,\alpha}\,,\qquad\eta_{sp}^{1}c_{j,\alpha}^{\dagger}c_{j+1,\beta} \to e^{-iA(t)}\eta_{sp}^{1}c_{j,\alpha}^{\dagger}c_{j+1,\beta}\,.\] Using periodic boundary conditions and transforming to quasi-momentum space, the Hamiltonian in the rotating frame takes the form of Eq. (23): \[\mathcal{H}_{\lambda}(t)=\sum_{q}\mathbf{\Psi}_{q}^{\dagger}\cdot\mathbf{h}_{\lambda}(q,t)\cdot\mathbf{\Psi}_{q}\,,\] where \(\mathbf{\Psi}_{q}^{\dagger}=(c_{p,q}^{\dagger},\ c_{s,q}^{\dagger})\) and \(\mathbf{h}_{\lambda}(q,t)\) is the Bloch Hamiltonian (24): \[\mathbf{h}_{\lambda}(q,t)= \left(\epsilon_{+}+\sum_{j=1}^{3}J_{+}^{j}\cos(jqa-jA)\right)\mathbb{1}-aF(t)\left(\eta_{sp}^{0}+2\eta_{sp}^{1}\cos(qa-A)\right)\sigma^{x}+\left(\epsilon_{-}-2\sum_{j=1}^{3}J_{-}^{j}\cos(jqa-jA)\right)\sigma^{z}\,,\] with \[F(t) =\omega K_{1}\cos(\omega t)+2\omega K_{2}\cos(2\omega t+\varphi)\,, \tag{104}\] \[A(t) =K_{1}\sin(\omega t)+2K_{2}\sin(2\omega t+\varphi)\,.\] For the state preparation scheme, we use Eq. (104) as the definition of the force \(F(t)\) and Peierls phase \(A(t)\), i.e., also upon introducing additional time dependencies, e.g., \(\omega=\omega(t)\). However, in general they are related to the position displacement \(x_{0}(t)\), which is the physical quantity modulated in the experiment. Therefore, in the real experimental setup, the force and Peierls phase may pick up additional terms during the state preparation protocol, as they are connected to derivatives of the position displacement, \(F\propto\ddot{x}_{0}\) and \(A\propto\dot{x}_{0}\), respectively. For simplicity, these additional contributions are neglected in our analysis.

#### F.2.2 Detecting photon resonances with the variational Floquet adiabatic gauge potential

In this section, we exemplify on the Floquet band model how the variational FAGP can be used to detect unwanted photon resonances. Moreover, exploiting the simplicity of the model, we are also able to remove the photon resonance by adjusting the parameters of the Floquet band model.

_General Approach._ Let us first present a general approach that allows for the detection of photon resonances. The first insight enabling the detection scheme is the sensitivity of the adiabatic gauge potential to the presence of avoided crossings. In particular, diabatic transitions only occur near avoided level crossings, where the diabatic states strongly hybridize. Thus, the AGP takes its largest values near avoided level crossings and almost vanishes far away from them. For true level crossings, however, the diabatic levels do not hybridize, such that the AGP usually vanishes in their vicinity. Secondly, let us recall that the high-frequency expansion is not susceptible to the presence of photon resonances. In contrast, the variational FAGP is non-perturbative and thus resolves the photon resonances.
Therefore, a photon resonance can be detected by its distinctly different responses for the two FAGPs: a photon resonance is present if the variational FAGP indicates an avoided level crossing, i.e., takes a large value in some parameter regime, while the IFE FAGP takes a small value in the same regime.

| Parameter | Experiment [75] | Adjusted (this work) |
|---|---|---|
| \(\epsilon_{s}\) | \(7.523\) | \(8.639\) |
| \(\epsilon_{p}\) | \(20.586\) | \(23.639\) |
| \(J_{s}^{1}\) | \(0.378\) | \(0.252\) |
| \(J_{s}^{2}\) | \(-2.620\cdot 10^{-2}\) | \(-2.620\cdot 10^{-2}\) |
| \(J_{s}^{3}\) | \(2.630\cdot 10^{-3}\) | \(2.630\cdot 10^{-3}\) |
| \(J_{p}^{1}\) | \(-2.037\) | \(-1.358\) |
| \(J_{p}^{2}\) | \(-0.357\) | \(-0.357\) |
| \(J_{p}^{3}\) | \(-0.154\) | \(-0.154\) |
| \(\eta_{sp}^{0}\) | \(0.184\) | \(0.184\) |
| \(\eta_{sp}^{1}\) | \(-0.059\) | \(-0.059\) |

Table 1: **Parameters for Floquet Topological Pump.** Listed are the values of the parameters in Eq. (24) used in the experiment, as well as the adjusted parameters which we use in this work to avoid the photon resonance, see also Sec. F.2.2. The on-site energies \(\epsilon_{\alpha}\) and tunnelings \(J_{\alpha}^{j}\) are given in kHz; the couplings \(\eta_{sp}^{0,1}\) are dimensionless.

This simple approach may break down for many-body systems, where many avoided level crossings of different levels may appear for the same parameter value. However, if one has access to the perturbative Floquet energies and eigenstates, one can always restrict the analysis to a subspace of candidate states where a resonance might occur; that is, those states which are separated by multiples of the driving frequency, such that photon absorption processes become resonant.

_Example: Floquet Topological Pump Setup._ For the example of the Floquet topological pump setup studied in the main text, Sec. IV.2, we use a slightly different approach. In Ref. [75], the perturbative high-frequency Floquet Bloch bands have already been studied extensively. Therefore, we can use those results as a starting point for our analysis. In particular, the perturbative analysis suggests that during the frequency chirp the quasi-energy bands hybridize starting at the edge of the Brillouin zone (\(q\approx\pm\frac{\pi}{a}\)). All other crossings caused by the folding of the energies, see Fig. 4C, into a quasi-energy Brillouin zone do not lead to hybridization. Considering the exact instantaneous quasi-energies during the chirp seems to support this picture, see Fig. 10A. Also, a cut at short times indicates hybridization only at the edge of the Brillouin zone and otherwise no change of the trivial \(s\) and \(p\) bands, see Fig. 10B. To distinguish level crossings from avoided level crossings with small gaps, without the need to compute the quasi-energy spectrum at high resolution, we consider the numerically computed non-perturbative variational FAGP (26), see Fig. 10C. At small times \(t\approx 0\), we observe a strong response of the variational FAGP close to the edge of the Brillouin zone, in agreement with the perturbative analysis. However, we also observe a strong contribution close to the center of the Brillouin zone, see Fig. 10C, suggesting another avoided crossing. In fact, by drastically increasing the quasi-momentum resolution, we can observe the avoided crossing also in the quasi-energy spectrum, see Fig. 10D.
Notice that this additional photon resonance is not important for the actual experiment, as it is passed diabatically with almost unit fidelity, the gap being much smaller than the velocity used in the experiment.

_Avoiding photon resonances._ Combining the perturbative Floquet Hamiltonian approach and the insight gained from the variational FAGP, we can remove the photon resonance. Note that the hybridization of the energy levels at the edge of the Brillouin zone is itself engineered using photon resonances. However, this can be accounted for by transforming to a suitable rotating frame, as we describe below. During the chirp, the energy splitting \(\epsilon_{-}\) is comparable to the driving frequency, \(\epsilon_{-}\approx\omega\). By transforming to a rotating frame with respect to \(V=\exp(-i\omega\sigma^{z}t)\) we can remove this resonance: thereby, \(\epsilon_{-}\) is replaced by \(\Delta=\epsilon_{-}-\omega\), and likewise \(\sigma^{x}\) by \(\cos(\omega t)\sigma^{x}+\sin(\omega t)\sigma^{y}\) in the Bloch Hamiltonian (24). Ignoring any additional resonances, the level splitting in the high-frequency expansion of the Floquet Hamiltonian is to lowest order described by \(\Lambda\sigma^{z}\subset\mathbf{h}_{F}^{\mathrm{rot},(0)}\), with effective level splitting \(\Lambda=\Delta-2\sum_{k}J_{-}^{k}\cos(kqa)\). Due to the rotating-frame transformation, \(\Delta=\Delta(\omega)\) is no longer resonant with \(\omega\), such that one might expect no further photon resonances to occur. However, taking a closer look at the full expression \(\Lambda=\Lambda(\omega,q)\), there exist \(\omega^{\star}\) and \(q^{\star}\) values such that \[|\Lambda|(\omega^{\star},q^{\star})=\omega^{\star}\,, \tag{100}\] indicating the presence of additional photon resonances.

Figure 10: **Additional resonances in the state preparation protocol** for the topological Floquet pump setup. **(A)** Quasi-momentum-resolved quasi-energy gap \(\Delta E_{F}\) normalized by the instantaneous frequency \(\nu(t)\) for the frequency chirp presented in the main text. The color map reflects the equivalence of the gaps \(\Delta E_{F}\) and \(\nu-\Delta E_{F}\) for quasi-energies. **(B)** Quasi-energy bands at \(t=0.1\,\mathrm{ms}\) (see dashed line in A); color indicates the overlap with the initial \(s\) (blue) and \(p\) (red) band. Due to the folding of the bands, level crossings and avoided level crossings occur between the \(s\) and \(p\) band. Avoided crossings with small energy gaps are indistinguishable from level crossings when considering only the quasi-energies. **(C)** \(L^{\infty}\) norm \(\|\mathcal{X}\|_{\infty}\) of the extended Floquet Hilbert space variational FAGP \(\mathcal{X}\). The cyan dashed line indicates the photon resonance computed via Eq. (100). Using the FAGP, the level crossings and avoided level crossings are clearly distinguishable. **(D)** Zoom into (B). In agreement with the strong response of the FAGP around \(t=0\), \(q=0.25\,\frac{\pi}{a}\), we can observe an avoided level crossing with a small gap in the quasi-energies. **(E)** and **(F)**: same as **(A)** and **(B)** for the adjusted parameters, respectively. Using the adjusted parameters, we can remove the additional photon resonance. Experimental parameters, see Table 1, are used for A-D; other parameters are as in Fig. 5.

In fact, the photon resonance locations obtained from the matching condition (100) agree well with the avoided gap closings encountered in the FAGP, see Fig. 10C. To avoid the additional photon resonance, we adjust the parameters in the model such that during the ramps the matching condition (100) cannot be satisfied.
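A scan of the matching condition (100) can be sketched as follows; the band parameters \(\epsilon_{-}\) and \(J_{-}^{k}\) used here are illustrative assumptions, not the Table 1 values.

```python
import numpy as np

eps_minus, J_minus, a = 6.5, [-1.2, -0.17, -0.08], 1.0
q = np.linspace(-np.pi / a, np.pi / a, 801)

def Lambda(w):
    # effective level splitting Lambda = (eps_- - w) - 2 sum_k J_-^k cos(k q a)
    return (eps_minus - w) - 2 * sum(
        J * np.cos((k + 1) * q * a) for k, J in enumerate(J_minus))

for w in np.linspace(2.0, 10.0, 1601):
    mismatch = np.abs(np.abs(Lambda(w)) - w)   # zero at a photon resonance
    if mismatch.min() < 5e-3:
        print(f"resonance near omega = {w:.3f}, q = {q[np.argmin(mismatch)]:.3f}")
        break
```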
In addition, we must preserve the topological properties of the model. To this end, it suffices to ensure that \[\begin{split}\left|\epsilon_{-}-\omega_{\text{i}}\right|&>2\sum_{k}\left|J_{-}^{k}\right|\,,\\ \left|\epsilon_{-}-\omega_{\text{f}}\right|&<2\sum_{k}\left|J_{-}^{k}\right|\,,\end{split} \tag{110}\] where \(\omega_{\text{i}}\) and \(\omega_{\text{f}}\) refer to the frequencies at the beginning and end of the protocol, respectively. Let us emphasize that these conditions are not sufficient to uniquely determine the parameters of the model. In Tab. 1, we present a possible choice of parameters, which we used in the main text.

_Summary._ We demonstrated that the non-perturbative character of the variational FAGP allows us to detect photon resonances. In fact, for the simple example of the Floquet band model, we have even been able to adjust the parameters to avoid the detected photon resonances. This is a key ingredient for the high-fidelity FCD state preparation scheme presented in the main text. In addition, this example demonstrates that the adiabatic gauge potential can be used to distinguish level crossings from avoided level crossings with small gaps, without the need to resolve the small gap, for both static and Floquet systems.

### Details of the Many-Body System

_Trace vs. groundstate expectation value._ In the main text, Sec. IV.3, we presented results for l-FCD protocols using the groundstate expectation value instead of the trace in the variational principle (10). Here, we briefly compare the performance of the two approaches. In Fig. 11, we show the performance of both methods compared to the unassisted protocols presented in the main text. The l-FCD protocol using the trace norm results in an increase in fidelity compared to the unassisted protocol, see lower panel of Fig. 11. However, the increase is negligible compared to the increase in fidelity achieved by the l-FCD using the groundstate norm, see lower panel of Fig. 11. This can be understood by taking a closer look at the variational parameters obtained for both methods, upper panel of Fig. 11. We find that the groundstate l-FCD has a notably stronger counter term than the trace l-FCD, explaining the reduced performance of the latter.

_Approximate groundstate expectation value._ In the previous paragraph, we demonstrated that using the (Floquet)

Figure 11: **Groundstate expectation value vs. trace l-FCD** for the circularly driven transverse field Ising model (28). _Upper Panel:_ Numerically computed variational parameter \(\chi(t)\), see Eq. (30), for l-FCD using the Floquet groundstate (GS) expectation value (green triangles) and the trace (yellow pluses). The groundstate l-FCD shows a stronger response than the trace l-FCD. _Lower Panel:_ Instantaneous fidelities \(F(t)\) for the unassisted non-driven (gray circles) and driven (blue squares) protocols, and the groundstate l-FCD (green triangles) and trace l-FCD (yellow pluses) assisted Floquet protocols. Note the log scale of the \(y\)-axis. The trace l-FCD protocol leads to a small increase in fidelity compared to the unassisted protocol. However, the increase in fidelity is considerably larger for the groundstate l-FCD. We use \(JT_{\text{ramp}}=2\); other parameters as in Fig. 6.

Figure 12: **Approximate groundstate l-FCD** for the circularly driven transverse field Ising model (28).
**(A)** Deviation, \((\chi_{L}(t)-\chi_{14}(t))\), of the variational parameter \(\chi_{L}(t)\) for different system sizes \(L\) from the variational parameter obtained for \(L=14\equiv L_{\text{comp}}\). _Inset:_ Variational parameters for different system sizes \(L\). The l-FCD protocol hardly changes as a function of system size. **(B)** Final fidelities \(F\) for different system sizes for the unassisted driven (blue squares) and l-FCD (green triangles) protocols, as shown in Fig. 6, and the approximate l-FCD for \(L_{\text{comp}}=14\), as described in Sec. F.3. For this example, computing the l-FCD for a small system size is sufficient to yield a high-fidelity l-FCD protocol for all other considered system sizes. Other parameters are as in Fig. 6.

groundstate expectation value in the l-FCD variational principle leads to a notable improvement in fidelity compared to using the trace. However, in general computing the Floquet groundstate requires constructing the exact Floquet Hamiltonian and computing its eigenstates. Therefore, computing the Floquet groundstate and evaluating the action (10) is computationally demanding and can in general only be done for small system sizes. A possible approximation scheme that reduces the computational cost is to compute the l-FCD protocol for some computationally tractable system size \(L_{\text{comp}}\) and then apply it to any target system size \(L_{\text{target}}\). Using this scheme inevitably leads to a degradation in fidelity compared to directly computing the protocol for the target system size, \(L_{\text{comp}}=L_{\text{target}}\); however, it allows us to access arbitrary system sizes. Let us emphasize that, in order to reach system sizes of up to \(L=24\) spins, as presented in the main text, we need to exploit two properties specific to this model: (i) we can exactly write down the Floquet Hamiltonian, sparing us from computing it numerically, and (ii) the parity and translational symmetries of the model reduce the effective Hilbert space dimension notably. While (ii) may hold in more general scenarios, (i) is a fine-tuned scenario, such that in general the Floquet Hamiltonian must be computed numerically, putting serious restrictions on the achievable system size. Therefore, we consider the proposed approximation scheme for \(L_{\text{comp}}=14\), a system size where computing the Floquet Hamiltonian is numerically tractable. We present results for this approximate groundstate l-FCD in Fig. 12. Notably, the l-FCD protocol, characterized by the variational parameter \(\chi_{L}(t)\) in Eq. (11), hardly varies for different system sizes \(L=10,\dots,24\), see Fig. 12A. This suggests that the protocol obtained for some small system size \(L_{\text{comp}}\) also performs well for other system sizes. In fact, we find no notable difference in the final fidelity between the approximate and exact groundstate l-FCD protocols, see Fig. 12B. In summary, using the approximate groundstate l-FCD enables access to high-fidelity counterdiabatic protocols at low computational cost. However, the performance of such approximations in general depends strongly on the considered model.

_Protocol dependence._ Finally, let us emphasize that the choice of protocol for the non-equilibrium drive can also have a significant impact on the performance of the unassisted and l-FCD protocols.
For the many-body model, obvious choices of parameters controlling the protocol are the maximum extent in the \(z\)-direction, determined by the range \(\omega_{\text{max}}\) of the frequency \(\omega\in[0,\ \omega_{\text{max}}]\), and the fundamental protocol \(\lambda(t)\). We demonstrate the dependence of the performance of the state preparation on these parameters of the protocol in Fig. 13. First, let us focus on the dependence on the fundamental drive protocol \(\lambda(t)\), see Fig. 13A. In particular, we compare the cubic ramp (25), used in the main text, with the quartic ramp \[\lambda(t)=x(t)^{4} \tag{12}\] and the quadratic-to-cubic ramp \[\lambda(t)=(t_{\text{f}}-t_{\text{i}})\left[\frac{x(t)^{4}}{4}-\frac{2x(t)^{3}}{3}+\frac{x(t)^{2}}{2}\right] \tag{13}\] where \(x=(t-t_{\text{i}})/T_{\text{ramp}}\). The quadratic-to-cubic drive (13) is an adjusted quartic drive with \(\dot{\lambda}(t)=x(1-x)^{2}\), ensuring that \(\nu(t)\) is quadratic around \(t=t_{\text{i}}\) and cubic around \(t=t_{\text{f}}\). The choice of drive can impact the final fidelity for both unassisted and l-FCD protocols, see Fig. 13A. However, for the chosen protocols, the change in final fidelity is negligible for the l-FCD protocol. Second, we consider different values \(\omega_{\text{max}}=0.1J,\ \dots,\ 2J\), see Fig. 13B. Again, for l-FCD protocols, the dependence on the details of the protocol is less dominant than for the unassisted protocol. Notably, for l-FCD the optimal protocol is reached around \(\omega_{\text{max}}=0.2J\). In contrast, for the unassisted protocol the optimum is attained around \(\omega_{\text{max}}=1.4J\), see Fig. 13B. Therefore, the optimization of such hyperparameters should be performed with respect to the l-FCD protocol in order to yield the highest-fidelity protocol. In summary, if one considers approximate l-FCD state preparation protocols, the performance may depend on the details of the respective unassisted protocol. Therefore, to find an optimal driving protocol, an optimization of the protocol should be performed. Figure 13: **Protocol Dependence of state preparation** for circularly driven transverse field Ising model (28). **(A)** Dependence on the protocol \(\lambda(t)\) with \(A(t)/J=[10-9.5\times\lambda(t)]\) and \(\omega(t)=[\omega_{\text{max}}\times\sin(\pi\lambda(t))]\) for the cubic (25) (blue), quartic (12) (orange), and quadratic-to-cubic (13) (green) drives. (i) Protocols in the \(h_{x}\)-\(h_{z}\) plane, (ii) fidelity of the protocols. **(B)** Dependence on \(\omega_{\text{max}}\) with (i) protocols in the \(h_{x}\)-\(h_{z}\) plane, (ii) corresponding fidelity of the protocols. Data is obtained for a chain of length \(L=14\) and all other parameters are as in Fig. 6, if not stated otherwise.
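Since Eqs. (12) and (13) fully specify the alternative ramps, their defining property can be checked directly. Below is a minimal Python sketch (our own illustration, not code from the paper) that implements both drives, assuming \(T_{\text{ramp}}=t_{\text{f}}-t_{\text{i}}\), and numerically verifies that the quadratic-to-cubic ramp satisfies \(\dot{\lambda}(t)=x(1-x)^{2}\); the cubic ramp (25) is defined in the main text and is not reproduced here.

```python
import numpy as np

def quartic_ramp(t, t_i, t_f):
    """Quartic drive of Eq. (12): lambda(t) = x(t)^4."""
    x = (t - t_i) / (t_f - t_i)
    return x**4

def quad_to_cubic_ramp(t, t_i, t_f):
    """Quadratic-to-cubic drive of Eq. (13)."""
    x = (t - t_i) / (t_f - t_i)
    return (t_f - t_i) * (x**4 / 4 - 2 * x**3 / 3 + x**2 / 2)

# Verify d(lambda)/dt = x (1 - x)^2 for Eq. (13) with T_ramp = t_f - t_i:
# quadratic growth near t_i and cubic flattening near t_f.
t_i, t_f = 0.0, 2.0
t = np.linspace(t_i, t_f, 2001)
x = (t - t_i) / (t_f - t_i)
lam_dot = np.gradient(quad_to_cubic_ramp(t, t_i, t_f), t)
assert np.allclose(lam_dot[1:-1], (x * (1 - x) ** 2)[1:-1], atol=1e-4)
```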
2303.02610
HyperPose: Camera Pose Localization using Attention Hypernetworks
In this study, we propose the use of attention hypernetworks in camera pose localization. The dynamic nature of natural scenes, including changes in environment, perspective, and lighting, creates an inherent domain gap between the training and test sets that limits the accuracy of contemporary localization networks. To overcome this issue, we suggest a camera pose regressor that integrates a hypernetwork. During inference, the hypernetwork generates adaptive weights for the localization regression heads based on the input image, effectively reducing the domain gap. We also suggest the use of a Transformer-Encoder as the hypernetwork, instead of the common multilayer perceptron, to derive an attention hypernetwork. The proposed approach achieves superior results compared to state-of-the-art methods on contemporary datasets. To the best of our knowledge, this is the first instance of using hypernetworks in camera pose regression, as well as using Transformer-Encoders as hypernetworks. We make our code publicly available.
Ron Ferens, Yosi Keller
2023-03-05T08:45:50Z
http://arxiv.org/abs/2303.02610v1
# HyperPose: Camera Pose Localization using Attention Hypernetworks ###### Abstract In this study, we propose the use of attention hypernetworks in camera pose localization. The dynamic nature of natural scenes, including changes in environment, perspective, and lighting, creates an inherent domain gap between the training and test sets that limits the accuracy of contemporary localization networks. To overcome this issue, we suggest a camera pose regressor that integrates a hypernetwork. During inference, the hypernetwork generates adaptive weights for the localization regression heads based on the input image, effectively reducing the domain gap. We also suggest the use of a Transformer-Encoder as the hypernetwork, instead of the common multilayer perceptron, to derive an attention hypernetwork. The proposed approach achieves superior results compared to state-of-the-art methods on contemporary datasets. To the best of our knowledge, this is the first instance of using hypernetworks in camera pose regression, as well as using Transformer-Encoders as hypernetworks. We make our code publicly available1. Footnote 1: [https://anonymous.4open.science/r/hyperpose-2023/](https://anonymous.4open.science/r/hyperpose-2023/) ## 1 Introduction In numerous computer vision applications such as indoor navigation, augmented reality, and autonomous driving, it is essential to determine the position and orientation of a camera using a query image. Current camera localization techniques show various trade-offs between accuracy, runtime, and memory usage. The most precise methods [2, 3, 28, 35, 27] involve hierarchical localization pipelines, which commonly use a coarse-to-fine processing flow that begins with image retrieval to perform an initial localization based on images similar to the query image, followed by local feature extraction and matching. To estimate the camera's 6-Degrees-Of-Freedom (6DoF) pose accurately, the resulting 2D-2D matches are mapped to 3D correspondences in the scene's point cloud and then used to determine the camera pose through Perspective-n-Point (PnP) and RANSAC. Another approach, absolute pose regressors (APRs), can estimate the camera pose with a single forward pass using only the query image, resulting in significantly lower latency, albeit notably less accurate. Additionally, due to their low memory footprint, APRs can be deployed as a standalone application on edge devices with limited computational resources. For a survey of visual camera pose localization, refer to [44, 30]. PoseNet by Kendall et al. [17] was the first APR approach, using a convolutional backbone and a multilayer perceptron (MLP) head to regress the camera's position and orientation. This approach was simple and able to run in 5 ms, leading to a multitude of APR methods that improve accuracy by modifying the backbone and MLP architectures [21, 22, 40, 42, 31, 41, 6], as well as by exploring alternative loss functions and optimization strategies [15, 16, 30]. Indoor and outdoor environments present dynamic scenarios characterized by varying lighting conditions, viewing angles, and moving objects. Thus, fitting a single, global set of parameters to these changing circumstances might limit the network's adaptability and result in reduced accuracy. Hypernetworks [14] are a deep learning architecture where an auxiliary network, the hypernetwork, generates the weights for the primary network (main network) based on the current input and the downstream task.
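As a minimal illustration of this idea (our own sketch, not the architecture proposed in this paper), the following PyTorch snippet lets an auxiliary MLP predict the weights and bias of a single linear layer of a main network from a per-input embedding; all names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    """A linear layer whose weights are predicted per input by an
    auxiliary hypernetwork (an MLP here)."""

    def __init__(self, embed_dim: int, in_dim: int, out_dim: int):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # The hypernetwork maps a conditioning embedding to the flattened
        # weight matrix and bias of the main network's layer.
        self.hyper = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim * in_dim + out_dim),
        )

    def forward(self, z: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # z: (B, embed_dim) conditioning embedding; x: (B, in_dim) features.
        p = self.hyper(z)
        w = p[:, : self.out_dim * self.in_dim].view(-1, self.out_dim, self.in_dim)
        b = p[:, self.out_dim * self.in_dim:]
        # Per-sample affine map y_i = W_i x_i + b_i; gradients flow through
        # the generated weights, so both networks train end to end.
        return torch.bmm(w, x.unsqueeze(-1)).squeeze(-1) + b

layer = HyperLinear(embed_dim=32, in_dim=16, out_dim=3)
y = layer(torch.randn(4, 32), torch.randn(4, 16))  # y: (4, 3)
```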
The hypernetwork allows for dynamic and adaptive weight modulation of the main network, leading to improved flexibility in response to changing conditions [23, 9]. Both the main network and the hypernetwork are trained in a unified, end-to-end manner through the use of backpropagation-based optimization. Our approach differs from traditional Absolute Pose Regression (APR) methods that rely on a single global image encoding for estimating the 6DoF pose of a camera [17, 15, 40, 31, 6, 32]. Instead, we use a dual attention-based approach that simultaneously regresses both translation and orientation, providing an adaptive analysis of image content. The spatial content of images contains both informative and uninformative cues, which are effectively handled by our approach. Furthermore, we introduce a novel attention hypernetwork formulation that utilizes Transformer-Encoders as hypernetworks to output coefficients for regression heads based on the input query image. Specifically, the camera pose regression is divided into two separate sequence-to-one problems. Intermediate activation maps are processed by two separate Transformer-Encoders for translation and rotation, as illustrated in Figure 2. The activation maps are also fed into the relevant attention hypernetwork, which computes the weights for the localization and orientation heads. These predicted weights, combined with the output from the encoders, allow us to perform the regression of the corresponding localization attributes. This approach allows for an adaptive analysis of the image content, enabling the regression weights to focus on informative features for each attribute, rather than relying on a single global image representation. The use of Transformer-Encoders as hypernetworks is a novel formulation that provides improved performance in the regression tasks, compared to traditional fully-connected layers or other neural network architectures. The proposed dual attention-based architecture results in a more robust and accurate model for camera pose regression. In summary, our research offers three key contributions: * An absolute pose regression approach using hypernetworks that is experimentally shown to adapt to varying appearance and lighting conditions. * We propose to use a Transformer-Encoder, instead of the common MLP, as an attention hypernetwork to learn the regression head parameters. * The proposed scheme achieves new state-of-the-art APR accuracy across outdoor and indoor localization benchmarks. Figure 1: Camera pose localization using hypernetworks. (a) A hypernetwork consisting of a multilayer perceptron computing the weights of the regression head. (b) The regression weights are computed by an attention-hypernetwork consisting of a Transformer-Encoder. ## 2 Related Work ### Camera Pose Camera pose estimation techniques differ by their input during inference and algorithmic attributes. **3D-based or structure-based localization** methods are considered to be the most accurate for camera pose estimation and have demonstrated state-of-the-art performance on leading benchmarks. These methods utilize correspondences between 2D features in an image and 3D world coordinates. A two-phase approach was introduced by [12, 27, 24, 25] for hierarchical pose estimation pipelines. In this process, each query image is first encoded using a CNN trained for image retrieval (IR) based on a pre-mapped image dataset. Tentative correspondences are then estimated by matching local image features to 3D matches.
The resulting matches are then used by PnP-RANSAC to estimate the camera pose. DSAC [2] and its follow-up scheme, DSAC++ [3], utilize a CNN architecture to directly estimate the 3D coordinates from 2D positions in an image and only require the query image during inference. As in absolute pose regression (APR) methods, separate models must be trained for each scene; these schemes achieve state-of-the-art accuracy. **Relative Pose Regression** (RPR) algorithms estimate the relative pose between an input query image and a set of reference images based on their known location. Calculating the relative pose involves the estimation of the relative position and orientation between the query image and the reference images [47, 4]. An Image Retrieval (IR) approach is utilized to determine the closest set of neighbor images, and the relative motion is then estimated between the query image and each of the retrieved images. The estimated relative poses are utilized, along with the known position of the reference images, to estimate the camera's absolute pose. Similarly to 3D and structure-based localization methods, RPRs involve a multistep process and require access to a database of images with labeled poses during the inference phase. **Absolute Pose Regression** (APR) methods directly estimate the camera pose given the input query image. APRs were first introduced by Kendall et al. [17], using a modified version of the truncated GoogLeNet architecture, where the Softmax classification layer was replaced with a series of fully connected layers that output the pose estimate. Although such approaches are less accurate than classical structure-based techniques, they offer several advantages, such as faster inference times (milliseconds), a reduced memory footprint (megabytes), and no need for feature engineering. Variations in the encoder and MLP architectures have been proposed to enhance APR accuracy. Furthermore, modifications to the loss function have been suggested to improve the balance between orientation and position objectives [16] or to incorporate other types of information [5]. For an in-depth examination of the various architectural designs and loss functions utilized in camera pose regression, please refer to [30]. ### Transformers Originally developed for machine translation [38], Transformers have since become a state-of-the-art architecture in natural language processing. Recent studies have also shown the success of using Transformers to encode visual data [11]. Our work builds upon these studies by demonstrating the effectiveness of Transformers in incorporating the spatial information needed for accurate pose localization. In particular, to the best of our knowledge, we are the first to propose Transformer-Encoders as an efficient hypernetwork that adapts the weights of the regression heads based on the input query image. ### Hypernetworks Hypernetworks were first presented by Ha et al. [14]. These are neural networks that have been designed to predict the weights of a primary network. The primary network's weights can be adjusted based on specific inputs, resulting in a more expressive and adaptive model. Hypernetworks have been used in various applications such as learning implicit neural representations [9], semantic segmentation [23], 3D scene representation and modeling [19, 33, 34], and continual learning [39], to name a few.
## 3 Camera Pose Localization using Attention Hypernetworks The proposed localization network consists of two main parts: the main network (APR), which is designed to estimate the camera's absolute pose, and the hypernetwork, which predicts the regression weights of the main network based on the input query image. The overall network is illustrated in Fig. 2, where the APR performs a forward pass on the input image to localize the capturing camera. The camera pose is represented as the tuple \(<\mathbf{x},\mathbf{q}>\), where \(\mathbf{x}\in\mathbb{R}^{3}\) is the camera's position in world coordinates and \(\mathbf{q}\in\mathbb{R}^{4}\) is the quaternion encoding of its 3D orientation. Our method utilizes separate positional and orientational Transformer-Encoders to effectively integrate the flattened intermediate activation maps generated by a convolutional backbone. Following [8], we formulate the hypernetwork \(H\) as a parametric function \(\theta_{x}=H(\theta,\mathbf{x})\), where \(\theta\) represents the parameters of the hypernetwork, \(\mathbf{x}\) the input activation maps, and \(\theta_{x}\) the parameters of the regression head layers for the image \(\mathbf{x}\). Figure 2: The proposed method consists of a dual-branch structure in the main network and an accompanying hypernetwork. The Transformer-Encoders in the main network sample activation maps generated by the convolutional backbone. In parallel, the hypernetwork generates a set of weights for the regression head of the main network, based on the input query image. The resulting adaptive weights, combined with the generalized latent vectors produced by each encoder in the main network, are utilized to estimate the camera pose components, either position (x) or orientation (q). ### Network Architecture In line with Fig. 2, the input query image \(I_{q}\) is processed by a convolutional backbone that calculates the intermediate activation maps of two different layers. These activation maps are then utilized as inputs by both the main network's and hypernetwork's Transformer-Encoders. The hypernetwork's encoders convert the activation maps into latent vectors through their processing. These latent vectors are then passed through multilayer perceptrons (MLPs) to generate the weights for the translation and orientation regression heads. Simultaneously, the positional and orientational encoders within the main network process the activation maps and integrate sequential representations into single latent vectors. These latent vectors are used to estimate camera position and orientation, using the weights \(\theta_{x}\) generated by the hypernetwork \(H\). #### 3.1.1 Main Network The main network in the HyperPose architecture is composed of a convolutional backbone and two separate branches for regressing the camera's position and orientation. Each branch includes its own Transformer-Encoder and a multilayer perceptron (MLP) head. Studies have demonstrated that substituting the patchify stem, utilized in early visual processing of a Vision Transformer (ViT), with a conventional convolutional stem enhances optimization stability and improves peak performance [43]. Hence, given an input query image \(I_{q}\in\mathbb{R}^{H_{q}\times W_{q}\times C_{q}}\), a shared convolutional backbone generates activation maps. Two different activation maps are selected from the CNN based on the particular learned task.
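As a sketch of this step, the snippet below extracts two intermediate EfficientNet-B0 activation maps with timm, one common open-source implementation (the paper uses its own open-source version, cited later as [20]); `out_indices=(2, 3)` selects the stride-8 and stride-16 endpoints, whose shapes match the \(28\times 28\times 40\) and \(14\times 14\times 112\) maps reported in the implementation details.

```python
import timm
import torch

# features_only returns the requested intermediate feature maps instead of
# classification logits; pretrained=False keeps the sketch self-contained.
backbone = timm.create_model(
    "efficientnet_b0", pretrained=False, features_only=True, out_indices=(2, 3)
)
img = torch.randn(1, 3, 224, 224)
m_rdc3, m_rdc4 = backbone(img)
print(m_rdc3.shape, m_rdc4.shape)  # (1, 40, 28, 28) and (1, 112, 14, 14)
```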
The activation maps, represented as \(M\in\mathbb{R}^{H_{M}\times W_{M}\times C_{M}^{In}}\), are processed and transformed into \(\tilde{M}\in\mathbb{R}^{H_{M}\times W_{M}\times C_{M}^{Out}}\) using a \(1\times 1\) convolutional layer, following the sequence preparation procedure described in [38]. To maintain the spatial relationships between the cells of the activation map, positional encoding is applied. The processed sequence is then augmented with a learned orientation/position token \(T_{m}\in\mathbb{R}^{C_{M}^{Out}}\), creating a concatenated sequence that serves as the input to the task-specific Transformer-Encoder [10]: \[E_{m}^{In}=[T_{m},\tilde{M}]\in\mathbb{R}^{(H_{M}W_{M}+1)\times C_{M}^{Out}}. \tag{1}\] The Transformer-Encoder consists of \(N\) blocks, all of which are identical in design. Each block is composed of a multi-head attention (MHA) mechanism for self-attention and a two-layer MLP with Swish activation functions [26], as specified in [7]. The input is subjected to Layer Normalization (LN) prior to processing by the MHA and MLP modules, as per [38]. The outputs of the MHA and MLP are then combined with the input via residual connections and dropout, following the approach outlined in [32]. As described in [11], position encoding is added to the input before each layer, and the final output undergoes additional LN normalization. The output at the position of the special token, \(E_{m_{0}}^{Out}\in\mathbb{R}^{C_{M}^{Out}}\), represents a context-aware and global summarization of the local features in the input activation map. Finally, based on the MLP weights output by the hypernetwork, the estimated camera position and orientation are regressed using the global summarized latent vector \(E_{m_{0}}^{Out}\). #### 3.1.2 Hypernetwork The hypernetwork, similar in design to the main network, consists of two distinct branches, each responsible for the regression of the weights of the position and orientation regression heads in the main network. Through the application of an identical preprocessing flow, a sequence is generated for the input of the Transformer-Encoders in the hypernetwork, derived from the encoded activation maps and learned tokens \(T_{h}\in\mathbb{R}^{C_{M}^{Out}}\) [10]: \[E_{h}^{In}=[T_{h},\tilde{M}]\in\mathbb{R}^{(H_{M}W_{M}+1)\times C_{M}^{Out}}. \tag{2}\] The global summarized latent vector from the encoders, \(E_{h_{0}}^{Out}\in\mathbb{R}^{C_{M}^{Out}}\), is processed by multiple multilayer perceptrons (MLPs) followed by Swish activation functions, each of which is tasked with estimating the weights of a single corresponding layer in the regression heads of the main network: \[\theta^{Out_{i}}=H(\theta_{h},E_{h_{0}}^{Out})\in\mathbb{R}^{C_{M}^{Out}\times C_{H}^{Out_{i}}}, \tag{3}\] where \(i\) is the index of the regression head layer.
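The following PyTorch sketch illustrates one such branch under simplifying assumptions (a single generated head layer, illustrative dimensions, and a head bias passed explicitly); it is not the authors' implementation. It prepares the token sequence of Eq. (1), summarizes it with a Transformer-Encoder, and applies hypernetwork-predicted head weights in the spirit of Eq. (3).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenizedBranch(nn.Module):
    """One main-network branch: 1x1-conv projection, positional encoding,
    learned task token (Eq. 1), Transformer-Encoder summary, and a
    regression head whose weights come from the hypernetwork (Eq. 3)."""

    def __init__(self, c_in: int, c_out: int = 256):
        super().__init__()
        self.proj = nn.Conv2d(c_in, c_out, kernel_size=1)    # 1x1 projection
        self.token = nn.Parameter(torch.zeros(1, 1, c_out))  # learned token T_m
        layer = nn.TransformerEncoderLayer(
            d_model=c_out, nhead=4, dim_feedforward=c_out, dropout=0.1,
            activation=F.silu,  # Swish/SiLU activation
            batch_first=True, norm_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)

    def forward(self, m, pos_enc, head_w, head_b):
        # m: (B, c_in, H, W); pos_enc: (H*W, c_out);
        # head_w: (B, out_dim, c_out) and head_b: (B, out_dim) are produced
        # by the hypernetwork for this specific input image.
        b = m.shape[0]
        seq = self.proj(m).flatten(2).transpose(1, 2) + pos_enc  # (B, H*W, c_out)
        seq = torch.cat([self.token.expand(b, -1, -1), seq], 1)  # (B, H*W + 1, c_out)
        z = self.encoder(seq)[:, 0]                              # E_{m_0}^{Out}
        # Per-sample regression with the generated head parameters.
        return torch.bmm(head_w, z.unsqueeze(-1)).squeeze(-1) + head_b

branch = TokenizedBranch(c_in=112)
xyz = branch(torch.randn(2, 112, 14, 14), torch.randn(14 * 14, 256),
             head_w=torch.randn(2, 3, 256), head_b=torch.randn(2, 3))  # (2, 3)
```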
### Optimization Criteria The localization error is calculated by comparing the translation position and orientation of the ground truth pose \(\mathbf{p}_{gt}=<\mathbf{x}_{gt},\mathbf{q}_{gt}>\), where \(\mathbf{x}\in\mathbb{R}^{3}\) represents the position of the camera in the world and \(\mathbf{q}\in\mathbb{R}^{4}\) denotes its orientation encoded as a quaternion, with the estimated pose \(\mathbf{p}_{est}=<\mathbf{x}_{est},\mathbf{q}_{est}>\). The translation error, represented by \(L_{x}\), is commonly expressed in meters and is calculated as the Euclidean distance between the ground truth and the estimated camera positions. The orientation error, \(L_{q}\), is defined as the Euclidean distance between the ground truth and the estimated unit-norm quaternions [30]: \[L_{x}=||\mathbf{x}_{gt}-\mathbf{x}||_{2}, \tag{4}\] \[L_{q}=\left\|\mathbf{q}_{gt}-\frac{\mathbf{q}}{||\mathbf{q}||}\right\|_{2}, \tag{5}\] where \(\mathbf{q}\) is normalized to unit norm to ensure that it is a valid orientation encoding. As a multi-task regression optimization problem, we adopt the method outlined in [16] by using different weighting schemes, based on the uncertainty of each task, to balance the individual loss functions defined for the separate objectives. The model optimizes the aggregated loss function, which is a combination of the weighted individual losses: \[L_{p}=L_{x}\exp(-s_{x})+s_{x}+L_{q}\exp(-s_{q})+s_{q}, \tag{6}\] where \(s_{x}\) and \(s_{q}\) are learned parameters. As a further optimization, we adopt the approach detailed in [31] and follow a three-step training process. Firstly, the entire network is trained using the aggregated loss described in Eq. 6. In the subsequent two steps, each MLP head is fine-tuned independently using its specific loss.
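A minimal sketch of this criterion follows, assuming per-batch averaging; the initial values of \(s_{x}\) and \(s_{q}\) are placeholders, since the paper sets them per dataset.

```python
import torch
import torch.nn as nn

class PoseLoss(nn.Module):
    """Learned-weighting pose loss of Eqs. (4)-(6): Euclidean position
    error, quaternion error after unit normalization, and the
    uncertainty-based combination with learned s_x, s_q [16]."""

    def __init__(self, s_x_init: float = 0.0, s_q_init: float = 0.0):
        super().__init__()
        self.s_x = nn.Parameter(torch.tensor(s_x_init))
        self.s_q = nn.Parameter(torch.tensor(s_q_init))

    def forward(self, x_est, q_est, x_gt, q_gt):
        l_x = torch.norm(x_gt - x_est, dim=-1)                    # Eq. (4)
        q_unit = q_est / torch.norm(q_est, dim=-1, keepdim=True)  # valid rotation
        l_q = torch.norm(q_gt - q_unit, dim=-1)                   # Eq. (5)
        # Eq. (6): L_p = L_x exp(-s_x) + s_x + L_q exp(-s_q) + s_q
        return (l_x * torch.exp(-self.s_x) + self.s_x
                + l_q * torch.exp(-self.s_q) + self.s_q).mean()

loss = PoseLoss()(torch.randn(8, 3), torch.randn(8, 4),
                  torch.randn(8, 3), torch.randn(8, 4))
```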
### Implementation Details In our shared convolutional backbone architecture, we employed an EfficientNet-B0 [36]. This model outperforms the accuracy of most existing convolutional neural networks and boasts improved efficiency. It consists of multiple levels of reduction, each representing activation maps with a reduced spatial resolution and an increased network depth. We used the activation maps of two endpoints (reduction levels) as input for our position and orientation branches: \(m_{rdc4}\in\mathbb{R}^{14\times 14\times 112}\) and \(m_{rdc3}\in\mathbb{R}^{28\times 28\times 40}\), respectively. According to the method described in [32], we applied linear projections on each activation map to a common depth dimension of \(C_{M}^{Out}=256\) and learned the positional encoding of the same depth. Our implementation of the EfficientNet model is based on an open-source version [20]. In both the main network and the hypernetwork, we incorporated six-layer Transformer-Encoders in each branch (position and orientation). Each block consists of an MHA layer with four heads, followed by an MLP that preserves the input dimension. A dropout rate of \(p=0.1\) was applied to each MLP layer. The output dimension of the Transformer-Encoders in the main network was set to \(C_{M}^{Out}=256\), while in the hypernetwork it was increased to \(C_{H}^{Out}=512\). The position and orientation branches of the hypernetwork generate the weights of three regression layers: input, hidden, and output. The number of weights in the position and orientation branches differs. The hypernetwork's position branch produces the weights for an MLP with dimensions 512, 256, and 3, while the orientation branch generates weights for an MLP with dimensions 512, 512, and 4. The weights generated by the hypernetwork are then incorporated into the regression heads of the main network to estimate the position and orientation of the input query image. The input and hidden layers of the regression head utilize the Swish activation function. ## 4 Experimental Results ### Datasets The experimental evaluation was conducted using the 7 Scenes [13] and Cambridge Landmarks [17] APR evaluation benchmark datasets. The Cambridge Landmarks dataset represents an urban outdoor environment with a spatial extent ranging from approximately \(900\) to \(5500\) square meters. We present results for four of its six scenes, as the remaining two scenes have limited coverage in the existing literature or are reported to have possible faults. The 7 Scenes dataset consists of seven small-scale indoor scenes with a spatial extent ranging from approximately \(1\) to \(10\) square meters. These datasets present various challenges commonly encountered in visual localization applications, including differences in scale, indoor/outdoor environments, repetitive elements, textureless features, significant viewpoint changes, and differences in trajectory between training and test sets. ### Training Details The training of the network is executed in three phases. During the first phase, all network components are trained simultaneously. In the second phase, only the translation-related layers, such as the hypernetwork and regression layers, are fine-tuned while other parts of the network are kept frozen. In the final phase, the orientation-related components are fine-tuned to achieve improved performance without compromising the localization objectives. The Adam optimization algorithm is used to minimize the loss function with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\epsilon=10^{-10}\) as the hyperparameters. The initial values of the loss parameters are established based on the characteristics of the dataset utilized. The batch size is set to \(8\), and the initial learning rate is \(\lambda=10^{-4}\). The learning rate decreases by \(25\%\) every 100 epochs for indoor localization and every 200 epochs for outdoor localization, with a maximum of \(300\) and \(1000\) epochs, respectively. Additionally, a weight decay of \(10^{-4}\) and a dropout of \(p=0.1\) are applied to the encoders during training. Our model is subjected to augmentation procedures to enhance its generalization, in line with the approach used in [17]. During training, the image is resized so that its smaller edge is \(256\) pixels and a random \(224\times 224\) crop is taken. Random adjustments are made to the brightness, contrast, and saturation of the image. During testing, the image is rescaled with the smaller edge resized to \(256\) pixels, and a center crop is taken without any further augmentation. The models were trained on multiple NVIDIA 2080Ti GPUs using the PyTorch framework.
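A hedged sketch of the described training setup using torchvision follows; the color-jitter magnitudes and the placeholder model are assumptions, since the text does not specify them.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Augmentation as described above; jitter strengths are illustrative.
train_tf = transforms.Compose([
    transforms.Resize(256),      # smaller edge resized to 256 px
    transforms.RandomCrop(224),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.ToTensor(),
])
test_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

model = nn.Linear(8, 7)  # placeholder for the HyperPose network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), eps=1e-10, weight_decay=1e-4)
# Decrease the learning rate by 25% every 100 epochs (indoor); use
# step_size=200 for outdoor scenes.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.75)
```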
### Results The primary objective of the proposed scheme is to improve the accuracy of absolute pose regression. To evaluate its performance, Tables 1 and 2 compare the median position error, measured in meters, and the orientation error, measured in degrees, of the proposed method with recent state-of-the-art localization algorithms on the Cambridge Landmarks and 7 Scenes datasets, respectively. For both indoor and outdoor datasets, in addition to the results of leading APRs, we report the median position and orientation errors of structure-based, sequence-based, IR, and RPR methods. Although DSAC [4] provides state-of-the-art accuracy, our proposed method outperforms all other APR methods. HyperPose achieves the lowest position estimation error in all scenes in both the Cambridge outdoor dataset and the 7 Scenes indoor dataset, with the most accurate overall averages of \(0.87m\) and \(0.17m\), respectively. Furthermore, HyperPose demonstrates the best orientation estimation error on both the Hospital and Shop Facade scenes in the Cambridge dataset. ### Hypernetwork Output Weights Given an input query image \(I_{q}\), the hypernetwork generates weights for the position and orientation regression heads, enabling adaptation of the network's output based on the visual features present in the image. The scene depicted in the image is likely to exhibit a range of attributes, such as varying illumination conditions, different viewpoints, dynamic objects, and changing backgrounds. As a result, it is expected that the generated weights will be adapted to accommodate these fluctuations, so that the images from the same scene (location) will be jointly clustered despite the appearance variations, as shown in Fig. 3. Figure 3: Clusters of the Shop Facade test set images, which are part of the Cambridge Landmarks dataset. The clusters are computed using K-Means with \(K=4\). (a) Clustering the corresponding camera (X, Y) positions. (b) Clustering the 2D t-SNE projections of the regression weights computed by the hypernetwork. Figure 3(a) depicts the clustering of the Shop Facade scene test set from the Cambridge Landmarks dataset. The spatial (X, Y) coordinates were clustered using K-Means with \(K=4\). The images are colored according to their cluster index. As expected, the clusters were formed based on the proximity and location of the images within the scene's coordinates. Figure 3(b) shows the K-Means clustering of the 2D t-SNE projections of the regression weights generated by the hypernetwork; the images are grouped based on their visual attributes and viewing angles, which demonstrates that the hypernetwork adapts its output to the input image. This differs from other state-of-the-art APR solutions, where the regression head weights are static and learned during the training phase, thus restricting their ability to adapt to the variations in input images. ### Ablation Study In order to gauge the performance of the chosen architecture, we performed an ablation study. The comparison was based on the model's performance on the Shop Facade scene in the Cambridge Landmarks dataset, measured at the end of the initial training phase. For each ablation, we changed and studied a single scheme parameter. **Residual hypernetwork output.** Table 3 presents the results obtained by regressing both the absolute pose and the residual additive branch, to support the reasoning behind the configuration of the hypernetworks to output the residual position and orientation. The residual-based architecture outperforms the direct regression approach, as evidenced by the lower errors in both position and orientation estimation achieved by the former. **Hypernetwork embedding dimension.** We assess the \begin{table} \begin{tabular}{l l l l l l l} \hline \hline & & **College** & **Hospital** & **Shop Facade** & **St.
Mary** & **Avg.** \\ \hline \multirow{4}{*}{**Synthetic**} & DSAC [4] & 0.18,0.3\({}^{\circ}\) & 0.21,0.4\({}^{\circ}\) & 0.05,0.3\({}^{\circ}\) & 0.15,0.5\({}^{\circ}\) & 0.15,0.4\({}^{\circ}\) \\ \hline \multirow{4}{*}{**Synthetic**} & MapNet [5] & 1.08,1.9\({}^{\circ}\) & 1.94,3.9\({}^{\circ}\) & 1.49,4.2\({}^{\circ}\) & 2.00,4.5\({}^{\circ}\) & 1.63,3.6\({}^{\circ}\) \\ & GL-Net [46] & 0.59,0.7\({}^{\circ}\) & 1.88,2.8\({}^{\circ}\) & 0.50,2.9\({}^{\circ}\) & 1.90,3.3\({}^{\circ}\) & 1.22,2.4\({}^{\circ}\) \\ \hline \multirow{4}{*}{**Synthetic**} & VLAD [37] & 2.80,5.7\({}^{\circ}\) & 4.01,7.1\({}^{\circ}\) & 1.11,7.6\({}^{\circ}\) & 2.31,8.0\({}^{\circ}\) & 2.56,7.1\({}^{\circ}\) \\ & VLAD+Inter [29] & 1.48,4.5\({}^{\circ}\) & 2.68,4.6\({}^{\circ}\) & 0.90,4.3\({}^{\circ}\) & 1.62,6.1\({}^{\circ}\) & 1.67,4.9\({}^{\circ}\) \\ \hline \multirow{4}{*}{**Synthetic**} & EssNet [47] & 0.76,1.9\({}^{\circ}\) & 1.39,2.8\({}^{\circ}\) & 0.84,4.3\({}^{\circ}\) & 1.32,4.7\({}^{\circ}\) & 1.08,3.4\({}^{\circ}\) \\ & NC-EssNet [47] & 0.61,1.6\({}^{\circ}\) & 0.95,2.7\({}^{\circ}\) & 0.7,3.4\({}^{\circ}\) & 1.12,3.6\({}^{\circ}\) & 0.85,2.8\({}^{\circ}\) \\ & RelocGNN[4] & 0.48,1.0\({}^{\circ}\) & 1.14,2.5\({}^{\circ}\) & 0.48,2.5\({}^{\circ}\) & 1.52,3.2\({}^{\circ}\) & 0.91,2.3\({}^{\circ}\) \\ \hline \multirow{4}{*}{**Synthetic**} & PoseNet [17] & 1.92,5.40\({}^{\circ}\) & 2.31,5.38\({}^{\circ}\) & 1.46,8.08\({}^{\circ}\) & 2.65,8.48\({}^{\circ}\) & 2.08,6.83\({}^{\circ}\) \\ & BayesianPN [15] & 1.74,4.06\({}^{\circ}\) & 2.57,5.14\({}^{\circ}\) & 1.25,7.54\({}^{\circ}\) & 2.11,8.38\({}^{\circ}\) & 1.91,6.28\({}^{\circ}\) \\ & LSTM-PN [40] & 0.99,3.65\({}^{\circ}\) & 1.51,4.29\({}^{\circ}\) & 1.18,7.44\({}^{\circ}\) & 1.52,6.68\({}^{\circ}\) & 1.30,5.57\({}^{\circ}\) \\ & SVS-Pose [22] & 1.06,2.81\({}^{\circ}\) & 1.50,4.03\({}^{\circ}\) & 0.63,5.73\({}^{\circ}\) & 2.11,8.11\({}^{\circ}\) & 1.32,5.17\({}^{\circ}\) \\ & GPoseNet [6] & 1.61,2.29\({}^{\circ}\) & 2.62,3.89\({}^{\circ}\) & 1.14,5.73\({}^{\circ}\) & 2.93,6.46\({}^{\circ}\) & 2.07,4.59\({}^{\circ}\) \\ & PoseNetLearn [16] & 0.99,1.06\({}^{\circ}\) & 2.17,2.94\({}^{\circ}\) & 1.05,3.97\({}^{\circ}\) & 1.49,3.43\({}^{\circ}\) & 1.42,2.85\({}^{\circ}\) \\ & GeoPoseNet [16] & 0.88,1.04\({}^{\circ}\) & 3.20,3.29\({}^{\circ}\) & 0.88,3.78\({}^{\circ}\) & 1.57,3.32\({}^{\circ}\) & 1.63,2.86\({}^{\circ}\) \\ & MapNet [5] & 1.07,1.89\({}^{\circ}\) & 1.94,3.91\({}^{\circ}\) & 1.49,4.22\({}^{\circ}\) & 2.00,4.53\({}^{\circ}\) & 1.62,3.64\({}^{\circ}\) \\ & IRPNet [31] & 1.18,2.19\({}^{\circ}\) & 1.87,3.38\({}^{\circ}\) & 0.72,3.47\({}^{\circ}\) & 1.87,4.94\({}^{\circ}\) & 1.41,3.50\({}^{\circ}\) \\ & MS-Trans-1S[32] & 0.72,2.55\({}^{\circ}\) & 2.07,3.24\({}^{\circ}\) & 0.68,3.66\({}^{\circ}\) & 1.10,5.26\({}^{\circ}\) & 1.14,3.68\({}^{\circ}\) \\ & **HyperPose (Ours)** & 0.56,2.40\({}^{\circ}\) & 1.41,2.91\({}^{\circ}\) & 0.54,3.37\({}^{\circ}\) & 0.98,4.86\({}^{\circ}\) & 0.87,3.43\({}^{\circ}\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison to state-of-the-art methods: Cambridge Landmarks. We report the median position/orientation error in meters/degrees for each method. The first and second most effective APRs are distinguished using Red and Blue colors, respectively.** effect of altering the size of the fully-connected layers in both the position and orientation branches of the regression heads. The results of our experiment are shown in Table 4, where we present the results obtained by varying the dimensions of the layers. 
**Hypernetwork architecture.** We evaluated using an alternative conventional MLP-based hypernetwork design. This design computes the regression layer weights using an MLP, whose input is the terminal latent vector of the EfficientNet-B0 backbone. The proposed attention-hypernetwork architecture exhibits superior performance in estimating both positional and orientational errors. A plausible explanation is that spatial information is already condensed into a vector by the output of the backbone network, which constrains the MLP-based hypernetwork branches from addressing finer details in the activation \begin{table} \begin{tabular}{l l c c c c c c c} \hline \hline & & **Chess** & **Fire** & **Heads** & **Office** & **Pumpkin** & **Kitchen** & **Stairs** \\ \hline \multirow{5}{*}{\begin{tabular}{} \end{tabular} } & DSAC\({}^{*}\)[4] & 0.02,1.1\({}^{\circ}\) & 0.02,1.0\({}^{\circ}\) & 0.01,1.8\({}^{\circ}\) & 0.03,1.2\({}^{\circ}\) & 0.04,1.4\({}^{\circ}\) & 0.03,1.7\({}^{\circ}\) & 0.04,1.4\({}^{\circ}\) \\ \cline{2-9} & LsG [45] & 0.09,3.3\({}^{\circ}\) & 0.26,10.9\({}^{\circ}\) & 0.17,12.7\({}^{\circ}\) & 0.18,5.5\({}^{\circ}\) & 0.20,3.7\({}^{\circ}\) & 0.23,4.9\({}^{\circ}\) & 0.23,11.3\({}^{\circ}\) \\ & MapNet[5] & 0.08,3.3\({}^{\circ}\) & 0.27,11.7\({}^{\circ}\) & 0.18,13.3\({}^{\circ}\) & 0.17,5.2\({}^{\circ}\) & 0.22,4.0\({}^{\circ}\) & 0.23,4.9\({}^{\circ}\) & 0.30,12.1\({}^{\circ}\) \\ & GL-Net[46] & 0.08,2.8\({}^{\circ}\) & 0.26,8.9\({}^{\circ}\) & 0.17,11.4\({}^{\circ}\) & 0.18,13.3\({}^{\circ}\) & 0.15,2.8\({}^{\circ}\) & 0.25,4.5\({}^{\circ}\) & 0.23,8.8\({}^{\circ}\) \\ \hline \multirow{5}{*}{\begin{tabular}{} \end{tabular} } & VLAD [37] & 0.21,12.5\({}^{\circ}\) & 0.33,13.8\({}^{\circ}\) & 0.15,14.9\({}^{\circ}\) & 0.28,11.2\({}^{\circ}\) & 0.31,11.2\({}^{\circ}\) & 0.30,11.3\({}^{\circ}\) & 0.25,12.3\({}^{\circ}\) \\ & VLAD+Inter[29] & 0.18,10.0\({}^{\circ}\) & 0.33,12.4\({}^{\circ}\) & 0.14,14.3\({}^{\circ}\) & 0.25,10.1\({}^{\circ}\) & 0.26,9.4\({}^{\circ}\) & 0.27,11.1\({}^{\circ}\) & 0.24,14.7\({}^{\circ}\) \\ \hline \multirow{5}{*}{\begin{tabular}{} \end{tabular} } & NN-Net [18] & 0.13,6.5\({}^{\circ}\) & 0.26,12.7\({}^{\circ}\) & 0.14,12.3\({}^{\circ}\) & 0.21,7.4\({}^{\circ}\) & 0.24,6.4\({}^{\circ}\) & 0.24,8.0\({}^{\circ}\) & 0.27,11.8\({}^{\circ}\) \\ & RelocNet[1] & 0.12,4.1\({}^{\circ}\) & 0.26,10.4\({}^{\circ}\) & 0.14,10.5\({}^{\circ}\) & 0.18,5.3\({}^{\circ}\) & 0.26,4.2\({}^{\circ}\) & 0.23,5.1\({}^{\circ}\) & 0.28,7.5\({}^{\circ}\) \\ & EssNet [47] & 0.13,5.1\({}^{\circ}\) & 0.27,10.1\({}^{\circ}\) & 0.15,9.9\({}^{\circ}\) & 0.21,6.9\({}^{\circ}\) & 0.22,6.1\({}^{\circ}\) & 0.23,6.9\({}^{\circ}\) & 0.32,11.2\({}^{\circ}\) \\ & NC-EssNet [47] & 0.12,5.6\({}^{\circ}\) & 0.26,9.6\({}^{\circ}\) & 0.14,10.7\({}^{\circ}\) & 0.20,6.7\({}^{\circ}\) & 0.22,5.7\({}^{\circ}\) & 0.22,6.3\({}^{\circ}\) & 0.31,7.9\({}^{\circ}\) \\ & RelocGNN[4] & 0.08,2.7\({}^{\circ}\) & 0.21,7.5\({}^{\circ}\) & 0.13,8.70\({}^{\circ}\) & 0.15,4.1\({}^{\circ}\) & 0.15,3.5\({}^{\circ}\) & 0.19,3.7\({}^{\circ}\) & 0.22,6.5\({}^{\circ}\) \\ \hline \multirow{5}{*}{ \begin{tabular}{} \end{tabular} } & PoseNet [17] & 0.32,8.12\({}^{\circ}\) & 0.47,14.4\({}^{\circ}\) & 0.29,12.0\({}^{\circ}\) & 0.48,7.68\({}^{\circ}\) & 0.47,8.42\({}^{\circ}\) & 0.59,8.64\({}^{\circ}\) & 0.47,13.8\({}^{\circ}\) \\ & BayesianPN [15] & 0.37,7.24\({}^{\circ}\) & 0.43,13.7\({}^{\circ}\) & 0.31,12.0\({}^{\circ}\) & 0.48,8.04\({}^{\circ}\) & 0.61,7.08\({}^{\circ}\) & 0.58,7.54\({}^{\circ}\) & 0.48,13.1\({}^{\circ}\) 
\\ & LSTM-PN [40] & 0.24,5.77\({}^{\circ}\) & 0.34,11.9\({}^{\circ}\) & 0.21,13.7\({}^{\circ}\) & 0.30,8.08\({}^{\circ}\) & 0.33,7.00\({}^{\circ}\) & 0.37,8.83\({}^{\circ}\) & 0.40,13.7\({}^{\circ}\) \\ & GPoseNet [6] & 0.20,7.11\({}^{\circ}\) & 0.38,12.3\({}^{\circ}\) & 0.21,13.8\({}^{\circ}\) & 0.28,8.83\({}^{\circ}\) & 0.37,6.94\({}^{\circ}\) & 0.35,8.15\({}^{\circ}\) & 0.37,12.5\({}^{\circ}\) \\ & PoseNetLearn [16] & 0.14,4.50\({}^{\circ}\) & 0.27,11.8\({}^{\circ}\) & 0.18,12.1\({}^{\circ}\) & 0.20,5.77\({}^{\circ}\) & 0.25,4.82\({}^{\circ}\) & 0.24,5.52\({}^{\circ}\) & 0.37,10.6\({}^{\circ}\) \\ & GeoPoseNet [16] & 0.13,4.48\({}^{\circ}\) & 0.27,11.3\({}^{\circ}\) & 0.17,13.0\({}^{\circ}\) & 0.19,5.55\({}^{\circ}\) & 0.26,4.75\({}^{\circ}\) & 0.23,5.35\({}^{\circ}\) & 0.35,12.4\({}^{\circ}\) \\ & MapNet [5] & 0.08,3.25\({}^{\circ}\) & 0.27,11.7\({}^{\circ}\) & 0.18,13.3\({}^{\circ}\) & 0.17,5.15\({}^{\circ}\) & 0.22,4.02\({}^{\circ}\) & 0.23,4.93\({}^{\circ}\) & 0.30,12.1\({}^{\circ}\) \\ & IRPNet [31] & 0.13,5.64\({}^{\circ}\) & 0.25,9.67\({}^{\circ}\) & 0.15,13.1\({}^{\circ}\) & 0.24,6.33\({}^{\circ}\) & 0.22,5.78\({}^{\circ}\) & 0.30,7.29\({}^{\circ}\) & 0. \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparison to state-of-the-art methods: 7 Scenes.** We report the median position/orientation error in meters/degrees for each method. maps. **Hypernetwork output layers.** By design, the output of hypernetworks determines the number of regression layers in the regression heads of the main network. Table 6 details the position estimation error vs. the number of layers. **Rotation representations.** The authors of [48] contend that 3D and 4D rotation representations are suboptimal for network regression, whereas continuous representations in 5D and 6D are more appropriate for learning. As shown in Table 7, employing a 4D-based representation in conjunction with the rotational loss specified in Eq. 5 led to the lowest orientation estimation error. In both the Quaternions and 4D-Norm approaches, the rotation regressor generates four values that estimate the camera's orientation (quaternion), but the latter computes the orientation loss based on rotation matrices instead. ## 5 Conclusions In this study, we present a novel attention-based architecture for absolute pose regression that uses a hypernetwork to estimate the regression head weights. This allows us to improve localization accuracy by adapting the regression weights to the input image embedding. To the best of our knowledge, this is the first use of hypernetworks in localization in general, and the first use of Transformer-Encoders as hypernetworks in particular. Our approach is shown to compare favorably with contemporary APR schemes.
2305.11284
Federated learning for secure development of AI models for Parkinson's disease detection using speech from different languages
Parkinson's disease (PD) is a neurological disorder impacting a person's speech. Among automatic PD assessment methods, deep learning models have gained particular interest. Recently, the community has explored cross-pathology and cross-language models which can improve diagnostic accuracy even further. However, strict patient data privacy regulations largely prevent institutions from sharing patient speech data with each other. In this paper, we employ federated learning (FL) for PD detection using speech signals from 3 real-world language corpora of German, Spanish, and Czech, each from a separate institution. Our results indicate that the FL model outperforms all the local models in terms of diagnostic accuracy, while not performing very differently from the model based on centrally combined training sets, with the advantage of not requiring any data sharing among collaborators. This will simplify inter-institutional collaborations, resulting in enhancement of patient outcomes.
Soroosh Tayebi Arasteh, Cristian David Rios-Urrego, Elmar Noeth, Andreas Maier, Seung Hee Yang, Jan Rusz, Juan Rafael Orozco-Arroyave
2023-05-18T20:04:55Z
http://arxiv.org/abs/2305.11284v2
# Federated learning for secure development of AI models for Parkinson's disease detection using speech from different languages ###### Abstract Parkinson's disease (PD) is a neurological disorder impacting a person's speech. Among automatic PD assessment methods, deep learning models have gained particular interest. Recently, the community has explored cross-pathology and cross-language models which can improve diagnostic accuracy even further. However, strict patient data privacy regulations largely prevent institutions from sharing patient speech data with each other. In this paper, we employ federated learning (FL) for PD detection using speech signals from 3 real-world language corpora of German, Spanish, and Czech, each from a separate institution. Our results indicate that the FL model outperforms all the local models in terms of diagnostic accuracy, while not performing very differently from the model based on centrally combined training sets, with the advantage of not requiring any data sharing among collaborators. This will simplify inter-institutional collaborations, resulting in enhancement of patient outcomes. Soroosh Tayebi Arasteh\({}^{*,1,2,3}\), Cristian David Rios-Urrego\({}^{*,4}\), Elmar Noeth\({}^{1}\), Andreas Maier\({}^{1}\), Seung Hee Yang\({}^{2}\), Jan Rusz\({}^{5}\), Juan Rafael Orozco-Arroyave\({}^{1,4}\)+\({}^{1}\)Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany \({}^{2}\)Speech & Language Processing Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany \({}^{3}\)Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany \({}^{4}\)GITA Lab, Faculty of Engineering, University of Antioquia, Medellín, Colombia \({}^{5}\)Department of Circuit Theory, Czech Technical University in Prague, Prague, Czech Republic [email protected], [email protected] Footnote †: STA and CDRU contributed equally to this work. Accepted for INTERSPEECH 2023, Dublin, Ireland. **Index Terms**: federated learning, speech pathology, Parkinson's disease, deep learning, trustworthy speech processing ## 1 Introduction Parkinson's disease (PD) is a neurodegenerative disorder that affects the nervous system, leading to the progressive deterioration of motor and non-motor functions, which significantly decreases the patients' quality of life [1]. PD is characterized by resting tremor, rigidity, bradykinesia, postural instability, and other symptoms [2]. Most PD patients develop speech deficits, collectively termed hypokinetic dysarthria, in which speech is characterized by reduced loudness, monotonous pitch, and changes in voice quality [3, 4]. Speech signals can be analyzed objectively to quantify the severity of the disease and track its progression over time, which can be useful in clinical research and treatment monitoring. One of the main motivations for considering speech signals is that they can be easily collected and analyzed remotely, which can provide greater convenience to patients and reduce the need for frequent clinical visits [5]. This can be especially beneficial for patients who live in remote areas or have limited mobility. In addition, speech signals can provide a complementary source of information to clinical assessment and other diagnostic tests, which can improve the accuracy and reliability of PD diagnosis and treatment [6].
Recently, deep learning (DL)-based methods have particularly gained a lot of attention for analyzing PD speech signals [7, 8]. However, a major impediment to developing such robust DL models is the need for access to large amounts of training data, which is challenging for many institutions. Thus, benefiting from data from different external institutions could solve this issue. However, strict patient data privacy regulations in the medical context make this infeasible in most cases in real-world practice [9, 10, 11, 12]. Therefore, privacy-preserving collaborative training methods, in which participating institutions do not share data with each other, are favorable. Federated learning (FL) [13, 14, 15], as a key solution to this issue, has been increasingly investigated by researchers and practitioners and has received a lot of attention in the medical image analysis domain [11, 16, 17, 18], as it does not require sharing any training data between participating institutions in the joint training process. To the best of our knowledge, collaborative training methods based on FL have not yet been addressed in the literature on pathological speech signals, despite similar privacy regulations and restrictions applying as in the imaging domain [9], especially considering recent literature revealing the vulnerability of pathological speech signals in terms of patient data [19, 20, 21]. In this paper, for the first time, we investigate the applicability of FL in the privacy-preserving development of DL methods for PD detection using speech signals from three real-world language corpora, each from a separate and independent institution. We hypothesize that utilizing FL will substantially increase the diagnostic performance of networks for each local database while preserving patient privacy by avoiding data sharing between the institutions. Moreover, we assume that the FL model will perform relatively similarly, with only slight degradation, compared to the hypothetical and non-privacy-preserving scenario where all the institutions could combine their training sets at a central location. ## 2 Material and Methods ### 2.1 Methodology The methodology addressed in this study consists of the following main stages: data were acquired in different languages (German, Spanish, and Czech); embeddings were then extracted from the speech signals of each participant using a pre-trained Wav2Vec 2.0 model [7, 8]; the extracted embeddings were utilized for the secure FL training of a classification architecture; and finally, a copy of the global model was sent back to each participating site for the classification of PD patients from healthy control (HC) subjects. This methodology is summarized in Fig. 1. Details of each stage are presented below. #### 2.1.1 Data We considered speech corpora in three different languages: Spanish, German, and Czech; each database contains PD patients and HC subjects. The first corpus is PC-GITA, which includes recordings of 50 PD patients and 50 HC subjects [22]. All participants were Colombian native speakers. The second corpus contained a total of 176 German native speakers (88 PD patients and 88 HC subjects) [23]. The last database contained recordings of 100 Czech native speakers divided into 50 PD patients and 50 HC subjects [24]. Specialized neurologists evaluated each patient according to the Movement Disorder Society - Unified Parkinson's Disease Rating Scale (MDS-UPDRS-III) [25].
In addition, all recordings were captured in noise-controlled conditions, and the speech signals were downsampled to 16 kHz to feed the deep-learning model. The rapid repetition of the syllables /pa-ta-ka/ was considered in this study. This task allows the evaluation of specific movements required to produce stop consonants (/p/, /t/, /k/). Table 1 shows the demographic information of each database. #### 2.1.2 Feature Extraction To create a representation for each recording, we used the Wav2Vec 2.0 architecture, a state-of-the-art topology based on transformers proposed in [8]. Wav2Vec 2.0 was trained using a self-supervised pre-training approach that allows the model to learn representations directly from the raw audio signal without additional annotations or labels. The training process involved two main steps. The first was contrastive pre-training, where the model was trained to distinguish between two versions of the same audio signal: a positive sample (a randomly selected segment of the original audio signal) and a negative sample (a randomly selected segment of a different audio signal). The second was fine-tuning on a specific automatic speech recognition (ASR) task. Particularly in this work, we used a Wav2Vec 2.0 model pre-trained on 960 hours of unlabeled audio from the LibriSpeech dataset [26], which was derived from English audiobooks, and fine-tuned for ASR on the same audio with the corresponding transcripts. Since the model yields a dynamic, time-varying sequence of 768-dimensional vectors, we calculated a static vector for each participant from 6 different statistics (mean, standard deviation (std.), skewness, kurtosis, minimum, and maximum), building a speech representation of 4608 dimensions per recording. #### 2.1.3 Federated Learning In order to speed up the collaborative training convergence, the FL process was performed only for the classification network, i.e., after all the embeddings were locally extracted using the Wav2Vec 2.0 model. Of note, all the data pre-processing and feature extraction steps happened locally at every participating institution without sharing any data with other institutions. Each institution performed a local training round of the classification network and transmitted the network parameters, i.e., the weights and biases, to a trusted server, which aggregated all the local parameters, leading to a set of global parameters. In our implementation, we chose each round to be equal to one epoch of training with the full local dataset. Afterward, the server transmitted back a copy of the global network to each institution for another round of local training. The process continued until the convergence of the global network. It is worth mentioning that each institution had access neither to any training data from the others nor to the network parameters of the others, but only to an aggregated network, without any knowledge about the contributions of the other participating institutions to the global network. Once the training of the global classification network had converged, every institution could take a copy of the global network and locally utilize it for diagnosing its test data. Figure 1: _General methodology: each institution pre-processes its local data, extracts the features using a Wav2Vec 2.0 model, performs one epoch of the classifier network training locally, and transmits its local network parameters to a trusted server. The server aggregates all the parameters from all the institutions and transmits back the resulting global model to each institution for the next round of local training. In the end, each institution takes a copy of the final global model and performs its desired classification locally._ #### 2.1.4 Classification and Evaluation The classification network architecture contained 4 fully-connected layers with different sizes: 1024, 256, 64, and 2, respectively. Rectified linear unit (ReLU) activation and batch normalization [27] were considered in each layer, and a Softmax activation function was used at the output.
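To make the pipeline of Secs. 2.1.2-2.1.4 concrete, the following is a minimal Python sketch (our own illustration; function names and details are assumptions, not code from the paper) of the statistics pooling, the classifier architecture, and one FedAvg aggregation round.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import kurtosis, skew

def static_features(wav2vec_seq: np.ndarray) -> np.ndarray:
    """Pool a (T, 768) Wav2Vec 2.0 embedding sequence into the static
    6-statistic vector described above (6 x 768 = 4608 dimensions)."""
    return np.concatenate([
        wav2vec_seq.mean(axis=0), wav2vec_seq.std(axis=0),
        skew(wav2vec_seq, axis=0), kurtosis(wav2vec_seq, axis=0),
        wav2vec_seq.min(axis=0), wav2vec_seq.max(axis=0),
    ])

def make_classifier() -> nn.Module:
    # Four fully-connected layers (1024, 256, 64, 2) with ReLU and batch
    # normalization; Softmax is applied at inference (training would use
    # cross-entropy on the logits).
    return nn.Sequential(
        nn.Linear(4608, 1024), nn.BatchNorm1d(1024), nn.ReLU(),
        nn.Linear(1024, 256), nn.BatchNorm1d(256), nn.ReLU(),
        nn.Linear(256, 64), nn.BatchNorm1d(64), nn.ReLU(),
        nn.Linear(64, 2),
    )

@torch.no_grad()
def fedavg_round(global_model: nn.Module, local_models: list) -> None:
    """One FedAvg [15] aggregation round: average the parameters of the
    locally trained models into the global model, then broadcast the
    global parameters back to every institution."""
    state = global_model.state_dict()
    for k in state:
        state[k] = torch.stack(
            [m.state_dict()[k].float() for m in local_models]).mean(dim=0)
    global_model.load_state_dict(state)
    for m in local_models:
        m.load_state_dict(state)
```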
The fully-connected network was trained and evaluated following a stratified 10-fold cross-validation strategy. The process was repeated 5 times for a better generalization of the results. The He initialization scheme [28] was applied to all classification network weights, and all the biases were initialized with zeros. Cross-entropy was chosen as the loss function, and the models were optimized using the Adam optimizer [29] with a learning rate of \(8\times 10^{-5}\) and a weight decay of \(5\times 10^{-6}\). The classification networks were trained for 50 epochs in batches of size 16. Accuracy and area under the receiver-operator-characteristic curve (AUC) were chosen as the main evaluation metrics, while sensitivity and specificity were utilized as supporting metrics. A two-tailed paired t-test was employed for determining statistical significance. The significance threshold was set at p-value \(\leq 0.05\).
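A minimal sketch of this evaluation protocol, using scikit-learn and SciPy (the `train_and_score` callback is hypothetical):

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.model_selection import StratifiedKFold

def repeated_cv_scores(X, y, train_and_score, n_repeats=5, n_splits=10):
    """Stratified 10-fold CV repeated 5 times -> 50 scores per setup.
    `train_and_score` trains a classifier on the train split and returns
    its accuracy on the test split."""
    scores = []
    for rep in range(n_repeats):
        skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=rep)
        for tr, te in skf.split(X, y):
            scores.append(train_and_score(X[tr], y[tr], X[te], y[te]))
    return np.asarray(scores)

# Paired two-tailed t-test between two setups (e.g., FL vs. Local);
# significant if p <= 0.05:
# _, p_value = ttest_rel(scores_fl, scores_local)
```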
\(74.8\pm 9.1\); P-value \(=0.455\)) which contained the largest training set. These results suggest that combining the corpus of the same pathology but in different languages allows generalizing the architecture to classify pathological speech from healthy speech. Furthermore, comparing the non-private "Central" and the secure FL strategies, we can observe that the diagnostic accuracy of the FL method was not significantly different from the "Central" model for Spanish (\(83.2\pm 10.8\%\) vs. \(82.0\pm 11.6\); P-value \(=0.436\)) and Czech (\(76.0\pm 12.2\%\) vs. \(77.8\pm 9.2\); P-value \(=0.334\)) databases while it was for the German database (\(75.8\pm 8.3\%\) vs. \(78.9\pm 8.3\); P-value \(=0.023\)). Moreover, Table 2 shows that the strategy proposed in this work obtained similar results to the state-of-the-art centralized training methods [30], with the advantage of patient privacy preservation by avoiding data exchange between local institutions using an FL strategy. In addition, Fig. 2-B shows a visual comparison between the "Central" and the FL strategies from the receiver-operator-characteristic (ROC) curves and the corresponding AUC values obtained in each experiment. Again, when we compared each institution (language) separately, we can observe that the Central and FL curves have the same trend and show no significant differences. It can also be observed that the Spanish language obtains the best result (AUC of 0.85), followed by German (AUC of 0.79) and Czech language (AUC of 0.78). Finally, Fig. 2-C shows the histogram and the probability density distributions obtained for the classification of German, Spanish, and Czech databases using the FL strategy. It can be observed that all three figures have the highest bins at their extremes, which corresponds to a high probability of the decision taken by the classifier. Moreover, it is possible to observe that in the case of Spanish and Czech, the highest bin is for the HC controls, which is related to the reported specificity (89.2% and 90.8%, respectively); while for Spanish, the highest bin corresponds to PD patients due to a higher sensitivity (90.8%). ## 4 Discussion In this study, we showed the first successful application of cross-language federated learning for PD detection using three patho \begin{table} \begin{tabular}{l c c c c c} \hline **Training set** & **Accuracy** & **AUC** & **Sensitivity** & **Specificity** & **P-value** \\ \hline \hline Local & \(77.0\pm 13.3\) & \(77.9\pm 11.11\) & \(71.2\pm 21.4\) & \(82.8\pm 19.4\) & 0.001 \\ Central & \(82.0\pm 11.6\) & \(80.6\pm 12.0\) & \(78.4\pm 17.5\) & \(85.6\pm 17.6\) & 0.436 \\ FL & \(83.2\pm 10.8\) & \(83.6\pm 11.8\) & \(77.2\pm 17.6\) & \(89.2\pm 14.1\) & - \\ \hline \hline Local & \(74.8\pm 9.1\) & \(73.2\pm 8.6\) & \(76.3\pm 13.8\) & \(73.2\pm 14.2\) & 0.455 \\ Central & \(78.9\pm 8.3\) & \(77.5\pm 8.2\) & \(81.1\pm 13.0\) & \(74.8\pm 17.0\) & 0.023 \\ FL & \(75.8\pm 8.3\) & \(77.1\pm 7.4\) & \(90.8\pm 8.9\) & \(60.8\pm 18.0\) & - \\ \hline \hline Local & \(70.3\pm 14.6\) & \(68.4\pm 14.4\) & \(64.8\pm 26.1\) & \(76.4\pm 23.1\) & 0.020 \\ Central & \(77.8\pm 9.2\) & \(77.6\pm 10.7\) & \(74.0\pm 18.6\) & \(82.0\pm 17.3\) & 0.334 \\ FL & \(76.0\pm 12.2\) & \(78.2\pm 10.7\) & \(62.0\pm 22.6\) & \(90.8\pm 13.5\) & - \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation results for each database. 
## 4 Discussion

In this study, we showed the first successful application of cross-language federated learning for PD detection using three pathological speech corpora, including a total of 188 PD and 188 HC subjects, covering the Spanish, German, and Czech languages. We used a state-of-the-art architecture, namely Wav2Vec 2.0 [7, 8], for obtaining speech representations. We compared the performances in three multicentric setups where the architecture was: i) trained locally and separated by language, i.e., monolingual models, ii) trained utilizing the combination of all the training sets at a central location without privacy measures, i.e., a cross-lingual model, and iii) trained with all the training sets of different databases based on an FL strategy, i.e., without sharing any data and preserving patient privacy. The results indicated that the FL model outperformed all the local (monolingual) models for every test database in terms of diagnostic accuracy, while not requiring any data sharing between institutions. This result is very interesting and encourages the scientific community to further explore techniques for the generalization of models from databases of the same pathology, in different languages, without the need for sharing information between institutions (a cross-lingual model), which has been a major challenge. In addition, comparing the "Central" combination and FL strategies, we observed that in the majority of scenarios the FL method was not significantly different from the Central method in terms of the model's diagnostic accuracy. This shows that the FL paradigm can considerably help institutions around the world collaborate in the creation of DL models that are trained on large amounts of data, are cross-lingual, and preserve patient privacy by avoiding data exchange between local institutions, a major limitation in real-world practice that was not considered in current state-of-the-art cross-lingual approaches. Our study has limitations. The collaborative FL training process was implemented in a proof-of-concept mode, i.e., using a single institutional network. Due to strict data protection regulations, the implementation of FL among different institutions would be challenging. However, we simulated a realistic setup where every database corresponded to a separate computing entity and we kept the data strictly independent from each other. As already mentioned, the parameter aggregation mechanism of the central server utilized in this study was the direct averaging of the individual network parameters of each participating database, i.e., the FedAvg algorithm [15], which is the simplest yet most common aggregation mechanism. Furthermore, the databases utilized in this study were non-independent-and-identically-distributed (non-IID).
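To make the aggregation mechanism just described concrete, here is a minimal FedAvg sketch in PyTorch-style Python; the helper names and the sample-size weighting convention are illustrative assumptions (with equal weights this reduces to plain parameter averaging):

```python
from collections import OrderedDict

def fedavg(state_dicts, n_samples):
    """Average client model parameters, weighted by local sample counts."""
    total = float(sum(n_samples))
    avg = OrderedDict()
    for key in state_dicts[0]:
        avg[key] = sum(
            sd[key].float() * (n / total)
            for sd, n in zip(state_dicts, n_samples)
        )
    return avg

# One communication round: every institution trains one local epoch starting
# from the current global parameters, then the server aggregates and broadcasts.
# global_sd = fedavg([client.train_one_epoch(global_sd) for client in clients],
#                    [client.n_train for client in clients])
```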
Such non-IID data has been shown to degrade the performance of the global model in many different FL applications [31]. Consequently, future work could consider more advanced, task-specific aggregation methods such as [32, 33, 34], which account for the individual contribution of each participating site by analyzing its gradient updates in each FL training round before aggregation and could potentially increase the performance of the global model. In addition, in this study we considered the most common task for PD detection, i.e., utilizing speech data containing the rapid repetition of the syllables /pa-ta-ka/, to demonstrate the applicability of FL in pathological speech analysis. In the future, we will extend this by considering further tasks and cross-pathology scenarios. As a side note, we can conclude that the characterization performed by the Wav2Vec 2.0 method is suitable for modeling different impairments for PD detection. This could be further investigated in the future with other controlled experiments, such as at the level of phonemes, words, and phrases, that could help interpret the features obtained by this model.

## 5 Conclusions

This paper shows that an FL model yields similar or even better results compared to local approaches where monolingual models are created for every test database. FL offers the advantage of not requiring any data sharing between institutions, which we hope will encourage researchers and practitioners to improve scientific collaborations among different institutions around the world. The approach shows that FL allows for obtaining competitive results while preserving data privacy. We expect these results to promote simpler and more frequent collaborations between medical institutions, and subsequently, to further improve patient outcomes.

## 6 Acknowledgments

STA was supported by the RACOON network under BMBF grant number 01KX2021. JROA and CDRU were funded by UdeA grant number ES92210001. JR was supported by the National Institute for Neurological Research (Programme EXCELES, ID Project No. LX22NPD50107) - funded by the European Union - Next Generation EU. The funders played no role in the design or execution of the study.

Figure 2: Evaluation results. (A) Illustrates the final accuracy values for each test database using the 3 setups, where "Local" represents solely using the training set of the target database, while "Central" means utilizing all training sets combined at a central location. (B) Shows the receiver-operator-characteristic curves. (C) Shows the histogram and the probability density distributions obtained for the classification of the German, Spanish, and Czech databases using the FL strategy.
2308.10166
Cell Spatial Analysis in Crohn's Disease: Unveiling Local Cell Arrangement Pattern with Graph-based Signatures
Crohn's disease (CD) is a chronic and relapsing inflammatory condition that affects segments of the gastrointestinal tract. CD activity is determined by histological findings, particularly the density of neutrophils observed on Hematoxylin and Eosin stains (H&E) imaging. However, understanding the broader morphometry and local cell arrangement beyond cell counting and tissue morphology remains challenging. To address this, we characterize six distinct cell types from H&E images and develop a novel approach for the local spatial signature of each cell. Specifically, we create a 10-cell neighborhood matrix, representing neighboring cell arrangements for each individual cell. Utilizing t-SNE for non-linear spatial projection in scatter-plot and Kernel Density Estimation contour-plot formats, our study examines patterns of differences in the cellular environment associated with the odds ratio of spatial patterns between active CD and control groups. This analysis is based on data collected at the two research institutes. The findings reveal heterogeneous nearest-neighbor patterns, signifying distinct tendencies of cell clustering, with a particular focus on the rectum region. These variations underscore the impact of data heterogeneity on cell spatial arrangements in CD patients. Moreover, the spatial distribution disparities between the two research sites highlight the significance of collaborative efforts among healthcare organizations. All research analysis pipeline tools are available at https://github.com/MASILab/cellNN.
Shunxing Bao, Sichen Zhu, Vasantha L Kolachala, Lucas W. Remedios, Yeonjoo Hwang, Yutong Sun, Ruining Deng, Can Cui, Yike Li, Jia Li, Joseph T. Roland, Qi Liu, Ken S. Lau, Subra Kugathasan, Peng Qiu, Keith T. Wilson, Lori A. Coburn, Bennett A. Landman, Yuankai Huo
2023-08-20T05:26:25Z
http://arxiv.org/abs/2308.10166v1
Cell Spatial Analysis in Crohn's Disease: Unveiling Local Cell Arrangement Pattern with Graph-based Signatures

###### Abstract

Crohn's disease (CD) is a chronic and relapsing inflammatory condition that affects segments of the gastrointestinal tract. CD activity is determined by histological findings, particularly the density of neutrophils observed on Hematoxylin and Eosin stains (H&E) imaging. However, understanding the broader morphometry and local cell arrangement beyond cell counting and tissue morphology remains challenging. To address this, we characterize six distinct cell types from H&E images and develop a novel approach for the local spatial signature of each cell. Specifically, we create a 10-cell neighborhood matrix, representing neighboring cell arrangements for each individual cell. Utilizing t-SNE for non-linear spatial projection in scatter-plot and Kernel Density Estimation contour-plot formats, our study examines patterns of differences in the cellular environment associated with the odds ratio of spatial patterns between active CD and control groups. This analysis is based on data collected at the two research institutes. The findings reveal heterogeneous nearest-neighbor patterns, signifying distinct tendencies of cell clustering, with a particular focus on the rectum region. These variations underscore the impact of data heterogeneity on cell spatial arrangements in CD patients. Moreover, the spatial distribution disparities between the two research sites highlight the significance of collaborative efforts among healthcare organizations. All research analysis pipeline tools are available at [https://github.com/MASILab/cellNN](https://github.com/MASILab/cellNN).

Cell spatial analysis, Pattern recognition, Crohn's disease.

## 1 Introduction

Crohn's disease (CD) is a complex inflammatory bowel disease (IBD) affecting the gastrointestinal tract, characterized by persistent and recurring bowel inflammation [1]. The prevalence of IBD has been on the rise, leading to increased medical expenditures. Notably, the medical cost of CD was estimated to be $3.48 billion per year in 2015, and it is projected to reach $3.72 billion per year by 2025, constituting a significant portion of the overall US national costs [2]. The Gut Cell Atlas Crohn's Disease Consortium is an ambitious initiative supported by The Leona M. and Harry B. Helmsley Charitable Trust, aiming to develop comprehensive cellular reference maps for CD. The primary focus of this initiative is to compare tissues from CD patients with those from healthy controls ([https://www.gutcellatlas.helmsleytrust.org/](https://www.gutcellatlas.helmsleytrust.org/)). By mapping different human cell types and analyzing gene and protein expression in relation to anatomical location and CD, this project offers a unique opportunity to advance our understanding of the human gut and its implications in CD. In tumor research, spatial analysis using H&E-stained images has been extensively explored. For instance, Failmezger et al. revealed key tumor microenvironment features through topological tumor graphs in melanoma specimens [3], while Xu et al. assessed tumor mutational burden and immune infiltrates in bladder cancer patients using H&E and IHC imaging [4]. They also developed a method to detect prognostic tumor infiltrating lymphocyte (TIL) density in colorectal carcinoma patients [5]. Saltz et al. generated TIL maps from H&E images, correlating them with survival in diverse tumor types [6].
However, applying spatial analysis to inflammatory bowel disease (IBD) remains under investigation. Determination of CD activity depends on histological findings, particularly the density of neutrophils observed via hematoxylin and eosin (H&E) staining [7]. There are many ways to assess the samples from CD patients. For instance, pathologists can understand cell neighborhood changes by zooming in and out on the biopsies. The invasion of neutrophils is a well-known sign of active inflammation [8]. Pathologist-assigned disease severity scores for CD biopsies are often given at the slide level, though the disease features that resulted in the scoring might not present homogeneously across the slide. As Figure 1 shows, the high density of neutrophils is not present in all areas of the tissue on a slide. Whether we can quantify the relationships between cells, beyond the counts of neutrophils or other cell types, is the primary motivation of this work. In this study, we deviate from conventional morphological feature-based CD activity classification. Instead, our focus is on delving into and defining a graph-based metric aimed at enabling spatial analysis. Our core hypothesis is that distinct patterns might characterize the relationships among various cell types within the context of CD. Utilizing established tools to identify six specific cell types from H&E images, we introduce an innovative approach to capture the local spatial characteristics of each cell. In brief, we create a 10-cell neighborhood matrix that outlines the arrangement of neighboring cells for each individual cell. Through the utilization of t-SNE for non-linear spatial projection in both scatter-plot and Kernel Density Estimation (KDE) contour-plot formats, our research delves into disparities in cellular environments, particularly those linked to the odds ratio of spatial patterns between CD active and control groups. The contribution of this study is three-fold:

* We propose a graph-based signature that represents the local arrangement of each cell.
* We develop a comprehensive visualization workflow to identify the relationship of signature patterns among active CD, normal CD, and healthy control cohorts.
* We present investigations from two research institutes with heterogeneous data acquisition, focusing on the rectum.

Figure 1: We present four patches collected from two sample slides, where one slide is CD normal, and the other slide is diagnosed as CD active. Neutrophils can be found in both types of tissue. Counting the density of neutrophils is one of the pivotal biomarkers used to identify CD activity. Additionally, pathologists have the capability to zoom in and observe the morphology of individual nuclei or clusters. Our objective is to explore and quantify a more comprehensive morphometric pattern within the localized cellular arrangement.

## 2 Method

Our objective is to comprehend alterations in the cellular neighborhood. To accomplish this, it is essential to establish biomarkers capable of detecting the nuanced local histological orientation and interrelationships among co-located cells. Here, we investigate a potential avenue for achieving this goal. Initially, we outline a graph-based spatial characterization for each cell. Subsequently, we introduce a visualization technique designed to handle a substantial cell count. Lastly, the quantification method is elucidated. The comprehensive analysis workflow is depicted in Figure 3.
### Data pre-processing

Prior to generating the spatial signature of each cell, we first segment the whole slide image (WSI) using a pre-trained segmentation deep learning model (HoverNet [9]) from the Colon Nuclei Identification and Counting (CoNIC) Challenge dataset [10], which can identify six cell types: neutrophils, epithelial cells, lymphocytes, plasma cells, eosinophils, and connective tissue. The HoverNet CoNIC pre-trained segmentation model operates only on patches of size 256\(\times\)256 pixels under 20\(\times\) magnification. We therefore employ the CLAM [11] method to remove the background of the WSI and divide the gigapixel image into patches of the required size, segment each patch (at 20\(\times\)) using the pre-trained model, and then merge all the patches back into the original WSI space to collect the global coordinates of the cells.

### Graph-based spatial signature

To build the spatial signature, we find the 11 nearest neighbors of each cell and convert the 11 neighbors into a count matrix for the six cell types. Figure 2 depicts two cell samples of the same type but with different local arrangements. To achieve this, we utilize the KD-tree algorithm [12, 13], which creates a binary tree partitioning the cell coordinate space into smaller regions for fast searching of nearest neighbor points in multi-dimensional spaces. Each cell is assigned 11 indexes, including itself, resulting in a count of 10 neighbors once the cell itself is excluded. To clean up feature noise, we remove edge cases of cells containing fewer than 10 exclusive neighbors (10-NN).

### Visualization

We aggregate the count matrices of 10-NN signatures for all cells in the target dataset, comprising CD active and control groups, into a large matrix, considering that each WSI may contain over ten thousand cells. Subsequently, we utilize a 2-D standard t-SNE with the KL divergence as the cost function [14]. The t-SNE technique effectively maintains the relative distances between neighboring data points, accentuating local structure over global structure. Initially, when employing t-SNE scatter plots, it becomes straightforward to visualize the distinct exclusive regions for the two categories. However, due to the vast number of data points, numerous overlaps are expected, making it difficult to convey the data point density through the scatter plot. As a result, we opt to transform the t-SNE embedding into KDE contour plots to identify regions of interest with diverse probabilities. This visualization approach establishes a non-linear spatial space encompassing all cell data points from the whole input dataset.

Figure 2: Counting the density of neutrophils is one of the key biomarkers to identify CD, which is a well-known problem. We would like to understand if there is a broader morphometry existing within the cell local arrangement.

### Quantification

We investigate regions of interest (ROI) of the t-SNE visualization using bounding boxes (BBox). To comprehend the occurrence probability of a specific 10-nearest neighbor (10-NN) pattern in two different CD activity groups, we calculate the odds ratio. The odds ratio is defined as the fraction of cells from CD activity group 1 divided by the fraction of cells from CD activity group 2 within the BBox.

## 3 Experiments and Results

### Dataset

CD can manifest anywhere in the gastrointestinal tract. In our study, we applied our proposed workflow to two datasets acquired from different institutions, with a specific focus on the anatomical region of the rectum.
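Before turning to the datasets, the following is a minimal sketch (assuming NumPy/SciPy) of the 10-NN signature and the odds-ratio computation described above; function and variable names are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

CELL_TYPES = ["neu", "epi", "lym", "pla", "eos", "con"]

def nn_signatures(xy, labels):
    """xy: (N, 2) global cell coordinates; labels: (N,) ints in [0, 6).
    Returns an (N, 6) count matrix over each cell's 10 exclusive neighbors."""
    tree = cKDTree(xy)                      # KD-tree over cell coordinates
    _, idx = tree.query(xy, k=11)           # 11 nearest; the first is the cell itself
    neigh = idx[:, 1:]                      # keep the 10 exclusive neighbors
    sig = np.zeros((len(xy), len(CELL_TYPES)), dtype=int)
    for t in range(len(CELL_TYPES)):
        sig[:, t] = (labels[neigh] == t).sum(axis=1)
    return sig                              # each row sums to 10

def odds_ratio(n_in_box_g1, n_total_g1, n_in_box_g2, n_total_g2):
    """Fraction of group-1 cells inside a t-SNE bounding box divided by the
    fraction of group-2 cells inside the same box."""
    return (n_in_box_g1 / n_total_g1) / (n_in_box_g2 / n_total_g2)
```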
The first dataset was obtained from the Emory University School of Medicine (Emory), also in the Gut Cell Atlas Crohn's Disease Consortium, and includes 8 biopsies from children. Among these, 4 biopsies are from a healthy control group, while the remaining 4 biopsies are classified as CD active. The second dataset comprises 143 biopsies stained with H&E, obtained in a deidentified form from Vanderbilt University Medical Center (VUMC) under Institutional Review Board (IRB) approval, specifically Vanderbilt IRB #191738 and #191777 [15]. All biopsies were collected from adult CD patients and scored by a single pathologist, resulting in 97 biopsies classified as normal and 46 biopsies marked as active, categorized into subcategories of mild, moderate, and severe. It is worth noting that the VUMC dataset lacks a healthy control group. Thus, for a "normal" comparator group, the term "CD normal" refers to patients diagnosed with CD for whom the pathologic review of collected tissues was normal, showing no acute or chronic changes on pathology due to medical therapies.

Figure 3: This figure illustrates the step-by-step workflow of the whole study data cohort exploration process, showcasing the rigorous methodologies employed in image analysis, cell classification, nearest neighbor calculation, visualization, and ultimate pattern recognition using a graph-based nearest neighbor signature approach.

### Experiment design

For each cell type category, which includes neutrophils, epithelial cells, lymphocytes, plasma cells, eosinophils, and connective tissue, we conduct t-SNE visualization and compute the odds ratios for the research institutes separately. Table 1 presents a summary of the data points for each site and the six cell subtype categories. To explore the potential ROIs, we investigate them in both the t-SNE scatter plots, where we identify exclusive regions using BBox, and the t-SNE KDE contour plots, where our attention is on areas with relatively high probabilities. All experiments were executed on a workstation with 96 CPU cores, 250 GiB of RAM, and an NVIDIA RTX A6000 GPU.

### Results

We provide the outcomes of the spatial pattern analysis pertaining to individual cell types across the two institutes as follows.

**Neutrophils (Figure 4)**. The VUMC dataset reveals that CD active tissues tend to exhibit a higher count of 10-NN shapes containing neutrophils and lymphocytes, as well as plasma, with a moderate amount of connective tissue. Moreover, it indicates that when lymphocytes dominate the surroundings without epithelial cells, CD active occurrences surpass those in CD normal tissue. CD normal tissues, on the other hand, tend to have a greater involvement of epithelial cells. In the Emory dataset, it is evident that lymphocytes play a crucial role in the 10-NN interaction with neutrophils. Interestingly, even when both lymphocytes and epithelial cells are dominant, the prevalence remains higher in CD active cases. Furthermore, the dominance of lymphocytes and plasma is associated with a high occurrence rate of healthy controls.

**Eosinophils (Figure 5)**. In the VUMC dataset, there is a notable significance of lymphocytes and plasma in CD active tissues, whereas eosinophils in CD normal tissues tend to show greater involvement with epithelial and plasma components. However, the Emory dataset reveals a contrasting pattern.

**Connective (Figure 6)**. In the VUMC dataset, shape patterns between CD active and CD normal tissues are remarkably similar.
CD active tissues show moderate elevation in lymphocyte and connective tissue presence, while CD normal tissues display increased involvement of epithelial elements. Conversely, the Emory dataset presents greater diversity. It mirrors the VUMC CD active pattern; moreover, dominant connective tissue aligns with healthy controls, and higher plasma presence associates with the control group.

**Plasma (Figure 7)**. No distinct pattern is identified that is specific to CD active tissues. In the case of CD normal tissues, there is a tendency for an even distribution of involvement among epithelial, lymphocyte, plasma, and connective tissue. This pattern is similarly observed in the Emory dataset. However, when the surroundings solely consist of lymphocytes and plasma, this combination appears more frequently in CD active cases.

**Lymphocytes (Figure 8)**. In the VUMC dataset, no notable pattern difference in lymphocyte distribution between CD active and CD normal tissues is discerned, where lymphocytes dominate the 10-NN arrangement. This similarity in pattern is also observed in the Emory dataset for CD active cases. However, within the Emory dataset, healthy control cases exhibit a more varied pattern, with the additional presence of plasma and connective tissue in the local environment, alongside lymphocytes.

**Epithelial (Figure 9)**. Both datasets from the different institutions consistently indicate that epithelial cells predominantly interact with other epithelial cells, regardless of whether the tissue is CD active, CD normal, or from a healthy control.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{**Institute 1: VUMC**} & \multicolumn{2}{c|}{**Institute 2: Emory**} \\
\cline{2-5}
\multicolumn{1}{c|}{} & **CD normal** & **CD active** & **Healthy control** & **CD active** \\
\hline
\multicolumn{1}{|c|}{Sample biopsies} & 97 & 46 & 4 & 4 \\
\hline
\multicolumn{1}{|c|}{Neutrophils (neu)} & 16,424 & 33,103 & 384 & 1,022 \\
\hline
\multicolumn{1}{|c|}{Epithelial cell (epi)} & 5,528,718 & 2,409,172 & 50,479 & 209,950 \\
\hline
\multicolumn{1}{|c|}{Lymphocytes (lym)} & 4,037,761 & 2,946,595 & 37,326 & 379,809 \\
\hline
\multicolumn{1}{|c|}{Plasma cell (pla)} & 738,568 & 607,164 & 9,651 & 21,486 \\
\hline
\multicolumn{1}{|c|}{Eosinophils (eos)} & 50,173 & 61,557 & 1,148 & 1,437 \\
\hline
\multicolumn{1}{|c|}{Connective tissue (con)} & 1,406,897 & 887,472 & 17,838 & 60,172 \\
\hline
\end{tabular}
\end{table}
Table 1: Data summary and cell nuclei type counts across different institutions and patient groups.

## 4 Discussion and Conclusion

In this paper, we delve into the exploration of a broader morphometric aspect within the local arrangement of cells, shifting the focus from neutrophil counting to the realm of CD investigation. Our proposed approach introduces a graph-based signature that is customized to portray a distinct local arrangement for each cell. Additionally, we have meticulously designed a comprehensive visualization workflow that illuminates the intricate interconnections among signature patterns within both the CD active patient and CD normal cohorts among adults, as well as between the CD active patient cohort and healthy controls among children. Our study encompasses investigations carried out across two distinct research institutes, centered around the rectum region. The diversified conclusions stemming from shape patterns, extending beyond neutrophils, can be attributed to the diverse array of data acquisition methods.
For instance, the VUMC dataset primarily encompasses adult samples and lacks healthy patient tissue, whereas the control group comprises CD biopsies classified as normal. In contrast, the Emory dataset is sourced from pediatric samples, introducing an additional layer of variation in the data collection process. In conclusion, our findings indicate that cells do not consistently co-locate in the same manner between the CD active and CD normal cohorts for the adult rectum dataset, or between the CD active and healthy control cohorts for children. To delve deeper into these distinct shape patterns, we can leverage RNA-seq to identify cell types, unveil gene expression patterns, facilitate biomarker discovery, enable spatial mapping, and uncover disease mechanisms. Moreover, integrating morphological spatial features, such as considering actual cell distances in the neighborhood of the spatial signature, could potentially enhance our analysis. The observed disparities in spatial distributions across the two research institutes underscore the significance of collaborative endeavors among healthcare institutions.

Figure 4: Visualization and quantification of neutrophils surrounding a 10-NN shape pattern recognition.

## 5 Acknowledgments

This research was supported by The Leona M. and Harry B. Helmsley Charitable Trust grant G-1903-03793 and G-2103-05128, NSF CAREER 1452485, NSF 2040462, and in part using the resources of the Advanced Computing Center for Research and Education (ACCRE) at Vanderbilt University, Nashville, TN. This project was supported in part by the National Center for Research Resources, Grant UL1 RR024975-01, and is now at the National Center for Advancing Translational Sciences, Grant 2 UL1 TR000445-06, the National Institute of Diabetes and Digestive and Kidney Diseases, the Department of Veterans Affairs I01BX004366, I01CX002171, and I01CX002573. The de-identified imaging dataset(s) used for the analysis described were obtained from ImageVU, a research resource supported by the VICTR CTSA award (ULTR000445 from NCATS/NIH), Vanderbilt University Medical Center institutional funding and Patient-Centered Outcomes Research Institute (PCORI; contract CDRN-1306-04869). This work is supported by NIH grants T32GM007347, R01DK135597, and R01DK103831. We extend gratitude to NVIDIA for their support by means of the NVIDIA hardware grant. ChatGPT was utilized for proofreading and grammar checking, and the results were validated by human review to ensure accuracy of facts and intent.

Figure 5: Visualization and quantification of eosinophils surrounding a 10-NN shape pattern recognition.

## Appendix A Supplementary Information on Spatial Pattern Analysis

Figure 7: Visualization and quantification of plasma cells surrounding a 10-NN shape pattern recognition.

Figure 8: Visualization and quantification of lymphocytes surrounding a 10-NN shape pattern recognition.
2308.08729
Black Hole Horizons and their Mechanics
Black holes are often characterized by event horizons, following the literature that laid the mathematical foundations of the subject in the 1970s. However, black hole event horizons have two fundamental conceptual limitations. First, they are defined only in space-times that admit a future conformal boundary. Second, they are teleological; their formation and growth is not determined by local physics but depends on what could happen in the distant future. Therefore, event horizons have not played much of a role in the recent theoretical advances that were sparked by discoveries of the LIGO-Virgo collaborations. This article focuses on quasi-local horizons that have been used instead. Laws governing them -- mechanics of quasi-local horizons -- generalize those that were first found using event horizons. These results, obtained over the last two decades or so, have provided much insight into dynamical predictions of general relativity in the fully nonlinear regime. The article summarizes the deep and multi-faceted interplay between geometry and physics that has emerged. Conceptually, quasi-local horizons also play a key role in the discussion of the quantum evaporation of black holes. However, due to space limitations, this application is only briefly discussed in Section 6.
Abhay Ashtekar
2023-08-17T01:58:34Z
http://arxiv.org/abs/2308.08729v1
# Black Hole Horizons

###### Abstract

Black holes are often characterized by event horizons, following the literature that laid the mathematical foundations of the subject in the 1970s. However, black hole event horizons have two fundamental conceptual limitations. First, they are defined only in space-times that admit a future conformal boundary. Second, they are teleological; their formation and growth is not determined by local physics but depends on what could happen in the distant future. Therefore, event horizons have not played much of a role in the recent theoretical advances that were sparked by discoveries of the LIGO-Virgo collaborations. This article focuses on quasi-local horizons that have been used instead. Laws governing them -- mechanics of quasi-local horizons -- generalize those that were first found using event horizons. These results, obtained over the last two decades or so, have provided much insight into dynamical predictions of general relativity in the fully nonlinear regime. The article summarizes the deep and multi-faceted interplay between geometry and physics that has emerged. Conceptually, quasi-local horizons also play a key role in the discussion of the quantum evaporation of black holes. However, due to space limitations, this application is only briefly discussed in Section 6.

## 1 Introduction

By now, gravitational wave observations have established that black holes (BHs) with tens of solar masses are ubiquitous in our universe, and the Event Horizon Telescope (EHT) has provided us with images of light rings around two supermassive black holes. But what exactly is a BH? What exactly is it that forms as a result of a gravitational collapse and evaporates due to quantum radiation? The common answer is: _Event Horizons_ (EHs). Much of the rationale behind this conviction comes from the laws of BH mechanics that dictate the behavior of EHs in equilibrium, under small perturbations away from equilibrium, and in fully dynamical situations. While these laws are consequences of classical general relativity alone, they have close similarity with the laws of thermodynamics. The origin of this seemingly strange coincidence lies in quantum physics. Thanks to these insights, in the 1970s and 1980s, EHs were taken to be synonymous with BHs. However, EHs are rather enigmatic. In particular, they have _teleological_ properties. For example, an EH may be forming and growing in your very room _in anticipation_ of a gravitational collapse that may take place a billion years from now! This phenomenon is concretely illustrated by the collapse of a spherical null fluid, described by the Vaidya solution to Einstein's equations, where one can explicitly see that the EH first forms and _grows_ in a flat (i.e. Minkowski) region of space-time, where nothing at all is happening. In the traditional descriptions of physical phenomena such teleology is regarded as spooky, and avoided studiously. For EHs, it occurs because the notion is unreasonably global; one needs to know the space-time structure all the way to the infinite future to say if it admits an EH. In particular, in the simulations of compact binary mergers one cannot use EHs to locate either the progenitors or the final remnant as one numerically time-evolves the system, since EHs can only be identified at the _end_ of the simulation, as an afterthought. Given these serious limitations, a question naturally arises: Are there alternate, quasi-local horizons that can serve to characterize BHs without teleology?
Can one achieve this and still maintain the attractive features of the EH mechanics? Advances over the past two decades have shown that the answer is in the affirmative. We will focus on these developments. We first recall the notion of EHs, then introduce quasi-local horizons and discuss their mechanics. For brevity, all manifolds (and fields) are assumed to be smooth, with a metric of signature -,+,+,+. An arrow under a space-time index denotes the pull-back of that index to the horizon. We assume that matter satisfies a weak version of the dominant energy condition: for each future directed causal vector \(t^{a}\), \(-T_{ab}t^{b}\) is also future directed and causal _at the horizon_. We will set the speed of light \(c\) and Boltzmann's constant \(k_{\rm B}\) to one. Finally, unless otherwise stated, space-time is assumed to be 4-dimensional, and \(g_{ab}\) is assumed to satisfy Einstein's equations with zero cosmological constant.

## 2 Event Horizons

To introduce EHs, one starts with Penrose's conformal boundary \(\mathscr{I}^{+}\), representing future null infinity of asymptotically flat space-times. A _black-hole region_ \(\mathscr{B}\) of a space-time \((\mathbb{M},g_{ab})\) is defined as \(\mathscr{B}=\mathbb{M}\setminus I^{-}(\mathscr{I}^{+})\), where \(I^{-}\) denotes 'chronological past'.1 The boundary \(\partial\mathscr{B}\) of the black hole region is called the _event horizon_ and denoted by \(\mathscr{E}\). Thus, \(\mathscr{E}\) is the future boundary of the causal past of \(\mathscr{I}^{+}\). It therefore follows that \(\mathscr{E}\) is a null 3-surface, ruled by null geodesics that are inextensible to the future. If the space-time is globally hyperbolic, an 'instant of time' is represented by a Cauchy surface \(M\). The intersection of \(\mathscr{B}\) with \(M\) may have several disjoint components, each representing a black hole at that instant of time. If \(M^{\prime}\) is a Cauchy surface to the future of \(M\), the number of disjoint components of \(M^{\prime}\cap\mathscr{B}\) in the causal future of \(M\cap\mathscr{B}\) must be less than or equal to those of \(M\cap\mathscr{B}\) (see Hawking & Ellis 1973). Thus, black holes can merge but cannot bifurcate. (By a time reversal, i.e. by replacing \(\mathscr{I}^{+}\) with \(\mathscr{I}^{-}\) and \(I^{-}\) with \(I^{+}\), one can define a white hole region \(\mathscr{W}\). However, here we will focus only on black holes.) Footnote 1: See Hawking & Ellis (1973). Let \((\mathbb{M},g_{ab})\) admit an event horizon \(\mathscr{E}\). Denote by \(\ell^{a}\) a geodesic null normal to \(\mathscr{E}\). Its expansion is defined as \(\theta_{(\ell)}:=q^{ab}\nabla_{a}\ell_{b}\), where \(q^{ab}\) is any inverse of the degenerate intrinsic metric \(q_{ab}\) on \(\mathscr{E}\), and determines the rate of change of the area-element of \(\mathscr{E}\) along \(\ell^{a}\). Assuming that the null energy condition and Einstein's equations hold, the Raychaudhuri equation immediately implies that if \(\theta_{(\ell)}\) were to become negative somewhere it would become infinite within a finite affine parameter. Hawking showed that, if there is a globally hyperbolic region containing \(I^{-}(\mathscr{I}^{+})\cup\mathscr{E}\) (i.e., if there are no naked singularities) this cannot happen, whence \(\theta_{(\ell)}\geq 0\) on \(\mathscr{E}\).
Therefore, if a cross-section \(S_{2}\) of \(\mathscr{E}\) is to the future of a cross-section \(S_{1}\), we must have \(a_{S_{2}}\geq a_{S_{1}}\). Thus, in any (i.e., not necessarily infinitesimal) dynamical process, the change \(\Delta a\) in the horizon area is always non-negative. This result is known as the _second law of black hole mechanics_. As in the first law, the analog of entropy is the horizon area. Subsequently, Hawking's seminal discovery of black hole evaporation (Hawking 1974) provided the precise relation between the laws of EH mechanics and thermodynamics: It is \(\kappa\hbar/2\pi\) that plays the role of the thermodynamic temperature \(T\) and \(a/4G\hbar=a/4\ell_{\rm planck}^{2}\) that plays the role of entropy \(S\). The factors of \(\hbar\) (which are essential on dimensional grounds) indicate that the resemblance of the laws of EH mechanics to those of thermodynamics is an imprint left by quantum mechanics on classical physics. In spite of this deep and fascinating feature, the analogy between EH mechanics and thermodynamics has some important limitations. Let us begin with the first law. In thermodynamics, one only assumes that the physical system under consideration is in equilibrium, while in the discussion of the first law of EH mechanics one assumes that the entire space-time (including, for example, the far away matter) is stationary, not just the EH. Secondly, while all quantities that enter the statement of the first law of thermodynamics refer just to the system under consideration, now \(m,J\) refer not just to the BH but to everything contained in the space-time, since they are calculated at spatial infinity. Finally, in the second law of thermodynamics, the change in entropy of a given system is directly related to physical processes occurring in its neighborhood. For event horizons, as we saw, this is not the case; the area of cross-sections of \(\mathscr{E}\) can increase for teleological reasons, in anticipation of a physical process that would occur in, say, a billion years to the future, even when nothing at all is falling across \(\mathscr{E}\) for all these years! Quasi-local horizons were introduced to improve on this situation. _Isolated horizons_, denoted by \(\Delta\) in the literature, provide a quasi-local description of situations in which it is only the BH that is in equilibrium; there may be dynamical processes away from it. A Dynamical Horizon \(\mathscr{H}\) represents BHs whose horizon structure is changing in time due to the full non-linear dynamics of GR. Change in its area is due to instantaneous fluxes across \(\mathscr{H}\); there is no teleology.

## 3 Isolated Horizons: Local Equilibrium

The key idea here is to drop the requirement that the entire space-time should admit a stationary Killing field and ask only that the intrinsic horizon geometry be time-independent. Consider a null, 3-dimensional sub-manifold \(\Delta\) with topology \(\mathbb{S}^{2}\times\mathbb{R}\) in a space-time \((\mathbb{M},g_{ab})\), and denote future pointing normals to \(\Delta\) by \(\ell^{a}\). The pull-back \(q_{ab}:=g_{\underset{\leftarrow}{ab}}\) of the space-time metric to \(\Delta\) is the intrinsic, degenerate 'metric' of \(\Delta\) with signature 0,+,+. (Recall that an arrow under a space-time index denotes the pull-back of that index to the horizon.) The first condition is that \(q_{ab}\) be 'time-independent', i.e. \(\mathcal{L}_{\ell}\,q_{ab}=0\) on \(\Delta\).
This condition implies that the area of any 2-sphere cross-section of \(\Delta\) is constant. Therefore, such sub-manifolds \(\Delta\) are referred to as _non-expanding horizons_ (NEHs). One can show that every NEH inherits a natural derivative operator \(D\), obtained by the restriction of the space-time derivative operator \(\nabla\) to \(\Delta\). While \(D\) is compatible with \(q_{ab}\), i.e. \(D_{a}q_{bc}=0\), it is not uniquely determined by this property because \(q_{ab}\) is degenerate. Thus, \(D\) has extra information, not contained in \(q_{ab}\). The pair \((q_{ab},D)\) is said to determine the _intrinsic geometry_ of the null surface \(\Delta\). This notion leads to a natural notion of a horizon in local equilibrium. Definition 1: A NEH \(\Delta\) is said to be an _isolated horizon_ (IH) if it admits a null normal \(\ell^{a}\) such that: \(\mathcal{L}_{\ell}\,q_{ab}=0\) and \([\mathcal{L}_{\ell},\,D]=0\) on \(\Delta\). One can show that, generically, this null normal field \(\ell^{a}\) is unique up to rescalings by positive _constants_ (Ashtekar, Beetle, Lewandowski 2002). Note that the definition refers just to \(\Delta\); in particular, \((\mathbb{M},g_{ab})\) is not required to be asymptotically flat and there is no longer any teleological feature. Since \(\Delta\) is null and \(\mathcal{L}_{\ell}q_{ab}=0\), the area of _any_ of its cross-sections is the same, denoted by \(a_{\Delta}\). As one would expect, one can show that there is no flux of gravitational radiation or matter across \(\Delta\). This captures the idea that the black hole itself is in equilibrium. Thus, _Definition 1_ extracts from the notion of a Killing horizon just a 'tiny part' that refers only to the intrinsic geometry of \(\Delta\). As a result, every Killing horizon is, in particular, an IH, whence the EH of any stationary BH is an IH. However, IHs are more general. For example, the multi-black hole solution of Kastor and Traschen admits IHs which are not Killing horizons. Also, a space-time with an IH \(\Delta\) can admit gravitational radiation and dynamical matter fields away from \(\Delta\). In fact, as a family of Robinson-Trautman space-times illustrates, gravitational radiation could even be present arbitrarily close to \(\Delta\). Because of these possibilities, the transition from EHs of stationary space-times to IHs represents a significant generalization of black hole mechanics.2 Footnote 2: In fact the derivation of the zeroth and the first law requires slightly weaker assumptions, encoded in the notion of a 'weakly IH' (Ashtekar et al 2000, 2001). An immediate consequence of the requirement \(\mathcal{L}_{\ell}q_{ab}=0\) is that there exists a 1-form \(\omega_{a}\) on \(\Delta\) such that \(D_{a}\ell^{b}=\omega_{a}\ell^{b}\). Following the definition of \(\kappa\) on a Killing horizon, the _surface gravity_ \(\kappa_{(\ell)}\) of \((\Delta,\ell)\) is defined as \(\kappa_{(\ell)}=\omega_{a}\ell^{a}\). Again, under \(\ell^{a}\to c\ell^{a}\), we have \(\kappa_{(c\ell)}=c\kappa_{(\ell)}\). Together with Einstein's equations, the two conditions of _Definition 1_ imply \(\mathcal{L}_{\ell}\,\omega_{a}=0\) and \(\ell^{a}D_{[a}\omega_{b]}=0\). The Cartan identity relating the Lie and exterior derivative now yields \[D_{a}(\omega_{b}\ell^{b})\equiv D_{a}\kappa_{(\ell)}=0\,. \tag{3.4}\] _Thus, surface gravity is constant on every isolated horizon_. This is the zeroth law, extended to horizons representing local equilibrium.
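Although these are statements of classical general relativity, the identifications \(T=\kappa\hbar/2\pi\) and \(S=a/4G\hbar\) recalled in Section 2 fix concrete numbers once \(c\) and \(k_{\rm B}\) are restored. A short numerical sketch (in Python, using standard SI values of the constants) illustrates the scales for a Schwarzschild horizon; for one solar mass it gives the well-known \(T\sim 6\times 10^{-8}\,\)K and \(S/k_{\rm B}\sim 10^{77}\):

```python
import math

G, c, hbar, kB = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23  # SI units
M_sun = 1.989e30  # kg

def schwarzschild_horizon(M):
    kappa = c**4 / (4 * G * M)                   # surface gravity
    area = 16 * math.pi * (G * M / c**2)**2      # a = 4*pi*(2GM/c^2)^2
    T = hbar * kappa / (2 * math.pi * kB * c)    # Hawking temperature [K]
    S = kB * area * c**3 / (4 * G * hbar)        # a/4 in Planck areas, in J/K
    return kappa, area, T, S

_, _, T, S = schwarzschild_horizon(M_sun)
print(f"T = {T:.2e} K, S/kB = {S/kB:.2e}")       # ~6.2e-8 K, ~1.0e77
```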
In the presence of an electromagnetic field, _Definition 1_ and the field equations imply: \({\cal L}\,_{\ell}\,F_{ab}=0\) and \(\ell^{a}F_{ab}=0\). The first of these equations implies that one can always choose a gauge in which \({\cal L}\,_{\ell}A_{\,a}=0\). By the Cartan identity it then follows that the electrostatic potential \(\Phi_{(\ell)}:=\widetilde{A}_{a}\ell^{a}\) is constant on the horizon. This is the Maxwell analog of the zeroth law. In this setting, the first law is derived using a Hamiltonian framework (Ashtekar et al 2000, 2001). For concreteness and simplicity, let us assume that we are in the asymptotically flat situation and the only matter field present is electromagnetic. One begins by restricting oneself to horizon geometries such that \(\Delta\) admits a rotational vector field \(\varphi^{a}\) satisfying3 \({\cal L}\,_{\varphi}q_{ab}=0\). One then constructs a phase space \(\bf\Gamma\) of gravitational and matter fields such that: i) the space-time manifold \(\mathbb{M}\) admits an internal boundary \(\Delta\) which is an IH; and, ii) all fields satisfy asymptotically flat boundary conditions at infinity. Note that the horizon geometry is allowed to vary from one phase space point to another; the pair \((q_{ab},D)\) induced on \(\Delta\) by the space-time metric has to satisfy only _Definition 1_ and the condition \({\cal L}\,_{\varphi}q_{ab}=0\). Footnote 3: For black hole mechanics, in fact it suffices to assume only that \({\cal L}\,_{\varphi}\epsilon_{ab}=0\) where \(\epsilon_{ab}\) is the intrinsic area 2-form on \(\Delta\). The same is true on dynamical horizons discussed in the next section. Let us begin with angular momentum. Fix a vector field \(\phi^{a}\) on \(\mathbb{M}\) which coincides with the fixed \(\varphi^{a}\) on \(\Delta\) and is an asymptotic rotational symmetry at infinity. (Note that \(\phi^{a}\) is not restricted in any way in the bulk.) Lie derivatives of gravitational and matter fields along \(\phi^{a}\) define a vector field \({\bf X}(\phi)\) on \(\bf\Gamma\). One shows that it is an infinitesimal canonical transformation, i.e., satisfies \({\cal L}\,_{{\bf X}(\phi)}\,{\bf\Omega}=0\) where \(\bf\Omega\) is the symplectic structure on \(\bf\Gamma\). The Hamiltonian \({\bf H}(\phi)\) generating this canonical transformation is given by: \[{\bf H}(\phi)=J^{(\phi)}_{\Delta}-J^{(\phi)}_{\infty}\quad\mbox{where}\quad J^{(\phi)}_{\Delta}=-\frac{1}{8\pi G}\,\oint_{\mathbb{S}}(\omega_{a}\varphi^{a})\,\epsilon-\frac{1}{4\pi}\,\oint_{\mathbb{S}}(A_{a}\varphi^{a})\,{}^{\star}F \tag{3.5}\] where \(J^{(\phi)}_{\infty}\) is the total angular momentum at spatial infinity, \(\mathbb{S}\) is any cross-section of \(\Delta\) and \(\epsilon\) the area element thereon. The term \(J^{(\phi)}_{\Delta}\) depends _only_ on \(\varphi\) and the geometry of \(\Delta\), and is independent of the choice of the 2-sphere \(\mathbb{S}\) made in its evaluation. It is interpreted as the _horizon angular momentum_. It has numerous properties that support this interpretation. In particular, it yields the standard angular momentum expression in Kerr-Newman space-times. To define horizon energy, one has to introduce a 'time-translation' vector field \(t^{a}\). On \(\Delta\), it must be a symmetry of \(q_{ab}\). Since \(\ell^{a}\) and \(\varphi^{a}\) are both horizon symmetries, one sets \(t^{a}=c\ell^{a}+\Omega\varphi^{a}\) on \(\Delta\), for some constants \(c\) and \(\Omega\).
However, unlike \(\phi^{a}\), the restriction of \(t^{a}\) to \(\Delta\) cannot be fixed once and for all but must be allowed to vary from one phase space point to another. In particular, on physical grounds, one expects \(\Omega\) to be zero at a phase space point representing a non-rotating black hole, but non-zero at a point representing a rotating black hole. This freedom in the boundary value of \(t^{a}\) introduces a qualitatively new element. The vector field \({\bf X}(t)\) on \({\bf\Gamma}\) defined by the Lie derivatives of gravitational and matter fields does not, in general, satisfy \({\cal L}\,_{{\bf X}(t)}\,{\bf\Omega}=0\); it need not be an infinitesimal canonical transformation. The necessary and sufficient condition is that \((\kappa_{(c\ell)}/8\pi G)\delta a_{{}_{\Delta}}+\Omega\delta J_{\Delta}+\Phi_{(c\ell)}\delta Q_{\Delta}\) be an exact variation. That is, \({\bf X}(t)\) generates a Hamiltonian flow if and only if there exists a function \(E_{\Delta}^{(t)}\) on \({\bf\Gamma}\) such that \[\delta E_{\Delta}^{(t)}=\frac{\kappa_{(c\ell)}}{8\pi G}\delta a_{{}_{\Delta}}+\Omega\delta J_{\Delta}+\Phi_{(c\ell)}\delta Q_{\Delta} \tag{3.6}\] This is precisely the first law! Thus, the framework provides a deeper insight into the origin of the first law: It is the necessary and sufficient condition for the evolution generated by \(t^{a}\) to be Hamiltonian. Eq. (3.6) is a genuine restriction on the choice of phase space functions \(c\) and \(\Omega\), i.e., of restrictions to \(\Delta\) of evolution fields \(t^{a}\). It is easy to verify that \({\mathbb{M}}\) admits many such vector fields. Given one, the Hamiltonian \({\bf H}(t)\) generating the time evolution along \(t^{a}\) takes the form \[{\bf H}(t)\,=\,E_{\infty}^{(t)}-E_{\Delta}^{(t)}\,, \tag{3.7}\] where the expression of \(E_{\Delta}^{(t)}\) refers only to fields on \(\Delta\), reinforcing the interpretation of \(E_{\Delta}^{(t)}\) as the horizon energy. Thus, in general, there is a multitude of first laws, one for each vector field \(t^{a}\), the evolution along which preserves the symplectic structure. In the Einstein-Maxwell theory, given _any_ phase space point, one can choose a canonical boundary value \(t^{a}_{o}\) by exploiting the black hole uniqueness theorem. \(E_{\Delta}^{(t_{o})}\) is then called the horizon mass and denoted simply by \(m_{\Delta}\). In the Kerr-Newman family, \({\bf H}(t_{o})\) vanishes and \(m_{\Delta}\) coincides with the ADM mass \(m_{\infty}\). Similarly, if \(\phi^{a}\) is chosen to be a global rotational Killing field, \(J_{\Delta}^{(\phi)}\) equals \(J_{\infty}^{(\phi)}\). However, in more general space-times where there are matter fields or gravitational radiation outside \(\Delta\), these equalities do not hold. The \(m_{{}_{\Delta}}\) and \(J_{\Delta}\) that enter the IH first law (3.6) represent quantities associated with the _horizon alone_, while \(m\) and \(J\) that feature in the EH first law (2.3) are the _total_ mass and angular momentum in the space-time, measured at spatial infinity, including contributions from matter fields that may be present in the exterior region. When the uniqueness theorem fails, as for example in the Einstein-Yang-Mills-Higgs theory, the infinite family of first laws continues to hold, but the horizon mass \(m_{\Delta}\) becomes ambiguous. Interestingly, these ambiguities can be exploited to relate properties of hairy black holes with those of the corresponding solitons. (For a summary, see Ashtekar & Krishnan (2004).)
## 4 Dynamical situations

It is tempting to ask if there is a local physical process directly responsible for the growth of horizon area (as there is for the change of entropy of a closed thermodynamical system). As we saw, the answer is in the negative for EHs since they can grow in a flat portion of space-time. However, one can introduce quasi-local horizons also in dynamical situations and obtain the desired result (Ashtekar & Krishnan, 2003). These constructions are motivated in part by earlier ideas introduced by Hayward (1994). Since the subject of black hole dynamics is rich and diverse, one cannot incorporate all its aspects in a single, concise treatment. In this article we focus on classical aspects. For a summary of the role played by quasi-local horizons in quantum dynamics of black holes, see Ashtekar 2020. Definition 2: A 3-dimensional space-like sub-manifold \(\mathscr{H}\) of \((\mathbb{M},g_{ab})\) is said to be a _dynamical horizon_ (DH) if it admits a foliation by compact 2-manifolds \(\mathbb{S}\) (without boundary) such that the expansion \(\theta_{(\ell)}\) of one (future directed) null normal field \(\ell^{a}\) to \(\mathbb{S}\) vanishes and the expansion of the other (future directed) null normal field, \(n^{a}\), is negative. One can show that this foliation of \(\mathscr{H}\) is unique (Ashtekar & Galloway, 2005) and that \(\mathbb{S}\) has the topology of a 2-sphere (except in some degenerate and physically over-restrictive cases when it has the topology of a 2-torus). Each leaf \(\mathbb{S}\) is a marginally trapped surface (MTS) and will be referred to as a _cut_ of \(\mathscr{H}\). Unlike event horizons \(\mathscr{E}\), DHs \(\mathscr{H}\) are locally defined and do not display any teleological feature. In particular, they _cannot_ lie in a flat portion of space-time. DHs arise in numerical simulations of binary black hole mergers as world tubes of MTSs. If an MTS is located on a Cauchy slice during evolution, generically it persists (Andersson et al, 2009). In the early days, numerical simulations focused on outermost MTSs on the 3+1 foliation they used. This led to a general belief that MTSs jump abruptly when a common horizon is formed. More recent high precision numerical work has shown that this conclusion is incorrect. In fact, the world tubes of MTSs associated with individual black holes persist and join on continuously to the final, common world tube representing the remnant (Pook-Kolb et al, 2019, 2021). During this process, the world tubes are not everywhere space-like, but their early and late portions are. Thus, the individual world tubes of MTSs representing the progenitors while they are well separated, as well as the final portion of the world tube representing the remnant, are DHs. As the remnant settles down, its DH approaches the EH. During the dynamical phase, \(\mathscr{H}\) lies inside \(\mathscr{E}\). _Definition 2_ immediately implies that the area of cuts of \(\mathscr{H}\) increases monotonically along the 'outward direction' defined by the projection of \(\ell^{a}\) on \(\mathscr{H}\). More importantly, a detailed analysis shows that this change is directly related to the flux of energy falling across \(\mathscr{H}\). Denote by \(R\) the area radius of the MTS cuts \(\mathbb{S}\) of \(\mathscr{H}\), so that \(a_{\mathbb{S}}=4\pi R^{2}\). Let \(N\) denote the norm of \(\partial_{a}R\) and \(\mathscr{H}_{1,2}\) the portion of \(\mathscr{H}\) bounded by two cross-sections \(\mathbb{S}_{1}\) and \(\mathbb{S}_{2}\).
The appropriate notion of energy flux turns out to be associated with the vector field \(N\ell^{a}\) where \(\ell^{a}\) is normalized such that its projection on \(\mathscr{H}\) is the unit normal \(\hat{r}^{a}\) to the cuts \(\mathbb{S}\). In the generic and physically interesting case when \(\mathbb{S}\) is a 2-sphere, the Gauss and the Codazzi (i.e. constraint) equations imply: \[\frac{1}{2G}\left(R_{2}-R_{1}\right)=\int_{\mathscr{H}_{1,2}}T_{ab}\,N\ell^{a}\hat{\tau}^{b}d^{3}V+\frac{1}{16\pi G}\,\int_{\mathscr{H}_{1,2}}N\,\left(\sigma_{ab}\sigma^{ab}+2\zeta_{a}\zeta^{a}\right)\,d^{3}V. \tag{4.8}\] Here \(\hat{\tau}^{a}\) is the unit normal to \(\mathscr{H}\), \(\sigma^{ab}\) is the shear of \(\ell^{a}\) (i.e., the trace-free part of \(q^{am}q^{bn}\nabla_{m}\ell_{n}\)) and \(\zeta^{a}=q^{ab}\hat{r}^{c}\nabla_{c}\ell_{b}\), where \(q^{ab}\) is the projector onto the tangent space of the cuts \(\mathbb{S}\). The first integral on the right can be directly interpreted as the flux across \(\mathscr{H}_{1,2}\) of matter-energy (relative to the causal vector field \(N\ell^{a}\)). The second term is purely geometric and is interpreted as the flux of energy carried by gravitational waves across \(\mathscr{H}_{1,2}\). It has several properties which support this interpretation. Thus, not only does the second law of black hole mechanics hold for a dynamical horizon \(\mathscr{H}\), but the 'cause' of the increase in the area can be directly traced to physical processes happening near \(\mathscr{H}\). Another natural question is whether the first law (3.6) can be generalized to fully dynamical situations, where \(\delta\) is replaced by a finite transition. Again, the answer is in the affirmative. We will outline the idea for the case when there are no gauge fields on \(\mathscr{H}\). As with IHs, to have a well-defined notion of angular momentum, let us suppose that the intrinsic 3-metric on \(\mathscr{H}\) admits a rotational Killing field \(\varphi\). Then, the angular momentum associated with any cut \(\mathbb{S}\) is given by \[J^{(\varphi)}_{\mathbb{S}}=-\frac{1}{8\pi G}\oint_{\mathbb{S}}K_{ab}\varphi^{a}\hat{r}^{\,b}\,d^{2}V\,\equiv\frac{1}{8\pi G}\oint_{\mathbb{S}}j^{(\varphi)}d^{2}V\,, \tag{4.9}\] where \(K_{ab}\) is the extrinsic curvature of \(\mathscr{H}\) in \((\mathbb{M},g_{ab})\) and \(j^{(\varphi)}\) is interpreted as 'the angular momentum density'. Now, in the Kerr family, the mass, surface gravity and the angular velocity can be unambiguously expressed as well-defined functions \(\bar{m}(a,J)\), \(\bar{\kappa}(a,J)\) and \(\bar{\Omega}(a,J)\) of the horizon area \(a\) and angular momentum \(J\). The idea is to use these expressions to associate mass, surface gravity and angular velocity with each cut of \(\mathscr{H}\). Intuitively, this corresponds to identifying the geometrical fields on each MTS of \(\mathscr{H}\) as providing the geometry of a Kerr IH, so that the evolution along \(\mathscr{H}\) is seen as providing a 1-parameter family of Kerr IHs.
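For concreteness, in the Kerr family these functions take a simple closed form when written (with \(G=c=1\)) in terms of the area radius \(R\), where \(a=4\pi R^{2}\): \(\bar{m}=\sqrt{R^{4}+4J^{2}}/2R\), \(\bar{\kappa}=(R^{4}-4J^{2})/2R^{3}\sqrt{R^{4}+4J^{2}}\) and \(\bar{\Omega}=2J/R\sqrt{R^{4}+4J^{2}}\). A minimal numerical sketch (the parameter values are chosen arbitrarily) checks that they satisfy the first law \(\delta\bar{m}=(\bar{\kappa}/8\pi)\delta a+\bar{\Omega}\delta J\) to leading order:

```python
import numpy as np

def m_bar(R, J):
    return np.sqrt(R**4 + 4 * J**2) / (2 * R)

def kappa_bar(R, J):
    return (R**4 - 4 * J**2) / (2 * R**3 * np.sqrt(R**4 + 4 * J**2))

def omega_bar(R, J):
    return 2 * J / (R * np.sqrt(R**4 + 4 * J**2))

R, J, dR, dJ = 2.0, 0.6, 1e-6, 1e-6
delta_a = 8 * np.pi * R * dR                   # from a = 4*pi*R^2
lhs = m_bar(R + dR, J + dJ) - m_bar(R, J)      # finite-difference delta m
rhs = kappa_bar(R, J) / (8 * np.pi) * delta_a + omega_bar(R, J) * dJ
print(lhs, rhs)                                # agree to O(dR^2, dJ^2)
# Schwarzschild limit: J = 0 gives m = R/2 and kappa = 1/(2R), as expected.
```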
Then, a surprising result is that the difference between the horizon masses associated with cuts \(\mathbb{S}_{1}\) and \(\mathbb{S}_{2}\) can be expressed as the integral of a _locally defined_ flux across the portion \(\mathscr{H}_{1,2}\) of \(\mathscr{H}\) bounded by \(\mathbb{S}_{1}\) and \(\mathbb{S}_{2}\): \[\bar{m}_{2}-\bar{m}_{1} = \frac{1}{8\pi G}\,\int_{\mathscr{H}_{1,2}}\bar{\kappa}da \tag{4.10}\] \[+ \frac{1}{8\pi G}\biggl\{\oint_{\mathbb{S}_{2}}\bar{\Omega}j^{ \varphi}\,d^{2}V-\oint_{\mathbb{S}_{1}}\bar{\Omega}j^{\varphi}\,d^{2}V-\int_{ \bar{\Omega}_{1}}^{\bar{\Omega}_{2}}d\bar{\Omega}\oint_{\mathbb{S}}j^{\varphi }\,d^{2}V\biggr\}\] If the cuts \(\mathbb{S}_{2}\) and \(\mathbb{S}_{1}\) are only infinitesimally separated, this expression reduces precisely to the standard first law involving infinitesimal variations. Therefore, (4.10) is _an integral generalization of the first law_. ## 5 Multipole moments and tomography IHs and DHs have provided valuable insights into a number of issues that cannot be analyzed using EHs because of their teleological nature. For example, the IH framework provides natural boundary conditions (at the inner boundaries) for the initial value problem in which the black holes can be regarded as being in quasi-equilibrium, e.g., when they are sufficiently far away from each other (Jaramillo 2011). Another example is the determination of the mass and the spin of the progenitors as well as of the final remnant in a binary black hole merger using procedures outlined in section 3. These parameters in turn lead to a notion of 'binding energy' between the black holes that is useful in locating 'quasi-circular' orbits. (For details, other applications, and further references, see Ashtekar & Krishnan 2004.) From a conceptual and mathematical viewpoint, a more powerful tool provided by quasi-local horizons is the invariant characterization of the horizon geometry via multipole moments. Properties of multipoles implied by Einstein's equations provide us with a richer body of laws of horizon mechanics. **Multipole Moments:** IHs. Black hole uniqueness theorems for vacuum (and electrovac) solutions assure us that the horizon geometry of isolated stationary black holes is well-described by that of the Kerr(-Newman) EH. However, astrophysical black holes are rarely completely isolated -there can be matter rings or another black hole that can perturb the horizon geometry. The possibility of this rich structure is well-captured by IHs, where the intrinsic metric \(q_{ab}\) and the derivative operator \(D\) are sensitive to these external influences. The challenge is to provide an invariant characterization of these distortions in the shape and spin structure of the horizon. This is provided by two sets of numbers: \(I_{\ell,m}\), representing the shape multipoles, and \(L_{\ell,m}\), representing the spin multipoles. One may be given two space-times with IHs, presented in different coordinates, representing, for example, progenitor black holes in the distant past constituting a binary, or remnants after the merger in the distant future. Do the two IHs represent the same physics? A priori, this question seems difficult to address. Multipole moments provide a natural avenue to answer this question. In each space-time one can separately compute the multipoles using the coordinates and other setup used to describe its IH. Then the IHs are physically the same if and only if the \(I_{\ell,m}\) and \(L_{\ell,m}\) all agree. Recall that there is no flux of matter across an IH. 
However, there may be time independent (Maxwell) fields. For simplicity, let us first suppose that there are no matter fields on the given IH itself (although there may be, e.g., matter rings, outside). Then, Einstein's equations imply that the IH geometry is completely determined by the null-normal \(\ell^{a}\), a number \(\kappa_{(\ell)}\), and two fields defined intrinsically on a 2-sphere cross-section \(\mathbb{S}\) of \(\Delta\): a metric \(\tilde{q}_{ab}\) of signature \(++\), and a 1-form \(\tilde{\omega}_{a}\). (As one might expect, these fields are just pull-backs to \(\mathbb{S}\) of the metric \(q_{ab}\) and the rotation 1-form \(\omega_{a}\) on \(\Delta\), and \(\kappa_{(\ell)}\) is the surface gravity of \(\ell^{a}\).) Since \(\mathbb{S}\) has the topology of a 2-sphere, the invariant information in \(\tilde{q}_{ab}\) is encoded in its scalar curvature \(\tilde{R}\). The invariant information in \(\tilde{\omega}_{a}\) is contained in its curl because \(\tilde{\omega}_{a}\) changes by the addition of a gradient when one changes the initial cross-section \(\mathbb{S}\). Interestingly, on an IH, \(\tilde{R}\) is proportional to the real part of the Newman-Penrose component \(\Psi_{2}\) of the Weyl curvature, and the curl of \(\tilde{\omega}_{a}\) to the imaginary part. Thus, the invariant information in the free data of the IH geometry is neatly encoded in \(\Psi_{2}\), which is unambiguously defined because we have a canonical null vector field \(\ell^{a}\), and the 'radiative' field \({}^{\star}C^{abcd}\ell_{b}\ell_{d}\) on \(\Delta\) -or \(\Psi_{0}\) and \(\Psi_{1}\)- vanishes. The idea is to encode \(\Psi_{2}\) in two sets of numbers, \(I_{\ell,m}\) and \(L_{\ell,m}\), using invariantly defined spherical harmonics. This step can be carried out in different ways. We will discuss the one that has been used most extensively so far. As in the discussion of the first law, one assumes that the metric \(q_{ab}\) on \(\Delta\) is axisymmetric.4 Then, one can introduce a canonical _round_ 2-sphere metric \(\mathring{q}_{ab}\) which shares the rotational Killing field, as well as the area 2-form, with \(q_{ab}\). This construction is invariant in that it does not require introduction of any extra structure. One then uses the spherical harmonics \(\mathring{Y}_{\ell,m}\) of \(\mathring{q}_{ab}\) and sets: Footnote 4: For a discussion of multipoles on a generic IH that is not axisymmetric –in fact on a generic NEH– see Ashtekar, et al (2022a). \[I_{\ell,m}+iL_{\ell,m}\,:=\,-\oint_{\mathbb{S}}\Psi_{2}\,\mathring{Y}_{\ell,m} \mathrm{d}^{2}\tilde{\mathrm{V}} \tag{5.11}\] where \(\mathrm{d}^{2}\tilde{\mathrm{V}}\) is the volume element induced on \(\mathbb{S}\) by \(\tilde{q}_{ab}\) (as well as by \(\mathring{q}_{ab}\)). One can show that the right side is independent of the choice of the cross-section \(\mathbb{S}\); thus the shape and spin multipoles are properties of \(\Delta\) alone. This set characterizes the IH geometry in the following sense: given an IH, one can reconstruct its geometry \((q_{ab},\,D)\) starting from its multipoles \(I_{\ell,m},\,L_{\ell,m}\) (given the vector field \(\ell^{a}\) and two numbers, \(\kappa_{(\ell)}\) and \(a_{\mathbb{S}}\), on a bare manifold \(\mathbb{S}^{2}\times\mathbb{R}\)). These _geometrical_ multipole moments are dimensionless. The _mass and angular momentum_ moments, \(M_{\ell,m}\) and \(J_{\ell,m}\), that are of more direct physical interest can be obtained simply by rescaling the \(I_{\ell,m}\) and \(L_{\ell,m}\) by appropriate dimensionful factors. 
The proportionality factors are determined by analogy to electrodynamics where charge and current moments suffice to determine the source. In electrodynamics, one uses charge and current densities to define the moments. For the IHs under consideration, there are no matter fields on \(\Delta\); the required 'densities' arise from the horizon geometry itself. A comparison between the expressions of angular momentum and the mass of IHs and the integral on the right side of (5.11) yields the required expressions of the 'effective mass and angular momentum densities'. They only involve powers of \(m_{{}_{\Delta}}\) and the area radius \(R_{\Delta}\) of the IH, which then determine the proportionality factors between \(I_{\ell,m}\) and \(M_{\ell,m}\), and between \(L_{\ell,m}\) and \(J_{\ell,m}\). These mass and angular momentum multipoles have several attractive properties. First, the mass dipole and the angular momentum monopole always vanish. The mass monopole equals \(m_{{}_{\Delta}}\), and the angular momentum dipole equals the \(J_{\Delta}^{(\varphi)}\) discussed in section 3. Second, in vacuum, stationary space-times the mass monopole equals the ADM mass \(m_{{}_{\infty}}\) at spatial infinity. Third, in static space-times all angular momentum multipoles vanish. Finally, in vacuum axisymmetric space-times, the angular momentum dipole \(J_{1,0}\) equals \(J_{\infty}\). In particular, then, for the Kerr family, the mass monopole and angular momentum dipole are the expected ones, and the discrete reflection symmetry of the space-time metric implies that all odd mass multipoles and all even angular momentum multipoles vanish. However, there is a subtlety about the higher multipoles whose values are not determined by space-time Killing fields: the multipole moments defined here are the 'source multipoles' that refer to the horizon, and are thus distinct from the 'field multipoles' defined at spatial infinity. In Newtonian gravity the two sets agree. However, in general relativity, the gravitational field outside the horizon also contributes to the field multipoles since they are extracted from the gravitational field _at infinity_. The discrepancy increases with the Kerr parameter \(a\). In the extremal limit, \(a\to 1\), the field mass-quadrupole moment is smaller than the horizon one by a factor of 1.4, while the field angular momentum octupole moment is greater than the horizon one by a factor of 1.14. These differences are conceptually important for certain applications such as the 'self-force' calculations used in the study of extreme mass-ratio binaries, where it is the source, or horizon, multipoles that are relevant. These considerations have been extended to include Maxwell fields at \(\Delta\). Then, in addition to the mass and angular momentum moments, one also has the charge and current multipoles. (For further details on IH multipoles, see Ashtekar et al (2004).) **Multipole Moments:** DHs. Let us now turn to dynamical situations and assume that there are no matter fields on the horizon itself. A black hole may be formed by a gravitational collapse or as a result of a compact binary coalescence. In these processes, a dynamical horizon forms and eventually settles down to a Kerr IH in the distant future. However, as numerical simulations show, initially the shape and spin structure of the DH is quite complicated, and changes rapidly in time. Can one provide an invariant description of the fascinating dynamics in the fully non-linear regime of general relativity? 
The answer is in the affirmative. One can capture the information in the geometry of each \(v=\) const cut \(\mathbb{S}\) of the DH in time-dependent multipoles, i.e., functions \(I_{\ell,m}(v)\), \(L_{\ell,m}(v)\) of time. This 1-parameter family of shape and spin multipoles provides an invariant characterization of the horizon dynamics. (For details, see Ashtekar et al 2013). As in section 4, one can think of each cut \(\mathbb{S}\) of \(\mathscr{H}\) as a cross-section of an isolated horizon \(\Delta\). The shape is again invariantly encoded in the scalar curvature \(\tilde{R}(v)\) of the 2-metric \(\tilde{q}_{ab}(v)\) on \(\mathbb{S}\). However, since \(\tilde{q}_{ab}(v)\) is now time dependent, \(\tilde{R}(v)\) is no longer proportional to \(\mathrm{Re}\Psi_{2}(v)\). As before, the spin structure is encoded in the 'rotation' 1-form, now defined as \(\tilde{\omega}_{a}(v):=-n^{c}\,\tilde{q}_{a}{}^{b}\,\nabla_{b}\ell_{c}(v)\) on \(\mathbb{S}\). (The expression on the right side also yields the rotation 1-form \(\tilde{\omega}_{a}\) on an IH.) However, now there is no gauge freedom in \(\tilde{\omega}_{a}(v)\) because it is tied to a specific cut, i.e., an MTS that has direct physical meaning. Finally, because of time dependence, its curl is no longer given by \(\mathrm{Im}\Psi_{2}(v)\). Therefore, in the definition of multipoles, the seed function that replaces \(\Psi_{2}\) in (5.11) is now given by \(\Phi(v):=-\frac{1}{4}\,\tilde{R}(v)+\frac{i}{2}\tilde{\epsilon}^{ab}\tilde{D} _{a}\tilde{\omega}_{b}(v)\) (which becomes \(v\) independent on an IH and reduces to \(\Psi_{2}\)). This function encodes the information in the spin and mass structure of the DH on each cut \(\mathbb{S}\). Recall that one also needs spherical harmonics to define the multipoles. Now, in the dynamical phase of a collapse or a merger, \(\tilde{q}_{ab}(v)\) is far from being axisymmetric, whence the procedure to extract a canonical round metric used on IHs now fails. However, in the asymptotic future the DH tends to the Kerr IH which _is_ axisymmetric. Therefore, a natural strategy would be to introduce spherical harmonics \(\mathring{Y}_{\ell,m}\) in the asymptotic future and Lie-drag them to the rest of \(\mathscr{H}\) along a suitable vector field \(X^{a}\) on \(\mathscr{H}\). However, for this procedure to be physically meaningful, the vector field \(X^{a}\) has to satisfy several constraints. For example: (i) \(X^{a}\) has to be constructed by a covariant procedure that does not require any additional structure; (ii) the flow it generates has to preserve the foliation of \(\mathscr{H}\) by MTSs \(\mathbb{S}\); (iii) if the DH happens to be axisymmetric, then the flow has to preserve the rotational Killing vector so that the \(\mathring{Y}_{\ell,m}\) obtained by the dragging procedure agree with those constructed on each leaf; etc. It turns out that, thanks to these constraints, \(X^{a}\) is unique up to a rescaling \(X^{a}\to f(v)X^{a}\) for some positive function \(f(v)\), and this small freedom does not affect the \(\mathring{Y}_{\ell,m}\) it defines on all of \(\mathscr{H}\), and hence the multipoles they define. The time-dependent multipoles on a DH are given by: \[I_{\ell,m}(v)+iL_{\ell,m}(v)\,:=\,-\oint_{\mathbb{S}_{v}}\Phi(v)\,\mathring{ Y}_{\ell,m}\,\mathrm{d}^{2}\mathring{\mathrm{V}}. \tag{5.12}\]
Einstein's equations relate the change in the multipoles between two cuts, \(v\!=\!v_{1}\) and \(v\!=\!v_{2}\), to fluxes of geometric fields across the portion \(\mathscr{H}_{1,2}\) of \(\mathscr{H}\) bounded by the two cuts. One thus obtains two infinite families of laws of DH mechanics, each labeled by \(\ell,m\), that supplement the three laws discussed in sections 2 and 3 by carrying much more detailed information about horizon dynamics. These moments have been calculated using numerical simulations of binary black hole mergers. On the one hand, the laws governing the change of multipoles dictated by Einstein's equations serve as a tool to check the accuracy of simulations. On the other hand, the simulations serve to quantify the appearance of distortions in the shape and spin structures of individual horizons of the progenitors as they approach each other, as well as the disappearance of these distortions as the final remnant settles down to the Kerr IH. Let us consider the post-merger phase first and discuss the progenitors in the next sub-section. Gupta et al (2018), for example, provide detailed information on the (\(\ell\)-dependent) decay of the first few multipoles as the remnant DH asymptotes to the Kerr IH. It is often said that the complicated multipolar structure -or the 'hair'- at the formation of the common DH is 'radiated away' to infinity as the DH settles down to the Kerr IH. However, this description of the process is incorrect; no radiation can propagate from a DH (or an EH) to infinity because of the causal structure of space-time. Rather, it is the radiation falling _into the DH_ that irons out the irregularities so that the final multipoles are completely characterized just by the two Kerr parameters, \(M\) and \(a\). Now, as discussed below Eq. (4.8), \(|\sigma_{ab}|^{2}_{\mathscr{H}}\) can be interpreted as the energy density associated with gravitational waves falling _into_ \(\mathscr{H}\) that is directly responsible for the increase in area of the cuts \(\mathbb{S}\). Detailed analysis has brought out correlations between the angular distribution of this infalling flux and the damping of the differences between the DH multipoles and their final Kerr values (Pook-Kolb et al 2020). Interestingly, the damping in the strong field regime is also correlated with the so-called quasi-normal damping of the waveform registered at \(\mathscr{I}^{+}\) (discussed below). Thus, the mechanics of multipole moments provide sharp tools to probe the rich interplay between geometry and physics in the strong field, dynamical regime of general relativity that had not been explored before. Note that one cannot carry out such investigations using EHs. For, one cannot define shape and spin multipoles on EHs because in the strongly dynamical regime, EHs fail to be smooth sub-manifolds -they have creases, corners and caustics (Gadioux & Reall 2023). Even on portions that may be smooth at late times, EHs do not have a natural foliation that was crucially used to define multipoles on DHs. **Gravitational wave tomography:** As we noted above, there is no causal propagation between DHs \(\mathscr{H}\) (or EHs) and \(\mathscr{I}^{+}\). Nonetheless, there are unforeseen correlations between the horizon dynamics and the strain measured by gravitational wave detectors far away! 
As the term 'tomography' suggests, even though DHs \(\mathscr{H}\) are not 'visible' to asymptotic observers, thanks to Einstein's equations, one can create a movie of the time evolution of the shape and spin structure of \(\mathscr{H}\) using gravitational wave detectors in asymptotic regions. We will conclude with a discussion of the interplay between dynamics in the strong field and in the asymptotic regimes, suggested in Ashtekar & Krishnan (2004). Over the years, a number of simulations have provided concrete evidence for these fascinating correlations (for a summary of the early work, see Jaramillo 2012). Let us begin with the inspiral phase of the binary black hole coalescence when the two progenitors are relatively close, but before they merge. Then the waveform at \(\mathscr{I}^{+}\) is oscillatory with increasing amplitude and frequency, characterized by a 'chirp'. The energy flux at infinity is encoded in the so-called Bondi news -the shear of the null normal \(n^{a}\) to \(\mathscr{I}^{+}\). As noted above, the energy-flux across the DH of each progenitor is encoded in the shear of the null normal \(\ell^{a}\) to the cuts \(\mathbb{S}(v)\) of \(\mathscr{H}\). These fields have been computed for mergers in which the initial black holes are non-spinning (Prasad et al 2020). The shear of the DH of each progenitor also exhibits a chirp, and furthermore all three shears are correlated, once the initial phases and times are matched. There have also been a number of simulations that focus on the time evolution of geometry and the multipolar structure of the DH of the remnant (see, in particular, Gupta et al 2018, Prasad et al 2022, Chen et al 2022). Together, they strongly hint that there is a detailed relation between the waveform extracted in the asymptotic region at \(\mathscr{I}^{+}\), and the evolution of multipoles in the strong field region of \(\mathscr{H}\). An analytical understanding of this interplay is beginning to emerge using \(I_{\ell,m}\) and \(L_{\ell,m}\) at the horizon, and two infinite sets of observables at \(\mathscr{I}^{+}\) -the Bondi-Sachs supermomenta \(Q_{\ell,m}(u)\) and their 'magnetic' (or, Newman-Unti-Tamburino) analogs \(Q^{\star}_{\ell,m}(u)\)- each of which is also labelled by \(\ell,m\). One uses the fact that, soon after the merger, the common DH is well approximated by a precisely defined 'perturbed Kerr IH' (Ashtekar et al 2022b). Numerical simulations confirm that the perturbations are linear combinations of 'quasi-normal modes'. These modes are ingoing at the horizon and outgoing at infinity and have complex frequencies that are completely determined by the parameters \(M,a\) labelling the final Kerr IH. In this quasi-normal regime, using Einstein's equations one can show that there is a direct relation between the time-rate of change of \(I_{\ell,m}\) and \(L_{\ell,m}\) at \(\mathscr{H}\), and that of \(Q_{\ell,m}\) and \(Q^{\star}_{\ell,m}\) at \(\mathscr{I}^{+}\). The waveform at \(\mathscr{I}^{+}\) determines the rate of change of \(Q_{\ell,m}\) and \(Q^{\star}_{\ell,m}\), and the relation determines the horizon dynamics. Thus, one has a detailed tomography map in the quasi-normal regime. Finally, one can compare the exact time-dependent multipoles \(I_{\ell,m}(v)\) and \(L_{\ell,m}(v)\) provided by numerical simulations with their final asymptotic Kerr values that correspond to the \(v\to\infty\) limit. 
How far in the past can the difference be well approximated by the quasi-normal corrections to the Kerr multipoles, calculated _using the waveform_ by the procedure given above? For a given desired accuracy (determined by the detector sensitivity), the approximation could fail and nonlinear corrections may become important near the merger (Khera et al 2023). The multipole moment considerations provide a sharp criterion to determine the domain on which the quasi-normal approximation is viable on the entire exterior region of space-time. This tool is likely to play an important role in the ongoing debate (see, e.g., Giesler et al 2019, Baibhav et al 2023) on the domain of validity of the quasi-normal approximation, already within general relativity. ## 6 Discussion Let us conclude with general perspectives. On the whole, in the passage from event horizons in stationary space-times to isolated horizons, and then to dynamical horizons, one considers increasingly more realistic situations. The laws of horizon mechanics follow from the specific definitions of these horizons and Einstein's equations. The simplest among them, discussed in sections 2-4, have close similarity with the laws of thermodynamics and have led to deeper understanding, as well as puzzles, at the interface of general relativity, quantum physics and statistical mechanics. In the dynamical regime, detailed insights on horizon mechanics have come from the infinitely many 'balance laws' that relate the time derivatives of multipoles to the fluxes of geometric fields across \(\mathscr{H}\). While these balance laws do not have direct thermodynamical analogs, they provide powerful tools to probe how horizons evolve and to develop gravitational wave tomography. Use of quasi-local horizons is essential in these applications; there are fundamental conceptual difficulties in extending the constructions and results to EHs. Similarly, since the notions of IHs and DHs make no reference to infinity, these frameworks can be used also in spatially compact space-times. The notion of an event horizon, by contrast, does not naturally extend to these space-times. The results we summarized have been extended to allow the presence of a cosmological constant \(\Lambda\). Moreover, in the \(\Lambda<0\) case, a difficulty with the first law pointed out by Caldarelli et al 2000 (in 4 dimensions) and Gibbons et al 2005 (in higher dimensions) does not arise in the IH framework (Ashtekar et al 2006). For DHs, the only significant change is that the topology of cuts \(\mathbb{S}\) of DHs is restricted to be \(\mathbb{S}^{2}\) if \(\Lambda>0\) and is completely unrestricted if \(\Lambda<0\). For EHs and IHs, results have also been extended to higher dimensions. For DHs, the issue remains open. Similarly, while the first law has been generalized to globally stationary space-times in a wide class of diffeomorphism covariant theories (Iyer & Wald 1994), such a generalization has not been carried out for IHs in non-stationary space-times. Hamiltonian treatment of the first law with an IH as the inner boundary has been available for some time in the covariant phase space framework. Since DHs are space-like, one can use the standard 3+1 Hamiltonian framework. In fact Eq. (4.8) is precisely a linear combination of constraints. However, a fully satisfactory covariant phase space framework in which \(\mathscr{H}\) appears as an inner boundary is not yet available. 
Such a framework could be very useful to efforts aimed at extending tomography to the fully non-linear regime, beyond the quasi-normal mode approximation. Finally, due to space limitations this article focuses only on classical gravity. The quasi-local horizon framework also plays a key role in the analysis of the Hawking evaporation process beyond the external field approximation. There are strong indications from the analysis of tractable models that what forms in a gravitational collapse and evaporates quantum mechanically are quasi-local horizons. During the collapse, a dynamical horizon forms, which then turns into a time-like world tube of shrinking marginally trapped surfaces during quantum evaporation because the energy flux into the horizon is negative. In the semi-classical regime, while we have these quasi-local horizons, one cannot even ask if there is an event horizon because \(\mathscr{I}^{+}\) is incomplete. For a summary, including expectations beyond the semi-classical regime, see, e.g., Ashtekar 2020. ## Acknowledgments I have benefitted from discussions with a large number of colleagues. I would especially like to thank Badri Krishnan, Jerzy Lewandowski, Ivan Booth, Steve Fairhurst, Jose Luis Jaramillo and Neev Khera.
2304.07917
Non-unitary Trotter circuits for imaginary time evolution
We propose an imaginary time equivalent of the well-established Pauli gadget primitive for Trotter-decomposed real time evolution, using mid-circuit measurements on a single ancilla qubit. Imaginary time evolution (ITE) is widely used for obtaining the ground state of a system on classical hardware, computing thermal averages, and as a component of quantum algorithms that perform non-unitary evolution. Near-term implementations on quantum hardware rely on heuristics, compromising their accuracy. As a result, there is growing interest in the development of more natively quantum algorithms. Since it is not possible to implement a non-unitary gate deterministically, we resort to the implementation of probabilistic imaginary time evolution (PITE) algorithms, which rely on a unitary quantum circuit to simulate a block encoding of the ITE operator - that is, they rely on successful ancillary measurements to evolve the system non-unitarily. Compared with previous PITE proposals, the suggested block encoding in this paper results in shorter circuits and is simpler to implement, requiring only a slight modification of the Pauli gadget primitive. This scheme was tested on the transverse Ising model and the fermionic Hubbard model and is demonstrated to converge to the ground state of the system.
Chiara Leadbeater, Nathan Fitzpatrick, David Muñoz Ramo, Alex J. W. Thom
2023-04-16T23:37:34Z
http://arxiv.org/abs/2304.07917v3
# Non-unitary Trotter circuits for imaginary time evolution ###### Abstract We propose an imaginary time equivalent of the well-established Pauli gadget primitive for Trotter-decomposed real time evolution. Imaginary time evolution (ITE) is widely used for obtaining the ground state of a system on classical hardware. Near-term implementations on quantum hardware rely on heuristics, compromising their accuracy. As a result, there is a growing interest in the development of more natively quantum algorithms. Since it is not possible to implement a non-unitary gate deterministically, we resort to the implementation of probabilistic imaginary time evolution (PITE) algorithms, which rely on a unitary quantum circuit to simulate a block encoding of the ITE operator - that is, they rely on successful ancillary measurements to evolve the system non-unitarily. This scheme was tested on the transverse Ising model and the fermionic Hubbard model and is demonstrated to converge to the ground state of the system. ## I Introduction Imaginary time evolution (ITE) is a routine that is widely used on classical hardware to obtain the ground state (GS) of a system. In this procedure, an initial trial state is evolved through imaginary time, \(\tau=it\), by applying the ITE operator, \(e^{-\hat{H}\tau}\). This operator has the property of being a ground state projector in the large \(\tau\) limit; over time, the excited state contributions from the initial state \(\ket{\Psi(\tau=0)}\) decay until only the GS, \(\ket{\phi_{0}}\), remains \[\lim_{\tau\to\infty}e^{-(\hat{H}-E_{0})\tau}\ket{\Psi(\tau=0)}\propto\ket{\phi _{0}}. \tag{1}\] Although it is not expected that quantum computers will be able to make the task of ground state determination for generic Hamiltonians efficient [1], they may provide useful speedups in the near future for certain applications using quantum dynamics [2]. In particular, owing to the exponential scaling of the Hilbert space with system size, classical techniques for quantum simulation are generally designed to avoid two features in a computation: (1) explicitly storing the many-body wave function, and (2) propagating the wave function by matrix exponentiation. Quantum computers allow for efficient implementation of both of these features. For the former, there are quantum circuits which cannot be efficiently simulated using classical resources [3]. The quantum computer thus has access to a richer state space for the ground state search of certain systems. For the latter feature, there are quantum algorithms for efficiently simulating real time evolution. Each algorithm corresponds to a different decomposition of the real time evolution operator; for instance, the operator can be expressed as a truncated Taylor series via a linear combination of unitaries (LCU) [4; 5; 6; 7], or as a series of alternating signal rotations and signal processing rotations via quantum signal processing (QSP) [8]. Due to its simplicity and ease of implementation with the Pauli gadget primitive (refer to Appendix 1) [9], the most widely adopted method is the Trotter decomposition [10; 11; 12; 13], which evolves each Hamiltonian term for a small time slice. On the other hand, imaginary time evolution is much more challenging to implement, owing to the non-unitarity of the ITE operator. Various approaches have emerged as indirect implementations of the ITE operator. 
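Before turning to these approaches, a minimal NumPy sketch can make Equation (1) concrete. The two-qubit Hamiltonian and all parameter values below are arbitrary choices made here for illustration, not a system studied in this paper:

```python
# A minimal sketch of Eq. (1): the ITE operator acts as a ground state
# projector in the large-tau limit. The 2-qubit Hamiltonian below is an
# arbitrary illustrative choice.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))
energies, states = np.linalg.eigh(H)
E0, gs = energies[0], states[:, 0]

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

for tau in (0.0, 1.0, 5.0, 20.0):
    # e^{-(H - E0) tau} |psi>, renormalised; the E0 shift fixes the norm scale
    evolved = expm(-(H - E0 * np.eye(4)) * tau) @ psi
    evolved /= np.linalg.norm(evolved)
    print(f"tau = {tau:5.1f}  |<gs|psi(tau)>|^2 = {abs(np.vdot(gs, evolved))**2:.6f}")
```

Provided the trial state has non-zero overlap with the ground state, the overlap tends to 1 as \(\tau\) grows, with excited state contributions decaying as \(e^{-(E_{i}-E_{0})\tau}\).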
Previous techniques have mostly been designed for hardware in the current 'noisy intermediate-scale quantum' (NISQ) era, characterised by a limited coherence time and gate fidelity, and are thus hybrid quantum-classical. Whilst this partitioning saves quantum resources, it typically necessitates the use of heuristics, which can hinder the attainable accuracy of the final ground state. Indeed, the variational quantum eigensolver [14; 15], which is among the most promising and widely-used examples of variational algorithms, requires gate errors significantly lower than the error thresholds of fault-tolerant hardware in order to achieve chemical accuracy [16]. Within the class of hybrid ITE techniques, two main approaches have emerged: variational ITE (VITE) [17; 18; 19; 20; 21] and quantum ITE (QITE) [22; 23; 24; 25]. Refer to Section II.1 for more information regarding these methods. Unlike their near-term competitors, fault-tolerant algorithms do not rely on heuristics and offer rigorous performance guarantees. To this end, we consider a third approach: the probabilistic ITE (PITE). PITE algorithms are a type of block encoding - that is, the non-unitary operator is embedded inside a larger unitary matrix, which is accessed via post-selection of 'successful' mid-circuit measurements. Unlike VITE, PITE does not require an ansatz, and unlike QITE, it does not require the selection of Pauli strings according to the locality of the Hamiltonian. PITE evolves the system exclusively using quantum operations, without the need for classical updates to the state. The main drawback of PITE algorithms arises from the need to implement them as a series of iterative circuits [26], each of which must be applied successfully - that is, each imaginary time step is implemented using at least one successful mid-circuit measurement. Consequently, as with most block encoding methods, PITE algorithms are characterised by an exponentially decaying probability of success - though they are guaranteed to converge to the GS, convergence is limited by the number of measurement shots required. The practicality of implementing most block encoding methods, including PITE, will be tied to the development of amplitude amplification (AA) schemes to boost the overall success probability of the circuits [27; 28; 29]. The imaginary time evolution operator can be decomposed using similar strategies to the real time evolution operator; namely, QSVT [30; 31], LCU and Trotterisation. PITE algorithms use mid-circuit measurements, which naturally renormalise the quantum state throughout the evolution process; this represents an advantage of PITE methods over other approaches which must compute this normalisation, including hybrid methods such as QITE, as well as QSVT. In recent years, a number of PITE algorithms have been proposed [26; 32; 33; 34] and implemented on hardware [35; 36]. Amongst these, the LCU proposal from Kosugi et al [34] is the most comparable to the scheme that will be proposed in this paper: both methods are general schemes for constructing the necessary circuits directly given any input Hamiltonian, without the need for system-specific block encodings and without incurring an additional classical cost in computing an initial singular value decomposition of the ITE operator [32]. 
In the LCU proposal, the approximate implementation of the ITE operator is achieved by the repeated application of the forward and backward RTE operators, \(\cos\!\left(\hat{H}_{m}t\right)\propto e^{-i\hat{H}_{m}t}+e^{+i\hat{H}_{m}t}\), where \(\hat{H}_{m}\) is a local term in the Hamiltonian, requiring two controlled Pauli gadgets per term for each time step. In this paper, we propose an alternative block encoding for PITE, suited to a Trotter decomposition of the ITE operator. The algorithm presented can be considered to be the imaginary time counterpart of the well-established Trotterised RTE algorithm, which makes use of the Pauli gadget primitive. There are several advantages to using this approach. It will become apparent that the structure of the Pauli gadget naturally results in diagonal matrices, avoiding the need to incur an initial classical cost in computing the singular value decomposition (SVD) of the ITE operator [32]. The method we propose in this paper builds on the work of Zhang et al [33], which considered the real time evolution of a non-interacting non-Hermitian operator: considering only the imaginary component of this operator, their method reduces to a Trotterised PITE method for a non-interacting Hermitian operator. This work extends this to consider arbitrary Hermitian Hamiltonians expressed in a Pauli basis; it is then easy to account for non-Hermitian Hamiltonians by concatenating these imaginary time Pauli gadgets with the ubiquitous real time Pauli gadgets. Finally, by building on the Pauli gadget framework, we ensure that any of the optimisation passes previously designed for real time Pauli gadgets can be directly transferred to our circuits [37]. The structure of the paper is as follows. Background information is given in Section II, describing the implementation of non-unitary operators on quantum circuits. The proposed algorithm will then be presented in Section III, and the results obtained will be detailed in Section IV. Finally, a discussion of the results will be presented in Section V, and conclusions drawn in Section VI. ## II Background ### NISQ ITE Methods Heuristic ITE approaches approximate the action of the non-unitary ITE propagator with a unitary operator. This unitary operator is optimised by taking measurements and post-processing the results classically. Within this class of techniques, two main approaches have emerged: variational ITE (VITE) [17; 18; 19; 20; 21] and quantum ITE (QITE) [22; 23; 24; 25]. VITE represents the unitary operator with a parameterised quantum circuit, whose parameters are optimised classically according to a variational principle [18], which carries out a finite projection of the wavefunction on the tangent space of the exact evolution. The hardware requirements of the standard VITE procedure can be significantly reduced by assuming a Trotter product form for the ansatz and variationally optimising each Trotter term sequentially [21]. Further, optimising the circuits using classical tensor network methods can result in shallower and more accurate circuits [38]. Variational quantum algorithms (VQAs) such as these have substantial drawbacks [39], including the barren plateau (BP) problem, which could prevent convergence for larger problem sizes [40]. There are multiple driving factors; in particular, the more expressive the ansatz is, the more susceptible it is to BPs [40; 41]. A common approach to mitigate the BP problem is to reduce the expressibility of the ansatz. 
In the context of quantum chemistry problems, this corresponds to restricting the span of the ansatz to a section of the Hilbert space, for instance by exploiting the symmetry of the problem [41]. Consequently, VQAs are highly dependent on the choice of ansatz. On the other hand, QITE represents the unitary operator in a Pauli basis and solves a linear system of equations for the coefficients of the Pauli tensors; measurements taken from the quantum circuit are used to estimate the matrix elements, and the equations are solved on classical hardware. The derivation is mathematically equivalent to McLachlan's variational principle for VITE [34]. However, the domain of these Pauli tensors grows exponentially with the spreading of entanglement. As a result, the matrix dimension for the linear equations, and correspondingly the number of measurements, tends to grow rapidly as the problem size increases. Its scaling can be regularised if restrictive approximations regarding the locality of partial Hamiltonians are adopted [25]. ### Block encoding scheme for non-unitary operation Block encoding represents a general framework for the implementation of non-unitary operators: a non-unitary matrix \(A\) that operates on an \(n\)-qubit quantum state \(\ket{\psi}\) is embedded as a block inside a larger unitary matrix \(U_{A}\) that operates on \((n+l)\) qubits. In theory, any non-unitary matrix \(A\) can be block encoded with a single ancilla using a singular value decomposition scheme [42]. However, these circuits may be exponentially deep. Application of \(A\) is then contingent on successfully post-selecting the \(l\) ancillary qubits in the state \(\ket{0}^{\otimes l}\), which singles out the block \(\ket{0}^{\otimes l}\bra{0}^{\otimes l}\otimes\ket{\psi}\bra{\psi}\) of the unitary matrix \(U_{A}\). For the purpose of this work, we will iteratively use a single ancilla qubit, i.e. \(l=1\), block encoding. When \(l=1\), the block encoded matrix is given by \[U_{A}=\begin{pmatrix}\frac{1}{\alpha}A&*\\ *&*\end{pmatrix}, \tag{2}\] where the upper-left block acts on the subspace in which the ancilla is in the state \(\ket{0}\), and \(\frac{1}{\alpha}\in\mathbb{R}\) is a necessary scaling required to ensure that \(U_{A}\) is unitary; more precisely, the spectral norm of \(\frac{1}{\alpha}A\) satisfies \(\|\frac{1}{\alpha}A\|=\frac{1}{|\alpha|}\|A\|\leq 1\). The spectral norm \(\|\cdot\|\) of a matrix \(A\) is the maximum scale by which \(A\) can stretch a vector, defined as \[\|A\| =\sup_{\langle x|x\rangle=1}\sqrt{\bra{x}A^{\dagger}A\ket{x}} \tag{3}\] \[=\sigma_{\max}\left(A\right), \tag{4}\] where \(\sigma_{\max}\left(A\right)\) is the maximum singular value of \(A\). Entries given by an asterisk \(*\) indicate that there is some freedom in choosing their values - again, the only requirement is that \(U_{A}\) is unitary. The action of \(U_{A}\) is: \[\ket{\Psi}=U_{A}(\ket{0}\otimes\ket{\psi})=\ket{0}\otimes\tfrac{1}{\alpha}A \ket{\psi}+\ket{\bot}. \tag{5}\] The first and second terms represent the components of the state \(\ket{\Psi}\) that are parallel and perpendicular to \(\ket{0}\) in the auxiliary space, respectively. That is, \(\ket{\bot}\) must satisfy the following orthonormality constraints: \(\left(\bra{0}\otimes I^{\otimes n}\right)\ket{\bot}=0\) and \(\bra{\bot}\ket{\bot}=1-\frac{1}{\alpha^{2}}\bra{\psi}A^{\dagger}A\ket{\psi}\), but its system space component is determined by the action of the bottom left element in \(U_{A}\) (2) on \(\ket{\psi}\). To successfully apply \(A\), one must measure the ancilla qubit to be in the state \(\ket{0}\). 
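As a concrete illustration of Equations (2) and (5), the sketch below builds one possible single-ancilla block encoding of a Hermitian, non-unitary matrix and checks its post-selected action numerically, anticipating Equations (6) and (7) below. The completion \(U_{A}=\begin{pmatrix}B&\sqrt{I-B^{2}}\\ \sqrt{I-B^{2}}&-B\end{pmatrix}\) with \(B=\frac{1}{\alpha}A\) is an assumption made here for Hermitian \(A\); it is one valid choice, not the specific construction of Ref. [42]:

```python
# A minimal sketch of a single-ancilla (l = 1) block encoding, Eq. (2),
# for a Hermitian A rescaled so that ||A/alpha|| <= 1. The unitary
# completion used here is an illustrative choice, not the construction
# of the references: U_A = [[B, sqrt(I - B^2)], [sqrt(I - B^2), -B]].
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
dim = 4                                   # n = 2 system qubits

A = rng.normal(size=(dim, dim))
A = 0.5 * (A + A.T)                       # Hermitian, non-unitary example
alpha = np.linalg.norm(A, 2)              # sigma_max(A)
B = A / alpha

T = np.real(sqrtm(np.eye(dim) - B @ B))   # real since B is Hermitian, ||B|| <= 1
U_A = np.block([[B, T], [T, -B]])         # ancilla-|0> block carries A/alpha
assert np.allclose(U_A.T @ U_A, np.eye(2 * dim), atol=1e-6)

psi = rng.normal(size=dim)
psi /= np.linalg.norm(psi)
Psi = U_A @ np.concatenate([psi, np.zeros(dim)])   # U_A (|0> (x) |psi>)

branch = Psi[:dim]                        # component with the ancilla in |0>
p_success = np.linalg.norm(branch) ** 2   # success probability of the shot
post = branch / np.linalg.norm(branch)    # renormalised post-measurement state
print(p_success, np.allclose(post, A @ psi / np.linalg.norm(A @ psi)))
```

The printed post-selected state coincides with \(A\ket{\psi}\) up to normalisation, independently of \(\frac{1}{\alpha}\), while the success probability does depend on the scaling.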
According to the postulates of quantum mechanics, following a partial measurement on the ancillary register, the remaining state vector is renormalised, preserving its purity. Thus, the value of \(\frac{1}{\alpha}\) does not affect the successful post-measurement state. To see this, consider the projective measurement operator \(\hat{M}_{0}=\left(\ket{0}\bra{0}\otimes I^{\otimes n}\right)\), which only acts on the ancilla qubit, applied to the state \(\ket{\Psi}\). The successful ancillary measurement outcome is defined to be \[\ket{\Psi} \rightarrow\frac{\hat{M}_{0}\ket{\Psi}}{\sqrt{\bra{\Psi}\hat{M}_{0} ^{\dagger}\hat{M}_{0}\ket{\Psi}}}\] \[=\frac{\ket{0}\otimes\frac{1}{\alpha}A\ket{\psi}}{\sqrt{\frac{1}{ \alpha^{2}}\bra{\psi}A^{\dagger}A\ket{\psi}}} \tag{6}\] \[=\frac{\ket{0}\otimes A\ket{\psi}}{\sqrt{\bra{\psi}A^{\dagger}A \ket{\psi}}}.\] The value of \(\frac{1}{\alpha}\) does, however, affect the probability of successfully applying \(A\), \[p_{\text{success}}=\tfrac{1}{\alpha^{2}}\left\langle\psi\right|A^{\dagger}A\left| \psi\right\rangle. \tag{7}\] The dependence of the success probability on the initial state \(\left|\psi\right\rangle\) is a consequence of the non-unitarity of \(\frac{1}{\alpha}A\) - roughly speaking, the more non-unitary \(\frac{1}{\alpha}A\) is, the more information that is lost from \(\left|\psi\right\rangle\) on application of \(\frac{1}{\alpha}A\), and the smaller the success probability is. Of course, the non-unitarity of \(\frac{1}{\alpha}A\) is affected by the value of the scaling, \(\frac{1}{\alpha}\): the greater the scaling, the greater the success probability. Different block encodings \(U_{A}\) of \(A\) may produce different scalings; however, there is a maximum value this scaling can take, \(\left(\frac{1}{\alpha}\right)_{\text{max}}\). Above this value, the block encoding \(U_{A}\) is no longer unitary and cannot be implemented with a quantum circuit. This optimal block encoding - that is, the block encoding that applies \(A\) with the maximal success probability for any \(\left|\psi\right\rangle\) - satisfies \(\|\left(\frac{1}{\alpha}\right)_{\text{max}}A\|=1\), giving \[\left(\frac{1}{\alpha}\right)_{\text{max}}=\frac{1}{\|A\|}=\frac{1}{\sigma_{ \text{max}}\left(A\right)}. \tag{8}\] The intuition behind the block encoding approach can be seen by drawing an analogy with the physical process of system-environment interactions. In physical systems, non-Hermiticity is generated from entanglement with the environment; in Equation (5), we can identify the 'system' to be the \(n\)-qubit state \(\left|\psi\right\rangle\), whereas the 'environment' is represented by the ancillary qubit. First, \(U_{A}\) entangles the system with the environment. Next, the local measurement on the ancilla state, before checking the measurement result, produces a mixed state and corresponds to the process of taking the partial trace over the environmental degrees of freedom, resulting in a non-unitary evolution of the system. In particular, however, we are interested in the non-unitary evolution of the component of the system state vector that is entangled with the \(\left|0\right\rangle\) ancilla state, \(\left|//\right\rangle\). Initially, we begin in the normalised state \(\left|//\right\rangle=\left|0\right\rangle\otimes\left|\psi\right\rangle\). Application of \(U_{A}\) is a continuous real time evolution process, throughout which the magnitude of \(\left|//\right\rangle\) decreases. 
At the end of this process, we have \(\left|//\right\rangle=\left|0\right\rangle\otimes\frac{1}{\alpha}A\left|\psi\right\rangle\) and the magnitude of \(\left|//\right\rangle\) has decreased by a factor of \(\sqrt{p_{\text{success}}}\) (Equations (5) and (7)). Finally, after looking at the measurement result, the system collapses into the corresponding pure state. This is the second process: discontinuous renormalisation of the state vector (Equation (6)). ### Universal gate set for non-unitary operation with a single ancillary qubit The cnot gate and all single qubit unitary gates constitute a universal set for the unitary quantum circuit. Terashima et al. [43] proved that, considering only block encodings with one ancillary qubit, \(l=1\), a possible universal set for non-unitary circuits is given by the cnot gate, all single qubit unitary gates, and a specific single qubit non-unitary gate, \(\mathcal{N}_{1}(a)\), which relies on a single qubit projective measurement. \(\mathcal{N}_{1}(a)\) has the matrix representation \[\mathcal{N}_{1}(a)=\begin{pmatrix}1&0\\ 0&a\end{pmatrix}, \tag{9}\] for some \(a\in[0,1]\). Its circuit construction is given in Figure 1; it is composed of a controlled single qubit unitary gate \(\mathcal{U}_{1}(a)\), whose matrix representation is \[\mathcal{U}_{1}(a)=\begin{pmatrix}a&*\\ *&*\end{pmatrix}, \tag{10}\] along with a single ancillary measurement. Again, entries given by an asterisk \(*\) indicate that there is some freedom in choosing their values; specifically, a single qubit unitary is described by three degrees of freedom. Note that \(|a|\leq 1\) is a necessary requirement for \(\mathcal{U}_{1}\) to be unitary. Written explicitly using little endian ordering - that is, in the \(\left|\phi_{\text{anc}}\right\rangle\otimes\left|\psi\right\rangle\) basis, where \(\left|\phi_{\text{anc}}\right\rangle\) denotes the ancillary qubit state - the unitary corresponding to the circuit in Figure 1, \(U_{\mathcal{N}_{1}}(a)\), is a block encoding of \(\mathcal{N}_{1}\): \[U_{\mathcal{N}_{1}}(a)=\begin{pmatrix}1&0&0&0\\ 0&a&0&*\\ 0&0&1&0\\ 0&*&0&*\end{pmatrix}=\begin{pmatrix}\mathcal{N}_{1}(a)&*\\ *&*\end{pmatrix}. \tag{11}\] Upon post-selecting the \(0\) measurement on the ancilla qubit one obtains the \(\mathcal{N}_{1}(a)\) matrix formed by the upper-left block elements of Equation (11): \[\big{(}|0\rangle\langle 0|\otimes I\big{)}U_{\mathcal{N}_{1}}(a)\big{(}|0 \rangle\langle 0|\otimes I\big{)}=\mathcal{N}_{1}(a). \tag{12}\] Figure 1: The non-unitary gate \(\mathcal{N}_{1}(a)\) suppresses the magnitude of \(|1\rangle\) relative to \(|0\rangle\) upon successful measurement of \(|0\rangle\) on the ancilla. Using this decomposition, successful implementation of a non-unitary circuit corresponds to a measurement shot in which all ancillary measurements from the constituent \(\mathcal{N}_{1}(a)\) gates yield \(\ket{0}\). ## III Trotterised PITE circuit We consider building the ITE operator, \(\frac{1}{\alpha}e^{-\hat{H}\tau}\), in terms of \(\mathcal{N}_{1}(a)\) single qubit non-unitary gates (9). Each constituent \(\mathcal{N}_{1}(a)\) gate is implemented as a block encoding (11) according to Figure 1, and the overall block encoding we achieve for the ITE operator will determine its scaling \(\frac{1}{\alpha}\) (Section II.2). This is a natural approach to take when the ITE operator has been Trotterised. Given a Hamiltonian expressed in a qubit (Pauli) basis, the Trotter decomposition approximately factors the ITE operator into a product of exponentiated Pauli strings. 
Since each of these terms can be easily diagonalised with the use of single qubit rotations (refer to Appendix A), they can be readily represented in terms of the diagonal \(\mathcal{N}_{1}(a)\) matrices. The proof presented by Terashima et al [43] could be regarded as a general synthesis strategy for non-unitary operators; however, no attempt has been made to optimise the number of gates for the specific non-unitary operation of ITE. Moreover, the proof relies on an initial singular value decomposition (SVD) of the non-unitary matrix in question, incurring a classical cost that scales exponentially with the number of qubits \(n\), \(\mathcal{O}\left(2^{3n}\right)\). Instead, we consider a decomposition based on the Pauli gadget framework for Trotterised real time evolution (RTE) [9]. This allows us to transfer any optimisation passes previously designed for real time propagation with Pauli gadgets directly to our framework [37]. ### Trotter decomposition Consider an \(n\) qubit Hamiltonian that can be expressed as a sum of \(M\) local interactions, \(\hat{H}=\sum_{m=1}^{M}\hat{H}_{m}\), where \(M\sim\mathcal{O}\left(\text{poly}(n)\right)\). The first-order Trotter formula, applied to an imaginary time evolution \(\tau=it\), states that [10] \[e^{-\hat{H}\Delta\tau}=\prod_{m=1}^{M}e^{-\hat{H}_{m}\Delta\tau}+\mathcal{O}(( L\Delta\tau)^{2}), \tag{13}\] where \(L\) is the system size. For a longer simulation time, the evolution is divided into \(r\in\mathbb{N}\) Trotter steps \[e^{-\hat{H}\tau}=\left(\prod_{m=1}^{M}e^{-\hat{H}_{m}\Delta\tau}\right)^{r}+ \mathcal{O}\left((L\Delta\tau)^{2}r\right), \tag{14}\] where \(\frac{\tau}{r}=\Delta\tau\) and \(r\) is also referred to as the 'Trotter number'. From Equation (14), we can determine the gate count \(N\) required for a given evolution time \(\tau\) and target precision \(\epsilon\): \[\epsilon=\mathcal{O}(L^{2}\tau\Delta\tau),\,N=Lr=\mathcal{O}\left(\frac{L^{3 }\tau^{2}}{\epsilon}\right). \tag{15}\] The Trotter decomposition performs significantly better when many pairs of operator summands commute [10; 44; 45]; this can be used to give stricter upper bounds on the Trotter error [46]. Higher order product formulas are found recursively using Equation (14); the higher the order, the more accurate the Trotterisation. In this paper, we only implement the first-order Trotter formula in our experimental simulations and theoretical examination; however, our techniques also hold for the higher order decompositions. Given a fermionic Hamiltonian in second quantised form, the Hamiltonian must first be converted to a qubit representation. The Jordan-Wigner (JW) transformation [47] maps electronic configurations onto computational basis states; for instance, the state \(\ket{01}\) indicates a system with two spin orbitals, in which the first (second) spin orbital is unoccupied (occupied). Fermionic creation and annihilation operators are mapped onto qubit operators (namely, tensor products of Pauli operators, or 'Pauli strings'). Thus, for the remainder of this paper, we will consider Hamiltonians expressed in a Pauli basis, \[\hat{H}=\sum_{k=1}^{M}c_{k}\hat{\sigma}_{\mathbf{m}_{k}}, \tag{16}\] where \(\hat{\sigma}_{\mathbf{m}}\) is a Pauli string of length \(n\), \[\hat{\sigma}_{\mathbf{m}}=\hat{\sigma}_{m_{n-1}}\otimes...\otimes\hat{\sigma}_{m _{0}},\quad\hat{\sigma}_{m}\in\{I,X,Y,Z\}. \tag{17}\]
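The splitting in Equations (13) and (14) is easy to check numerically. The sketch below uses an arbitrary two-qubit Pauli-sum example of our own, not one of the models studied later, and compares the exact ITE operator with its first-order Trotter product for increasing Trotter number \(r\):

```python
# A minimal sketch of the first-order Trotter splitting of the ITE
# operator, Eqs. (13)-(14), for an arbitrary 2-qubit example Hamiltonian.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

terms = [0.8 * np.kron(Z, Z), 0.3 * np.kron(X, I2), 0.3 * np.kron(I2, X)]
H = sum(terms)
tau = 1.0
exact = expm(-H * tau)

for r in (1, 2, 4, 8, 16):
    dtau = tau / r
    step = np.eye(4, dtype=complex)
    for Hm in terms:                            # product of exp(-H_m dtau), Eq. (13)
        step = expm(-Hm * dtau) @ step
    trotter = np.linalg.matrix_power(step, r)   # r Trotter steps, Eq. (14)
    print(f"r = {r:2d}  spectral-norm error = {np.linalg.norm(exact - trotter, 2):.2e}")
```

The spectral-norm error shrinks roughly in proportion to \(\Delta\tau=\tau/r\), consistent with the first-order scaling in Equation (15).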
### Pauli gadgets for Trotterised PITE We now approach the task of block encoding the non-unitary operator, \(\frac{1}{\alpha}e^{-\hat{H}\tau}\) for \(\hat{H}=\hat{H}^{\dagger}\), via a Trotter decomposition. As discussed in Section II.2, \(\frac{1}{\alpha}\) is a scaling factor that depends on the block encoding used. The greater this scaling is, the greater the success probability of the block encoding; its maximum allowable value, which still ensures that the block encoding is unitary and thus implementable on a quantum circuit, is discussed in the following section (Section III.2.1). We note that a circuit structure implementation of the operator \(e^{+|\gamma|\Delta\tau(\hat{Z}-\hat{I})}\) for any \(\gamma\in\mathbb{R}\), which can be seen as the exponentiation of a simplified Hamiltonian, has recently been proposed [33]; our work can be seen as an extension of this to include any Hamiltonian expressed in a Pauli basis. In order to do this, we make use of the well-established Pauli gadget primitive, ubiquitously used in Trotterised RTE simulations. An overview of the Pauli gadget and its use in RTE is provided in Appendix 1. #### III.2.1 Maximum scaling for the block encoded ITE operator, \(\left(\frac{1}{\alpha}\right)_{\text{max}}e^{-\hat{H}\tau}\) First, we consider the exact (non-Trotterised) ITE operator. According to Equation (8), the maximum possible scaling for the block-encoded exact ITE operator is given by \[\left(\frac{1}{\alpha}\right)_{\text{PITE, max}}=\frac{1}{\|e^{-H\tau}\|}= \frac{1}{\sigma_{\text{max}}\left(e^{-H\tau}\right)}=e^{E_{0}\tau}, \tag{18}\] where \(E_{0}\) is the (unknown) ground state energy. Once the ground state is reached, the block encoded ITE operator cannot project out any more states: continued application does not change the state, but if the scaling is less than the maximal value, the success probability will be less than \(1\), \(p_{\text{success}}=\frac{1}{\alpha^{2}}e^{-2E_{0}\tau}\). On the other hand, if the scaling is maximal, the ITE operator will be applied deterministically, \(p_{\text{success}}=1\). In other words, this maximal scaling ensures that the ITE operator becomes the identity operator once the ground state is reached. Note that this scaling is also present in our original definition of the ITE procedure for obtaining the ground state of a system (1). When using ITE methods which do not iteratively renormalise the wavefunction, the prefactor in this definition ensures that the amplitude of the wavefunction does not decay to zero as the ground state is approached. Typically this will require re-scaling of the operator; consider, for instance, classical stochastic ITE methods (quantum Monte Carlo), in which the wavefunction is propagated in imaginary time by sampling the action of the Hamiltonian in some discrete basis populated by 'walkers'. Since the total number of walkers at any one point in the evolution is related to the normalisation of the wavefunction, the ground state energy is estimated \(\tilde{E}_{0}\sim E_{0}\) throughout the run-time of the algorithm and used to shift the Hamiltonian, \(e^{-\left(\hat{H}-\tilde{E}_{0}\right)\tau}\), as a means of walker population control [48; 49; 50; 51]. 
On the other hand, PITE algorithms naturally renormalise the intermediate state \(|\psi_{i}\rangle\) at each time step \(i\) through the use of partial measurements: \[|\Psi\rangle=\prod_{i=1}^{r}\frac{e^{-\hat{H}\Delta\tau}}{\sqrt{\langle\psi_{ i}|(e^{-\hat{H}\Delta\tau})^{\dagger}e^{-\hat{H}\Delta\tau}|\psi_{i}\rangle}}| \psi_{0}\rangle, \tag{19}\] so the scaling does not affect the normalisation of the quantum state, due to the cancellation shown in Equation (6). The scaling \(\frac{1}{\alpha}\) affects only the success probability of these measurements (Section II.2). Thus, the factor of \(e^{E_{0}\tau}\) in Equation (1) is not required by PITE to prevent the decay of the ground state: \(\frac{1}{\alpha}<e^{E_{0}\tau}\) is allowed. Instead, this factor manifests as the optimal scaling \(\left(\frac{1}{\alpha}\right)_{\text{max}}\) that minimises the decay of the success probability. We may want to take inspiration from quantum Monte Carlo methods and vary the scaling of the block encoding. However, since this is a non-unitary operation, it is more difficult to implement it using quantum processes. It is likely that PITE algorithms will need to be run in conjunction with an AA procedure to boost the success probability of the circuits. The PITE block encoding proposed by Liu et al [32] was defined in terms of an extra parameter; they were able to vary this parameter throughout the simulation, alongside Grover's algorithm [27; 28], to provide more flexible optimisation of the number of Grover iterates required. In a similar manner, Nishi et al [52] were able to make \(\frac{1}{\alpha}\) a tunable parameter within the LCU-based PITE framework [34], with reduced computational cost as compared with the fixed point search [53; 54] or oblivious AA routine [6]. As will become apparent, the value of \(\frac{1}{\alpha}\) for the block encoding we propose in Section III.2.3 is initially fixed by the circuit implementation we use. However, we note that our framework is also amenable to varying the value of the scaling throughout the simulation, although as discussed, this is not a trivial task and does not lie within the scope of this work. Whilst it is easy to determine the maximum scaling for the exact ITE operator, it is not obvious how to relate this to the Trotterised form, which would depend on the Trotter error [46]. Weak upper and lower bounds for the maximum scaling of the Trotter decomposed operator, defined by Equations (14) and (16), can be obtained (Appendix 2): \[e^{-\lambda r\Delta\tau}\leq\left(\frac{1}{\alpha}\right)_{\text{TrottPITE, max}}\leq e^{\lambda r\Delta\tau}, \tag{20}\] where \(\lambda=\sum_{k=1}^{M}|c_{k}|\) is the 1-norm of the Hamiltonian. Using the following relation between the spectral norm and the 1-norm of the Hamiltonian expressed in a Pauli basis: \[\|\hat{H}\|\leq\sum_{k=1}^{M}|c_{k}|\cdot\underbrace{\|\hat{\sigma}_{\mathbf{m}_{k}} \|}_{=1}=\lambda, \tag{21}\] we find that the maximum scaling for the exact ITE operator also lies in this range, \[e^{-\lambda r\Delta\tau}\leq e^{E_{0}\tau}=\left(\frac{1}{\alpha}\right)_{ \text{PITE, max}}\leq e^{\lambda r\Delta\tau}. \tag{22}\] #### III.2.2 Block encoding for \(e^{-\gamma\Delta\tau\hat{Z}}\) with a single ancillary qubit As a preliminary to block encoding the exponentiated Hamiltonian, we first consider the task of block encoding the simple operator \(\frac{1}{\alpha}e^{-\gamma\Delta\tau\hat{Z}}\), where \(\gamma\in\mathbb{R}\). 
According to Equation (8), this operator is implemented with the maximum possible success probability when the rescaling is given by: \[\left(\frac{1}{\alpha}\right)_{\text{max}}=\frac{1}{\|e^{-\gamma\Delta\tau Z}\|}=e^{-|\gamma|\Delta\tau}. \tag{23}\] The matrix representation of the optimally block encoded operator \(\left(\frac{1}{\alpha}\right)_{\text{max}}e^{-\gamma\Delta\tau\hat{Z}}\) can be easily expressed in terms of the \(\mathcal{N}_{1}(a)\) matrix structure (Section II.3). Consider first the case \(\gamma<0\): \[e^{-|\gamma|\Delta\tau}e^{-\gamma\Delta\tau Z}=\begin{pmatrix}1&0\\ 0&e^{-2|\gamma|\Delta\tau}\end{pmatrix}=\mathcal{N}_{1}(e^{-2|\gamma|\Delta\tau}). \tag{24}\] Similarly, for the case \(\gamma>0\), we identify: \[e^{-|\gamma|\Delta\tau}e^{-\gamma\Delta\tau Z}=\begin{pmatrix}e^{-2|\gamma|\Delta\tau}&0\\ 0&1\end{pmatrix}=X\mathcal{N}_{1}(e^{-2|\gamma|\Delta\tau})X. \tag{25}\] In order to implement Equations (24) and (25) using the circuit structure defined in Figure 1, we must identify a single qubit unitary gate \(\mathcal{U}_{1}(e^{-2|\gamma|\Delta\tau})\). Setting \(\phi=2\arccos\left(e^{-2|\gamma|\Delta\tau}\right)\), we find that we can easily implement \(\mathcal{U}_{1}(e^{-2|\gamma|\Delta\tau})\) as a rotation, \(R_{x}(\phi)\). Overall, the implementation of \(\left(\frac{1}{\alpha}\right)_{\text{max}}\hat{A}=e^{-|\gamma|\Delta\tau}e^{-\gamma\Delta\tau\hat{Z}}\) is obtained with the circuits shown in Figures 2 and 3, provided that the ancillary state is measured to be in the \(|0\rangle\) state.

#### iii.2.3 Block encoding for Trotterised \(e^{-\hat{H}\tau}\)

Since the \(|0\rangle\) post-selected two-qubit gate becomes a diagonal non-unitary single qubit gate, we can use the same machinery as the \(N\)-qubit parity gate, otherwise known as the phase gadget (Figure 9 in Appendix 1). Combining the phase gadget structure with the non-unitary single qubit gate (Figures 2 and 3) gives the overall block encoding for the operator \(e^{-|\gamma|\Delta\tau}e^{-\gamma\Delta\tau\hat{Z}^{\otimes n}}\) (Figure 4). An example circuit for the ITE of the Pauli string \(\hat{X}\hat{Y}\hat{Z}\) is given in Figure 5. The overall circuit is then formed by concatenating circuits corresponding to each of the exponentiated Pauli strings, with their corresponding basis transformation gates (10), and for each of the Trotter steps, according to the Trotter decomposition defined in Equations (14) and (16). Since the ITE Pauli gadget for each Pauli string contains one \(\mathcal{N}_{1}(e^{-|c_{k}|\Delta\tau\hat{Z}})\) block encoding circuit, which has a scaling of \(e^{-|c_{k}|\Delta\tau}\) (Equation (23)), the scaling \(\left(\frac{1}{\alpha}\right)_{\text{PITE }\hat{\sigma}_{\mathbf{m}_{k}}}\) for each Pauli gadget is also \(e^{-|c_{k}|\Delta\tau}\) and is therefore maximised (Section III.2.2): \[\left(\frac{1}{\alpha}\right)_{\text{PITE }\hat{\sigma}_{\mathbf{m}_{k}},\text{ max}}=\frac{1}{\|e^{-c_{k}\Delta\tau\sigma_{\mathbf{m}_{k}}}\|}=e^{-|c_{k}|\Delta\tau}. \tag{26}\]
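The construction can be checked at the matrix level. The sketch below is our own emulation; the exact circuit layout of Figure 4 is not reproduced. It conjugates the post-selected single-qubit block of Figures 2 and 3, inserted directly as its \(2\times 2\) matrix, by a CX parity ladder for the string \(\hat{Z}\otimes\hat{Z}\), and verifies that the result is \(e^{-|c|\Delta\tau}e^{-c\Delta\tau\hat{Z}\otimes\hat{Z}}\). Strings containing \(\hat{X}\) or \(\hat{Y}\) factors would additionally be conjugated by the basis transformation gates of Equation (10).

```python
import numpy as np
from scipy.linalg import expm

# Emulate the ITE Pauli gadget for the string Z (x) Z: a CX parity ladder,
# the post-selected non-unitary single-qubit block of Figures 2 and 3
# (here inserted directly as its 2x2 matrix), and the uncomputing ladder.
# The coefficient and time step are illustrative.
c, dtau = 0.7, 0.1
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

# Post-selected block: e^{-|c| dtau} * e^{-c dtau Z}, a diagonal non-unitary.
K = np.exp(-abs(c) * dtau) * expm(-c * dtau * Z)

# CX with qubit 0 as control, qubit 1 as target (parity onto qubit 1).
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]], dtype=float)

gadget = CX @ np.kron(I2, K) @ CX    # ladder, block on parity qubit, unladder
target = np.exp(-abs(c) * dtau) * expm(-c * dtau * np.kron(Z, Z))
assert np.allclose(gadget, target)   # scaling e^{-|c| dtau}, as in Equation (26)
```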
From Equation (26), the overall scaling for the Trotterised PITE block encoding is given by \[\left(\tfrac{1}{\alpha}\right)_{\text{TrotPITE}} =\exp\!\left\{-r\Delta\tau\!\sum_{k=1\atop\hat{\sigma}_{\mathbf{m}_{k}}\neq\hat{I}^{\otimes n}}^{M}\!\left|c_{k}\right|\right\} \tag{27}\] \[\geq\exp\!\left\{-r\Delta\tau\lambda\right\}\!,\] where the sum in the first line of Equation (27) is carried out over all coefficients of the constituent Pauli strings in \(\hat{H}\), excluding any Pauli tensors of identity operators \(\hat{I}^{\otimes n}\), and \(\lambda=\sum_{k=1}^{M}\left|c_{k}\right|\) is the 1-norm of the Hamiltonian. Equality is achieved when the Hamiltonian does not include the term \(\hat{I}^{\otimes n}\). Although each individual Trotter term has been optimally block encoded, the scaling for the overall Trotter product will depend on the order of the Trotter terms and may not be optimal (Equation (20)).

Successful application of the Trotterised ITE operator requires all mid-circuit ancillary measurements to yield \(\left|0\right\rangle\) - if a measurement is unsuccessful, the state is projected into the wrong subspace. Given a Trotter decomposition defined according to Equations (14) and (16), the success probability after \(r\) Trotter steps is given by \[p_{\text{success}} =\tfrac{1}{\alpha^{2}}\bra{\psi}A^{\dagger}A\ket{\psi}\] \[=\exp\!\left\{-2r\Delta\tau\!\sum_{k=1\atop\hat{\sigma}_{\mathbf{m}_{k}}\neq\hat{I}^{\otimes n}}^{M}\!\left|c_{k}\right|\right\}\bra{\psi}\left(\left(\prod_{k=1}^{M}e^{-\hat{\sigma}_{\mathbf{m}_{k}}c_{k}\Delta\tau}\right)^{\dagger}\right)^{r}\!\left(\prod_{k=1}^{M}e^{-\hat{\sigma}_{\mathbf{m}_{k}}c_{k}\Delta\tau}\right)^{r}\!\left|\psi\right\rangle \tag{28}\] \[\geq\exp\!\left\{-2r\Delta\tau\lambda\right\}\bra{\psi}\left(\left(\prod_{k=1}^{M}e^{-\hat{\sigma}_{\mathbf{m}_{k}}c_{k}\Delta\tau}\right)^{\dagger}\right)^{r}\!\left(\prod_{k=1}^{M}e^{-\hat{\sigma}_{\mathbf{m}_{k}}c_{k}\Delta\tau}\right)^{r}\!\left|\psi\right\rangle\] (29) \[\geq\exp\!\left\{-2r\Delta\tau\lambda\right\}\bra{\psi}\left(\prod_{k=1}^{M}e^{-|c_{k}|\Delta\tau}\right)^{r}\!\left(\prod_{k=1}^{M}e^{-|c_{k}|\Delta\tau}\right)^{r}\!\left|\psi\right\rangle\] (30) \[=\exp\!\left\{-4r\Delta\tau\lambda\right\}\!.\] As expected, the success probability decays exponentially with the number of time steps taken and depends heavily on \(\left|\psi\right\rangle\). The lowest success probability occurs for the state \(\left|\psi\right\rangle_{\text{worst}}\) which undergoes pure amplitude damping under the action of every exponentiated operator in the Trotter decomposition - that is, \(e^{-\hat{\sigma}_{\mathbf{m}_{k}}c_{k}\Delta\tau}\left|\psi\right\rangle_{\text{worst}}=e^{-|c_{k}|\Delta\tau}\left|\psi\right\rangle_{\text{worst}}\) for all \(k\), giving rise to the final inequality in Equation (30). This would require \(\left|\psi\right\rangle\) to be a simultaneous eigenstate of all the Pauli strings comprising \(\hat{H}\), which is only possible when all the Pauli strings comprising \(\hat{H}\) commute; however, consideration of this scenario does provide us with a lower bound for the general case. It is evident that the success probability for this algorithm can be increased by reducing the 1-norm of the Hamiltonian (30); this task is also of central importance to the reduction of gate complexity for several simulation algorithms, including simulation based on an LCU decomposition [5].
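As a small worked illustration of these bounds (the Hamiltonian coefficients below are placeholders, not taken from this paper), the scaling of Equation (27) and the worst-case success probability of Equation (30) can be computed directly from the Pauli coefficients:

```python
import numpy as np

# Sketch: the 1-norm lambda of a Pauli-basis Hamiltonian, the Trotterised
# PITE scaling of Equation (27), and the worst-case success bound of
# Equation (30). Coefficients are illustrative placeholders.
coeffs = {"ZZII": -0.5, "IZZI": -0.5, "XIII": -0.1, "IIII": 0.3}
lam = sum(abs(c) for c in coeffs.values())               # 1-norm of H
lam_noid = sum(abs(c) for p, c in coeffs.items() if set(p) != {"I"})

r, dtau = 100, 0.1
scaling = np.exp(-r * dtau * lam_noid)                   # (1/alpha)_TrotPITE
p_lower = np.exp(-4 * r * dtau * lam)                    # worst-case p_success
print(scaling, p_lower)
```

Reducing \(\lambda\) directly raises this lower bound, which motivates the 1-norm reduction techniques discussed next.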
Loaiza et al [55] have carried out significant work towards reducing the 1-norm of a molecular Hamiltonian; for instance, they describe several approaches for changing the Pauli basis decomposition of the Hamiltonian (16), including grouping together anti-commuting Pauli strings. When applied to the Trotterised PITE procedure, we must consider that whilst using these methods to reduce the 1-norm of the Hamiltonian would result in a higher success probability, it may also increase the Trotter error, resulting in a higher gate complexity. An alternative approach for reducing the 1-norm is to apply a symmetry shift operator to the Hamiltonian, \(e^{-\hat{H}\tau}=e^{-\hat{S}\tau}e^{-(\hat{H}-\hat{S})\tau}\), such that the 1-norm of the operator \(\hat{H}-\hat{S}\) is less than that of \(\hat{H}\). Provided the state vector satisfies symmetry constraints, for real time evolution \(t=-i\tau\), the unitary operator \(e^{-i\hat{S}t}\) applies a phase shift to the state vector and can be ignored. However, this cannot be exploited for imaginary time evolution: shifting the Hamiltonian leaves behind a non-unitary operator, \(e^{-\hat{S}\tau}\), which, as previously discussed in Section III.2.1, is non-trivial to implement as a quantum circuit. In practice, currently one must run a number of circuits to the end of the simulation, and the shots for which all ancillary measurements yielded \(\left|0\right\rangle\) - that is, the successful shots - are post-selected. Since the probability of successful application decreases exponentially with the number of mid-circuit measurements, the number of successful shots from which to estimate the expectation value also decreases exponentially; a sufficiently high number of shots must be selected to ensure that the final energy is determined with high accuracy. In future applications, the implementation of a quit-if-fail functionality would significantly improve the overall run-time of the algorithm. The gate complexity and the total number of shots required to obtain an accurate estimate of the ground state energy at the end of the ITE procedure depend on the total evolution time \(\tau\) for convergence. Consider expressing an arbitrary state \(\left|\psi(0)\right\rangle\) in the eigenbasis of \(\hat{H}\), \(\left|\psi(0)\right\rangle=\sum_{i=0}^{2^{n}-1}a_{i}\left|\phi_{i}\right\rangle\), where \(\hat{H}\left|\phi_{i}\right\rangle=E_{i}\left|\phi_{i}\right\rangle\) and \(E_{0}<E_{1}<...<E_{2^{n}-1}\)[56]. The unnormalised state after a time \(\tau\) is \[\left|\psi(\tau)\right\rangle=\sum_{i=0}^{2^{n}-1}a_{i}e^{-E_{i}\tau}\left| \phi_{i}\right\rangle, \tag{31}\] and its normalisation is given by \[\left\|\left|\psi(\tau)\right\rangle\right\|^{2} =\sum_{i=0}^{2^{n}-1}|a_{i}|^{2}e^{-2E_{i}\tau}\] \[=|a_{0}|^{2}e^{-2E_{0}\tau}\left(1+\sum_{i>0}^{2^{n}-1}\left( \frac{\left|a_{i}\right|}{\left|a_{0}\right|}\right)^{2}e^{-2(E_{i}-E_{0}) \tau}\right). \tag{32}\] To specify how long we need to evolve the state, let \(\tau\) be the time required to achieve an accuracy of \(\epsilon\ll 1\) in the squared overlap with the ground state, \[\frac{\left|\left\langle\psi(\tau)\right|\phi_{0}\right\rangle|^{2}}{\left\| \left|\psi(\tau)\right\rangle\right\|^{2}}\geq 1-\epsilon. \tag{33}\] In the large \(\tau\) limit, only the ground state and first excited state contributions are left: \[\frac{|a_{0}|^{2}e^{-2E_{0}\tau}}{|a_{0}|^{2}e^{-2E_{0}\tau}\left(1+\left( \frac{\left|a_{1}\right|}{\left|a_{0}\right|}\right)^{2}e^{-2(E_{1}-E_{0}) \tau}\right)}\geq 1-\epsilon. 
\tag{34}\] Rearranging gives \[1+\left(\frac{\left|a_{1}\right|}{\left|a_{0}\right|}\right)^{2}e^{-2(E_{1}-E_{0})\tau}\lesssim\frac{1}{1-\epsilon}. \tag{35}\] The number of times, \(r_{\epsilon}=\frac{\tau}{\Delta\tau}\), that a time step of \(\Delta\tau\) must be applied to achieve an error, \(\epsilon\), is thus given by \[r_{\epsilon}\gtrsim\frac{1}{2(E_{1}-E_{0})\Delta\tau}\ln\bigg{(}\frac{|a_{1}|^{2}}{\epsilon|a_{0}|^{2}}\bigg{)}. \tag{36}\] As expected, the more easily distinguishable the ground state and first excited state energies are, the faster convergence to the ground state is achieved. Further, the better the initial guess, i.e. the greater the initial overlap with the ground state, the faster convergence is achieved. Combining Equations (30) and (36), assuming \(\Delta\tau\) is small enough to be able to neglect the Trotter error, would in principle give a lower bound on the total number of shots required to obtain an error of \(\epsilon\) in the ground state estimation. The circuit depth required for convergence to the GS is then determined by the number of Pauli gadgets, \(rM\), where \(M\) is the number of local terms comprising the Hamiltonian (16). Each of these Pauli gadgets requires \(2n\) cnot gates; thus each Pauli gadget produces a gate depth that scales at worst linearly in \(n\), and at best logarithmically in \(n\) [37]. The whole procedure can be implemented using one ancillary qubit by resetting it to \(\left|0\right\rangle\) following every mid-circuit measurement.

## IV Results

Pytket [57] is used for the construction and compilation of the circuits, and all quantum simulations are performed with the Qiskit Aer simulator [58]. For each of the Hamiltonians, the constituent Pauli strings are partitioned into mutually commuting sets to reduce the Trotter error. Energies and their associated errors are estimated in the following manner. Individual Pauli strings are sampled and the mean energy is constructed as \[\langle\hat{H}\rangle=\sum_{k=1}^{M}c_{k}\langle\hat{\sigma}_{\mathbf{m}_{k}}\rangle. \tag{37}\] The error \(\Delta E\) in the mean energy arising from using a finite number of shots is determined using a similar method to Kandala et al [59]: \[\Delta E=\sqrt{\sum_{k=1}^{M}\frac{c_{k}^{2}\langle\Delta\hat{\sigma}_{\mathbf{m}_{k}}^{2}\rangle}{N_{k}}}, \tag{38}\] where \(\langle\Delta\hat{\sigma}_{\mathbf{m}_{k}}^{2}\rangle\) is the variance on Pauli string \(\hat{\sigma}_{\mathbf{m}_{k}}\) and \(N_{k}\) is the number of successful shots used in the measurement of \(\langle\hat{\sigma}_{\mathbf{m}_{k}}\rangle\).

### Models studied

#### iv.1.1 Ising Model

The transverse field Ising model (TIM) [60] is the simplest spin model that reveals interesting properties of quantum magnetism, such as quantum phase transitions and quantum spin glasses, and can be used to simulate quantum annealing [61]. In this paper, given the naive implementation of the algorithm, we restrict ourselves to the one-dimensional case, for which the spectrum can be found analytically by a JW transformation from the spin model onto free fermions [60] (note that this is the reverse of the JW mapping mentioned elsewhere in this work, which refers to encoding fermionic degrees of freedom into qubits). The Hamiltonian for the TIM with \(n\) sites and periodic boundary conditions is defined as: \[\hat{H}=-J\sum_{i=1}^{n}\hat{Z}_{i}\hat{Z}_{i+1}-h\sum_{i=1}^{n}\hat{X}_{i}, \tag{39}\] where \(\hat{Z}_{n+1}=\hat{Z}_{1}\). The Ising model with \(n\) sites is represented with \(n\) qubits.
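As an aside, the quantities entering Equation (36) are easy to obtain numerically for this model. The following numpy sketch (not the pytket implementation used for our simulations) builds the Hamiltonian of Equation (39) for \(n=4\), \(J=0.5\), \(h=0.1\), and estimates \(r_{\epsilon}\) for the equal-superposition initial state introduced in the next paragraph; the target accuracy \(\epsilon\) is an arbitrary illustrative choice.

```python
import numpy as np
from functools import reduce

# Sketch: build the TIM Hamiltonian of Equation (39) and estimate the
# number of ITE steps r_eps from Equation (36) for the uniform initial
# state H^{(x)n} |0...0>.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(op, site, n):
    return reduce(np.kron, [op if i == site else I2 for i in range(n)])

n, J, h, dtau, eps = 4, 0.5, 0.1, 0.1, 1e-3
H = -J * sum(op_on(Z, i, n) @ op_on(Z, (i + 1) % n, n) for i in range(n))
H -= h * sum(op_on(X, i, n) for i in range(n))

E, V = np.linalg.eigh(H)
a = V.T @ np.full(2 ** n, 2 ** (-n / 2))   # overlaps of the uniform state
# Use the lowest excited state with non-zero overlap: the uniform state
# has zero overlap with states of odd spin-flip parity.
i1 = next(i for i in range(1, 2 ** n) if abs(a[i]) > 1e-12)
r_eps = np.log(a[i1] ** 2 / (eps * a[0] ** 2)) / (2 * (E[i1] - E[0]) * dtau)
print(E[0], int(np.ceil(r_eps)))
```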
Under the JW transformation, the TIM Hamiltonian is composed of \(2n-1\) Pauli strings. We prepare the initial state to be an equal superposition of all spin basis states using the Hadamard gate: \(H^{\otimes n}\ket{0}^{\otimes n}\). This represents a uniform prior - that is, we have not encoded any prior information about what we might expect the ground state (GS) to be. Importantly, since we consider the ferromagnetic limit, \(J>h\), we do not benefit from a good initial guess for the GS.

#### iv.1.2 Hubbard model

The fermionic Hubbard model (FHM) is the simplest possible model for correlated electrons; it approximates long-range Coulomb interactions with on-site interactions, but still exhibits a wide range of interesting phenomena, including magnetic ordering, metal-insulator transition and superconductivity [62, 63]. Further, the FHM exhibits correlations that are difficult to capture by classical methods. For this reason, the Hubbard model is widely used as a benchmark for quantum algorithms. Again, we restrict our analysis to the one-dimensional chain under periodic boundary conditions, for which an analytic solution is known [64]. The Hamiltonian for the non-relativistic single-band fermionic Hubbard model in real space is given by: \[\hat{H}=t\sum_{\lambda=\uparrow,\downarrow}\sum_{i,j}\hat{c}_{i,\lambda}^{\dagger}\hat{c}_{j,\lambda}+U\sum_{i}\hat{c}_{i,\uparrow}^{\dagger}\hat{c}_{i,\uparrow}\hat{c}_{i,\downarrow}^{\dagger}\hat{c}_{i,\downarrow}, \tag{40}\] where \(U>0\) corresponds to repulsive on-site electron-electron interactions, and \(t<0\) corresponds to a lowering of the kinetic energy of the system by allowing for delocalisation over the sites. We consider the case where electrons are strongly interacting - that is, when \(\left|\frac{U}{t}\right|\geq 1\). The sum over \(i,j\) is typically restricted to account for the exponentially decaying overlap of wavefunctions between sites. We adopt the standard model, in which hopping is only considered between nearest-neighbours. The Hubbard model with \(m\) sites is represented by \(2m\) qubits under the JW transformation. An occupation number basis is used to enumerate the states in the Hilbert space; for instance, the 2-site model has basis states \(\ket{n_{1\uparrow}n_{1\downarrow}n_{2\uparrow}n_{2\downarrow}}\). The Hamiltonian under the JW transformation comprises \(7m-3\) Pauli strings, including \(\hat{I}^{\otimes 2m}\). Since the Hubbard Hamiltonian conserves the total number of spin up and spin down electrons, \([\hat{H},\hat{n}_{\uparrow}]=[\hat{H},\hat{n}_{\downarrow}]=0\), we can consider the action of the Hamiltonian on a particular sector \((n_{\uparrow},n_{\downarrow})\) of Hilbert space [65]. For a system of \(m\) sites, we consider the half-filled sector \((m/2,m/2)\). Figure 6 gives the initial state preparation circuit; the X and cnot gates are used to excite the state into the sector \((m/2,m/2)\), whereas the H and R\({}_{y}\) gates produce a superposition of the two antiferromagnetic states: \(\frac{1}{\sqrt{2}}(\cos\frac{\theta}{2}-\sin\frac{\theta}{2})\ket{011001...}+\frac{1}{\sqrt{2}}(\cos\frac{\theta}{2}+\sin\frac{\theta}{2})\ket{100110...}\). We choose \(\theta=\pi\), giving an out-of-phase superposition of the two states.
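For concreteness, the 2-site initial state can be written down directly as a state vector (a sketch of the state itself, not of the gate sequence in Figure 6), confirming that it is normalised and lies in the half-filled \((1,1)\) sector:

```python
import numpy as np

# Sketch: the initial state for the 2-site model in the basis
# |n_{1u} n_{1d} n_{2u} n_{2d}>, built directly as a state vector.
# theta = pi gives the out-of-phase superposition (|1001> - |0110>)/sqrt(2).
theta = np.pi
amp_a = (np.cos(theta / 2) - np.sin(theta / 2)) / np.sqrt(2)  # on |0110>
amp_b = (np.cos(theta / 2) + np.sin(theta / 2)) / np.sqrt(2)  # on |1001>

psi = np.zeros(16)
psi[int("0110", 2)] = amp_a
psi[int("1001", 2)] = amp_b

# Both basis states carry one spin-up and one spin-down electron, so the
# state lies in the half-filled sector (1, 1), as required.
assert np.isclose(np.linalg.norm(psi), 1.0)
```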
### Simulation Results

Figures 7 and 8 show the simulation results for the 4-site TIM and the 2-site Hubbard model respectively; both the expected energy and the fraction of successful measurement shots are plotted and compared to the exact results, obtained from an exact diagonalisation of the Hamiltonian, as well as to the expected evolution of the Trotterised ITE operator. There is good agreement between the expected Trotter results and the simulation results; deviations not accounted for by the error calculations can be attributed to additional stochastic errors. We observe the expected exponential decay in the total probability of successful application - the longer the evolution time required for convergence to the ground state, the fewer the number of successful shots remaining that can be used in the final estimation of the energy. As expected, the stochastic errors in the mean energy increase throughout the evolution as the number of successful shots decreases. As the system size increases, the final success probability at convergence is reduced. After increasing the number of sites in the Hubbard model to 3 and 4 sites, the success probability at convergence decreases to \(10^{-5}\) and \(10^{-9}\) respectively, requiring a prohibitive number of shots (roughly \(10^{9}\) and \(10^{13}\)) to obtain an accurate estimation of the ground state.

## V Discussion

The PITE algorithm presented in this paper, Trotterised PITE, is shown to systematically recover the ground state for the small systems investigated, giving incentive for further investigation. The most comparable PITE proposal to the method presented in this paper uses a block encoding constructed from a LCU [34]. Similar to Trotterised PITE, the LCU algorithm uses one ancillary qubit, which is reset after each mid-circuit measurement. Unlike many of the previous PITE proposals, both methods automatically build the required circuits given any input Hamiltonian in a Pauli basis. Additionally, both methods rely on the Pauli gadget primitive, immediately giving access to a wealth of optimisation techniques [37]. The main difference between the two proposals arises as a tradeoff between circuit depth and the number of mid-circuit measurements required. LCU PITE uses \(2M\) controlled Pauli gadgets per time step, where \(M\) is the number of Pauli strings comprising the Hamiltonian, followed by a single mid-circuit measurement. On the other hand, Trotterised PITE uses \(M\) (uncontrolled) Pauli gadgets per time step, along with \(M\) mid-circuit measurements. Note that the requirement for more mid-circuit measurements in our algorithm is not a restriction for some quantum computer architectures, like those based on trapped ions, as they are able to execute mid-circuit measurements on a similar timescale as gate operations. The limiting factor in the implementation of all PITE algorithms, as with most block encoding methods, is the exponential decay in the success probability with the number of time steps, necessitating the use of large numbers of shots, from which the successful shots are post-selected.
We emphasise that it is likely that any PITE algorithm will need to be run in tandem with some form of AA procedure to boost the success probability, although it is important to appreciate that the design of this AA procedure is a non-trivial task and requires specific consideration of the particular PITE procedure that is to be boosted; for instance, using Grover's search algorithm, as demonstrated by Liu et al [32] and by Nishi et al [52], who proposed an AA procedure for the LCU PITE method. Indeed, the simulation results demonstrate that, given an initial naive implementation of the Trotterised PITE algorithm, it is prohibitively expensive to apply it to larger systems; for the Hubbard Hamiltonian, it was even infeasible to run simulations for more than 2 sites.

Figure 7: Energy estimate (left panel), relative energy compared to the true GS energy, \(E_{0}\) (centre panel) and success probability (right panel) plotted as a function of the number of time steps for the 4-site TIM Hamiltonian, with \(J=0.5\), \(h=0.1\) (39), and \(\Delta\tau=0.1\). The red curves are the exact results obtained from an exact diagonalisation of the Hamiltonian, the green curves are the expected results from the Trotterised ITE operator, and the black points have been evaluated with the Trotterised PITE algorithm using \(10^{5}\) measurement shots.

Figure 8: Energy estimate (left panel), relative energy compared to the true GS energy, \(E_{0}\) (centre panel) and success probability (right panel) plotted as a function of the number of time steps for the 2-site Hubbard Hamiltonian, with \(t=-0.1\), \(U=0.1\) (40), and \(\Delta\tau=0.1\). The red curves are the exact results obtained from an exact diagonalisation of the Hamiltonian, the green curves are the expected results from the Trotterised ITE operator, and the black points have been evaluated with the Trotterised PITE algorithm using \(10^{5}\) measurement shots.

Note that, although Trotterised PITE requires more mid-circuit measurements per time step than LCU PITE, this does not affect the final success probability. This is because the success probability at any given time is determined by the degree of non-unitarity of the operator - that is, it is largely determined by the Hamiltonian itself, and to a lesser extent by the prefactor \(\frac{1}{\alpha}\) in the block encoding, which reduces the success probability of the implementation of the block encoding by a factor of \(\alpha^{2}\) (7). The value of the prefactor \(\frac{1}{\alpha}\) for the Trotterised PITE method can be increased by reducing the 1-norm of the qubit Hamiltonian, a task that is of interest to several Hamiltonian simulation procedures; to this end, significant work has already been carried out [55]. The final success probability in a PITE procedure largely depends on the evolution time required for convergence to the ground state \(\tau\), which is a property inherent to the system, since it is fixed by the energy difference between the ground state and first excited state of the system (Equation 36). However, \(\tau\) is also dependent on the initial guess wavefunction. When using a Jordan-Wigner transformation, which maps each spin orbital onto a qubit, subsequent entanglement of the qubits will encode a Full Configuration Interaction (FCI) wavefunction. It is important to note that, since this work presents a proof-of-principle implementation of a Trotterised PITE procedure, the initial states used in this work were particularly poor.
Typically, the Hartree-Fock (HF) state is used as the initial guess: as the system size grows, the contribution of the HF state to the ground state diminishes [66], and \(\tau\) must correspondingly increase. Reducing \(\tau\) requires choosing an initial state with better overlap with the ground state: for instance, by running an initial classical simulation to screen for the most important amplitudes in the FCI expansion. Filip et al. used this approach to screen for important amplitudes in a Unitary Coupled Cluster Ansatz, used in a variational quantum eigensolver routine [51]. On quantum hardware, we have the additional complexity of needing to find methods of efficiently preparing the desired initial state [67]. The mitigation of the exponential decay in the success probability is the most important obstacle for the implementation of PITE methods. However, we should also consider methods for reducing the circuit depths. These depend on: (1) the number of time steps, and (2) the number of Hamiltonian terms applied per time step. Aside from being proportional to \(\tau\), the minimum number of time steps required is also inversely proportional to the maximal time step \(\Delta\tau\) that will still guarantee convergence to the ground state. This is, in turn, determined by the error in the approximation used. As mentioned, both the recently proposed LCU-based algorithm [34] and the Trotter decomposed method presented in this paper incur an error that scales as \((\Delta\tau)^{2}\) per time step (Equation 14). The Trotter error is, in part, also determined by the inherent complexity of the Hamiltonian: the error is increased when many non-commuting terms are applied in series - the ordering of the local terms should therefore be optimised to maximise the number of commuting partitions [44; 45; 10]. However, this approach is only effective when the Hamiltonian contains many commuting terms. Indeed, numerical simulations demonstrate that it is possible to reach much lower Trotter errors than those predicted by the best proven error bounds [44; 45]. In lieu of this, recent approaches to Trotter error mitigation have considered randomising the order of the Hamiltonian terms [68; 69]. Further, classical stochastic ITE (quantum Monte Carlo) methods randomly sample Hamiltonian terms, effectively weighted by the population of walkers located between connected basis states. They rely on the fact that, in so doing, it is still possible to converge to the ground state [48; 49; 50]. Similarly, the qDRIFT algorithm randomly samples Hamiltonian terms weighted by their coefficients: despite largely forsaking knowledge of the internal commutative structure of the Hamiltonian, it eliminates the dependence of the gate counts on the system size (Equation (15)) [70; 71]. Further to this, the number of local terms may be minimised with a sensible choice of basis. Whilst we have used a second quantised basis throughout this paper, giving rise to a qubit representation of the Hamiltonian composed of Pauli strings, it has been argued that a first quantised basis may produce more efficient scalings [34]. Any ITE algorithm using a Pauli representation for the Hamiltonian can easily be extended to the first quantised basis using the protocol proposed by Kosugi et al. [34; 72]. Aside from improving the success probability, and thus reducing the number of shots required, we should also consider reducing the time taken for each shot to run.
We note that improvements in hardware to provide a quit-if-fail functionality would remove the need to run failed shots to the end of the circuit, greatly reducing the average run-time across shots.

## VI Conclusions

We have developed a purely quantum routine for performing probabilistic imaginary time evolution (PITE), based on a Trotter decomposition of the Hamiltonian. The block encoding suggested can be thought of as a modification of the Pauli gadget primitive, which is an efficient and widely used circuit implementation for real time evolution. PITE algorithms avoid many of the limitations of near-term algorithms. Namely, they avoid the restriction on the accuracy that can be achieved as a consequence of using a fixed ansatz in variational methods, including VITE, and any restrictions placed on the locality of the Hamiltonian, as is the case in QITE. In this paper, we have implemented the Trotterised PITE block encoding with a simple initial protocol: successful shots are post-selected, no amplitude amplification procedure is used, and only minimal optimisation of the number of mid-circuit measurements and of the circuit depth is applied. We applied this routine to the task of ground state determination in one dimension, for the transverse field Ising model with 4 sites and the fermionic Hubbard model with 2 sites. In particular, we found that the number of shots required by this naive implementation became prohibitively high even for a 4-site fermionic Hubbard model. We argue that this behaviour is expected given the nature of the PITE approach. The limiting factor in the performance of all PITE algorithms is that they exhibit an exponential decay in the probability of successful application with the number of mid-circuit measurements applied - indeed, this problem is shared by many block encoding methods. Importantly, the algorithm successfully recovers the ground state for small systems, giving incentive to adapt its structure to overcome this limitation. We discussed a multitude of strategies that could be applied to reduce the run-time of the algorithm, of which the inclusion of an amplitude amplification procedure is likely to be the most significant contribution [32, 52]. ITE methods implemented on quantum circuits have found many applications in recent years. Sokolov et al used ITE for the determination of the ground state of transcorrelated Hamiltonians [73]. They obtained promising results, gaining up to four orders of magnitude improvement in the absolute energy error in comparison to non-transcorrelated approaches. More generally, algorithms that perform imaginary time evolution can be used as a subroutine; for instance, in the calculation of correlation functions [74] and in the estimation of low-lying excited states using quantum subspace diagonalisation methods [22]. Moreover, we note that our algorithm can be readily extended to the real (or imaginary) time Trotterised simulation of non-Hermitian systems, \(\hat{H}=\hat{H}_{1}+i\hat{H}_{2}\), where \(\hat{H}_{1}\) and \(\hat{H}_{2}\) are Hermitian, by concatenating Pauli gadgets for real time propagation and for imaginary time propagation.

## Acknowledgements

The authors would like to thank Michael Foss-Feig and Gabriel Greene-Diniz for feedback on the manuscript.
2310.13625
Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers
To address security and safety risks stemming from highly capable artificial intelligence (AI) models, we propose that the US government should ensure compute providers implement Know-Your-Customer (KYC) schemes. Compute - the computational power and infrastructure required to train and run these AI models - is emerging as a node for oversight. KYC, a standard developed by the banking sector to identify and verify client identity, could provide a mechanism for greater public oversight of frontier AI development and close loopholes in existing export controls. Such a scheme has the potential to identify and warn stakeholders of potentially problematic and/or sudden advancements in AI capabilities, build government capacity for AI regulation, and allow for the development and implementation of more nuanced and targeted export controls. Unlike the strategy of limiting access to AI chip purchases, regulating the digital access to compute offers more precise controls, allowing regulatory control over compute quantities, as well as the flexibility to suspend access at any time. To enact a KYC scheme, the US government will need to work closely with industry to (1) establish a dynamic threshold of compute that effectively captures high-risk frontier model development, while minimizing imposition on developers not engaged in frontier AI; (2) set requirements and guidance for compute providers to keep records and report high-risk entities; (3) establish government capacity that allows for co-design, implementation, administration and enforcement of the scheme; and (4) engage internationally to promote international alignment with the scheme and support its long-term efficacy. While the scheme will not address all AI risks, it complements proposed solutions by allowing for a more precise and flexible approach to controlling the development of frontier AI models and unwanted AI proliferation.
Janet Egan, Lennart Heim
2023-10-20T16:17:29Z
http://arxiv.org/abs/2310.13625v1
# Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers

###### Abstract

To address security and safety risks stemming from highly capable artificial intelligence (AI) models, we propose that the US government should ensure compute providers implement Know-Your-Customer (KYC) schemes. Compute - the computational power and infrastructure required to train and run these AI models - is emerging as a node for oversight. KYC, a standard developed by the banking sector to identify and verify client identity, could provide a mechanism for greater public oversight of frontier AI development and close loopholes in existing export controls. Such a scheme has the potential to identify and warn stakeholders of potentially problematic and/or sudden advancements in AI capabilities, build government capacity for AI regulation, and allow for the development and implementation of more nuanced and targeted export controls. Unlike the strategy of limiting access to AI chip purchases, regulating the _digital access_ to compute offers more precise controls, allowing regulatory control over compute quantities, as well as the flexibility to suspend access at any time. To enact a KYC scheme, the US government will need to work closely with industry to (1) establish a dynamic threshold of compute that effectively captures high-risk frontier model development, while minimizing imposition on developers not engaged in frontier AI; (2) set clear requirements and guidance for compute providers to keep records and report high-risk entities; (3) establish government capacity that allows for co-design, implementation, administration and enforcement of the scheme; and (4) engage internationally to promote international alignment with the scheme and support its long-term efficacy. While the scheme will not address all AI risks, it complements existing proposed solutions by allowing for a more precise and flexible approach to controlling the development of frontier AI models and unwanted AI proliferation.

## Executive Summary

Emerging risks associated with the development of frontier AI models1 warrant additional regulatory intervention by the US government. The potential of these AI capabilities to enhance adversarial military capabilities and facilitate human rights abuses has led the US to introduce export controls that restrict exports of the specialized AI chips required to develop and deploy large AI models, among other restrictions. Yet gaps in these controls have emerged: there are currently no restrictions on entities accessing controlled chips and their associated computing power through digital means, such as cloud compute provision, offering a potential avenue for adversarial militaries and non-state actors of concern to benefit from US technology. While a blanket ban on cloud access could harm US technology leadership and would be difficult to enforce, there are clear security grounds for addressing these proliferation risks.

Footnote 1: Defined as ‘highly capable foundation models that could exhibit dangerous capabilities’, Anderljung et al., _Frontier AI Regulation_

At the same time, broader risks to security and public safety are eliciting concern and a willingness to act from industry and government alike. Experts in industry and academia are warning of significant misuse risks, such as AI increasing the availability of biological weapon information and incentivizing malicious actors to attempt their development,2 as well as increasing risks of misinformation and electoral interference.
US AI leaders have committed to following voluntary guidelines,3 but as noted by Senator Blumenthal, Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, 'there will be regulation. The only question is how soon, and what.'4 As the home to leading players in the AI industry and supply chain, the US is uniquely positioned to shape regulatory approaches. Yet interventions will need to carefully balance maintaining US influence and industry power with providing avenues to effectively identify and mitigate critical risks.

Footnote 2: Zakrzewski, Lima, and DiMolfetta, “Tech leaders including Musk, Zuckerberg call for government action on AI.”

Footnote 3: Kang, “In Show of Force, Silicon Valley Titans Pledge ‘Getting This Right’ With A.I.”

Across these proliferation, safety, and security risks, compute - the computational power and infrastructure required to train and run these AI models - offers a key node for oversight and control. The quantity of compute required for frontier AI models has resulted in cloud compute forming a key part of the AI supply chain, with the US as the global leader in AI compute provision. Alongside other potential regulatory interventions,5 increasing oversight of AI compute could enable earlier identification of emerging risks and more targeted responses.

Footnote 4: _Oversight of A.I._.

Footnote 5: Such as interventions at the model and applications levels, as proposed by Microsoft. Smith, _Developing and Deploying AI Responsibly: Elements of an Effective Legislative Framework to Regulate AI_.

This paper recommends that the US government implement a Know-Your-Customer (KYC) scheme for AI compute providers, most notably Cloud Service Providers (CSPs), to enable greater oversight of the development of frontier AI models. Such a concept has already been proposed by Microsoft,6 as well as AI researchers,7 as a way of increasing accountability and managing risks. Implemented in partnership with industry, a KYC scheme has the potential to warn of significant advances in AI capability, build government capacity in AI regulation, and allow for more nuanced and targeted controls. This scheme could be accompanied by updated Export Administration Regulations that restrict the provision of above-threshold compute to companies on the Entity List. Beyond export controls, a KYC scheme could provide the groundwork for domestic safety regulations and support responsible AI development. The KYC scheme could be designed to leverage existing technical metrics and preserve privacy for compute providers and customers. This paper draws on lessons learned from the mature application of KYC in the financial sector to propose the development of a KYC scheme for compute providers. It recommends that the US government work with industry to:

1. **Establish a threshold of compute for the scheme** that effectively captures high-risk frontier model development,8 while minimizing imposition on developers not engaged in frontier AI. The threshold should be defined by the total amount of computational operations - a metric easily accessible to compute providers, as they employ chip-hours for client billing, convertible to total computational operations. Additionally, this threshold would need to be dynamic and subject to periodic reassessments by government, in close consultation with industry, to remain in step with developments in training efficiency as well as broader societal changes.
It would also need to be supported by collaboration between compute providers, as well as with government, to minimize evasion risks.

Footnote 8: In this paper, we are primarily addressing the governance of AI systems development. Questions about potential large-scale deployment, i.e. inference compute, are outside the scope of this particular discussion. However, the oversight of deployment compute might also be a future policy tool for regulating AI deployment (Appendix A in O’Brien, Ee, and Williams, _Deployment Corrections: An Incident Response Framework for Frontier AI Models_; Brundage et al., forthcoming).

2. **Set clear requirements for compute providers,** including requirements for gathering information, implementing fraud detection, keeping records, and reporting to government any entities that match government-specified 'high-risk' profiles. These requirements should be technically feasible, resilient against efforts to evade detection and enforceable, while preserving privacy.

3. **Establish government capacity** within the US Department of Commerce that allows for the co-design, implementation, administration, and enforcement of the scheme. This capacity should draw on existing expertise within the US government, as well as contribute to a deeper understanding of AI regulatory challenges to inform broader policies.

4. **Engage with international partners** to promote alignment with the scheme. While the US, as a significant global compute provider that yields substantial influence in the semiconductor supply chain, can exert broad influence through a domestically implemented scheme, cooperation with international partners will be a key enabler of increased oversight in the longer term. Consistent international standards will help ameliorate the risk of diminishing US AI leadership and will be essential to the long-term effectiveness of the scheme.

In support of this scheme, this paper makes several further recommendations to the US government, including to engage industry to co-design the scheme; develop more targeted controls for the cloud; publish guidance on information sharing in the context of US antitrust laws to enable effective risk management by CSPs; and engage in strong international advocacy to garner international buy-in and alignment.

## Policy Context

### Gaps have emerged in the US's AI regulatory architecture

The US's October 2022 export controls, further strengthened in 2023,9 are designed to limit the PRC's ability to access and develop semiconductors for highly capable artificial intelligence (AI) applications.10 Experts expect this measure to slow the PRC's development of advanced AI,11 forming a cornerstone of US efforts to counter the proliferation of US-enabled AI technology to adversarial militaries. A subsequent executive order in August 2023 complements this, by inhibiting US investment in China's advanced technology sector.12 Yet there is evidence of other avenues through which PRC entities can still access and develop high-risk advanced AI capabilities. It is not necessary for these entities to own the physical chips if they are accessible digitally. PRC actors can use the computing power of controlled US chips through the cloud, without any formal mechanism for oversight. There are clear national security grounds for addressing this gap.
Regardless of whether management is achieved through a blanket ban (not recommended, see Box 1 for details) or active monitoring and more targeted controls (recommended), increased visibility of advanced AI cloud compute is an essential prerequisite to effective action.

Footnote 9: Bureau of Industry and Security, _Commerce Strengthens Restrictions on Advanced Computing Semiconductors, Semiconductor Manufacturing Equipment, and Supercomputing Items to Countries of Concern_.

Footnote 10: Bureau of Industry and Security, _Commerce Implements New Export Controls on Advanced Computing and Semiconductor Manufacturing Items to the People’s Republic of China_.

Footnote 11: Allen and Benson, “Clues to the U.S.-Dutch-Japanese Semiconductor Export Controls Deal Are Hiding in Plain Sight.”

Footnote 12: The White House, _Executive Order on Addressing United States Investments in Certain National Security Technologies and Products in Countries of Concern_.

**Box 1: Controlling the PRC's access to chips through the cloud**

As the PRC's access to advanced AI chips through the cloud seems to be at odds with current export controls, US lawmakers are already looking to take action. A Bill introduced into Congress in mid-July 2023 proposes to 'prohibit support for the remote use or cloud use of integrated circuits' covered by export controls to prevent their use by actors in China or Macau.13 Yet a blanket ban on this access would likely lead to adverse outcomes.14 In particular, it risks diminishing US market share of cloud computing services and the US hardware used in these services, while the PRC focuses completely on its own sovereign capability or establishing access elsewhere.15

Unlike physical exports, cloud access allows for a precise and flexible mechanism to manage proliferation risks. Export controls on chips are blunt by necessity - once exported, chips cannot be retracted nor their end use controlled. So, while a PRC entity's use of a small number of high-capability chips would not in itself raise strategic AI proliferation concerns,16 individual chips exported to the PRC could be amalgamated to bolster and advance in-country capability. The use of this capability is then beyond US awareness and control. In contrast, digital access to these chips only provides entities with point-in-time compute power, which can be restricted or shut off at any stage. Compute providers also retain insight into how much compute is being used per customer. Cloud access therefore allows for flexible digital controls that can be precisely targeted to capture the most resource-intensive compute uses, and be easily adapted as geopolitical conditions, capabilities, and risks change. As the compute provider already has compute usage information for its customers, this approach does not require privacy trade-offs to be effective. This allows for a targeted approach that directly addresses risk, while minimizing impact on US industry and technology leadership.

However, controlling compute via providers is not a perfect instrument and we want to be conscious of its limitations. We discuss them in the Appendix below.
### In addition to closing export control gaps, greater oversight of AI compute can yield broader domestic and international benefits

In light of increasing concerns around AI misuse and risks to public safety, governments, industry,17 and researchers18 are calling for more oversight and regulation of AI. As posited by Senator Richard Blumenthal, Chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law (the Subcommittee), 'Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.'19 Congress continues to engage industry on potential AI regulation. Seven leading US AI companies have agreed to voluntary ethical, security, and safety standards and called for greater public oversight, but these measures are not currently enforceable and will be unlikely to affect the behavior of broader industry players.20 Microsoft's Brad Smith has called for KYC-inspired techniques to be used to increase accountability in AI development.21 As a key enabler of AI development, compute is emerging as a key domestic and international governance instrument that can offer greater government oversight. In particular:

Footnote 18: Rep. Jackson, _Closing Loopholes for the Overseas Use and Development of Artificial Intelligence Act_.

Footnote 19: Blumenthal, _Blumenthal (And AI Software) Delivers Opening Remarks at Senate Hearing on Oversight of Artificial Intelligence_.

Footnote 20: Shear, Kang, and Sanger, “Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools.”

Footnote 21: Smith, _Developing and Deploying AI Responsibly: Elements of an Effective Legislative Framework to Regulate AI_.

Footnote 22: Furthermore, these entities could also encompass AI agents themselves. Consider, for instance, an AI model exhibiting behaviors akin to a computer worm, actively seeking to acquire additional compute resources (e.g., for self-replication). Measures should be in place to ensure that such entities accessing compute are legitimate legal entities, thereby preventing rogue AI systems from taking unauthorized actions.

Footnote 23: Whittlestone et al., “Response to the UK’s Future of Compute Review: A Missed Opportunity to Lead in Compute Governance.”

Footnote 24: Anderljung et al., _Frontier AI Regulation_.

Footnote 25: The White House, _FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI_.

Footnote 26: Waters, “US should use chip leadership to enforce AI standards, says Mustafa Suleyman | Financial Times.”

#### Cloud compute is an essential enabler of AI development

Compute refers to the computational power and infrastructure that is required to train and run AI models and systems. Training compute has grown by a factor of 55 million since 2010, outpacing hardware price performance.27 The intensity of compute required for AI makes it impractical for most AI designers and operators to build and maintain their own data processing centers. This has led to compute emerging as a key node in the AI supply chain, with AI compute providers, most notably CSPs, renting out the hardware infrastructure, including access to advanced AI chips.
It is in the US's interests that AI innovators continue to use cloud compute (rather than procuring and maintaining their own costly hardware capabilities) given that the majority of CSPs are in the US.28 Allowing easy access to and use of superior US capabilities can also make AI innovators more invested in and reliant on leading US software stacks, like NVIDIA's, reducing the incentives for them to develop their own with less US oversight. Continuing to enable and support a strong cloud compute industry in the US will support US technology leadership more broadly. Notably, US companies are subject to US regulations, which serve as a useful channel for interventions to manage AI risks.

Footnote 27: Sevilla et al., _Compute Trends Across Three Eras of Machine Learning_.

Footnote 28: Taylor, _Data centers worldwide by country 2023_.

#### Increasing risks call for increasing vigilance

There is currently no obligation for US cloud providers to actively verify the identity of their customers before providing them with access to advanced AI compute. This is problematic in terms of US export controls. It is also of concern from a broader public interest perspective. As AI models grow in capability, so too do the risks to public safety. Foundation models, also known as general-purpose models (like ChatGPT), have already demonstrated significant capability advances. AI experts across sectors are warning of risks that may arise in future generations of general purpose models, particularly because it is hard to predict how dangerous capabilities might arise, and/or how they might be misused.29 Researchers have coined the term 'frontier AI models', defined as 'highly capable foundation models that could exhibit dangerous capabilities'.30 For example, frontier AI models could have the potential to conduct sophisticated, automated offensive cyber activities, enable non-experts to design and access new bioweapons, generate and proliferate persuasive deepfake content, or act in unexpected ways that cause harm.31 Like their forerunners ChatGPT, Bard, and other current foundation models, these future frontier AI models would need to be trained through significant use of computing power: typically the more computation, the greater the capability. Therefore, in considering options to prevent technology transfer to adversarial militaries, the US government should consider interventions that can also address broader safety and security needs.

Footnote 29: Anderljung et al.

Footnote 30: Anderljung et al.

Footnote 31: Shevlane et al., _Model evaluation for extreme risks_.

**Box 2: What do we mean by 'compute providers'?**

In this context, we use the term 'compute provider' to refer to an entity providing access to computing power and infrastructure. In the majority of cases for AI development, these compute providers are CSPs, businesses that provide access through the cloud (including to foreign entities). Yet given the potential domestic safety implications of frontier AI models, taking a more expansive view of 'compute providers' may increase the longevity and impact of the proposed scheme. From a domestic perspective, it leaves the scheme open to the possibility of capturing providers of compute hardware and in-house compute capability (when it has the ability to go above the designated threshold).
For example, it could be desirable that large technology companies using their AI compute internally also monitor and keep records for compliance with regulations in alignment with a KYC scheme. This is an area that would benefit from further research and analysis in collaboration with compute providers.

## A Know-Your-Customer scheme for compute providers

The US government should introduce a KYC scheme that ensures adequate monitoring for advanced AI cloud compute and allows the government to require compute providers to report high-risk entities and deny access to entities of concern.32 A KYC scheme requires businesses and organizations to verify the identity of their clients in order to provide them with access to particular goods and services. Introducing KYC requirements for entities accessing significant amounts of compute could help identify risks and enable further targeted restrictions where there is significant risk to national and global interests. Requiring compute providers to build greater awareness of the risks could encourage a safer AI industry aligned with public benefit.

Footnote 32: Fist, Heim, and Schneider, _Chinese Firms Are Evading Chip Controls_.

It is important to note that this proposed KYC scheme for advanced AI cloud compute would not capture all AI models being developed or used by malicious actors. For example, there are already specific less-advanced models today, which do not require massive amounts of compute, that raise biochemical weapon development concerns [33] or enable more targeted malicious cyber activity. Setting the threshold to capture and monitor the compute of all AI models would not be beneficial, as it would capture too much information to be useful while imposing a significant burden on industry. Such risks could instead be managed through other safeguards,34 while the proposed in-depth KYC would focus on powerful foundation models trained on significant amounts of compute.

Footnote 33: Urbina et al., “Dual use of artificial-intelligence-powered drug discovery.”

Footnote 34: For example, through Government-Industry engagement on malicious cyber activity, intelligence monitoring of actors of concern etc.

Footnote 35: Office of the Comptroller of the Currency, _Bank Secrecy Act (BSA)_; Financial Crimes Enforcement Network, _USA PATRIOT Act_.

The history of the implementation of KYC in the financial sector could provide useful lessons for scheme design (Box 3).

**Box 3: Learning from KYC in the financial sector**

The most mature application of KYC can be seen in the financial sector, where it forms the cornerstone of Anti-Money Laundering (AML) and Counter-Terrorist Financing (CTF) obligations. In this context, KYC is implemented through domestic US legislation (particularly through the _Bank Secrecy Act_, including as amended by the _USA PATRIOT Act_). This legislation creates obligations for financial institutions to:

* implement a customer identification program
* conduct risk assessments
* undertake enhanced due diligence for customers assessed as higher risk
* identify and verify a customer's beneficial owners
* conduct ongoing monitoring
* report suspicious activity to the US government.36

Footnote 36: Financial Crimes Enforcement Network, _USA PATRIOT Act_.
These obligations are enforced by the Financial Crimes Enforcement Network (FinCEN) within the US Treasury, which acts as a regulator, investigator and financial intelligence unit.37 In recent years, FinCEN has faced enforcement difficulties from the use of shell companies to hide illicit activity.38 In response, FinCEN has implemented a rule establishing a beneficial ownership information reporting requirement, which will obligate companies registered in the US to provide information on the persons who control them, with a start date of January 2024.39

Footnote 37: Hamilton and Condon, “U.S. Treasury Wants a Registry to Crack Down on Shell Companies.”

Given the global nature of financial markets and transactions, KYC for the financial sector is well supported by international cooperation. In particular, the Financial Action Task Force (FATF) is the key intergovernmental organization that promotes global AML/CTF standards.[39] The FATF was established by the Group of 7 (G7) in 1989.[40] The FATF standards include 40 recommendations - including on KYC requirements - that form a framework for member nations to implement through their own domestic legislation.[41] The FATF also publishes lists of jurisdictions that fail to effectively implement safeguards, and calls for members to apply enhanced due diligence and countermeasures to entities from these countries to protect the international financial system.[42] This ensures a consistent approach that supports efforts to counter money laundering, terrorism financing, and the proliferation of weapons of mass destruction.[43]

The financial sector's KYC scheme is also supported by a well-established intelligence architecture specifically for AML/CTF threats. The Egmont Group of Financial Intelligence Units brings together 166 Financial Intelligence Units globally to share intelligence and expertise to counter money laundering and terrorism financing threats.[44]

The cost of compliance and enforcement of AML/CTF is significant. The 2024 Federal Budget allocated $229 million for FinCEN, including funding for 350 staff members, an increase of $39 million from 2023.[45] In 2022, the cost of financial crime compliance efforts across financial institutions was estimated to be $40.7 billion in the US and $274.1 billion globally.[46]

_Implementation: obstacles and adjustments_

The financial sector presents a case study of situations in which non-compliance with obligations persisted until significant penalties were applied. For the first decade of the Bank Secrecy Act, from 1972 to 1985, regulators did not enforce reporting requirements, resulting in low compliance from financial institutions.[47] However, a 1985 $500,000 penalty issued to the Bank of Boston led to a sharp increase in reporting, as well as more evasive behavior from customers.[48] More recently, Congress has taken further action to increase enforcement.
The Anti-Money Laundering Whistleblower Improvement Act, signed into law in December 2022, increases reporting incentives with greater financial rewards for successful tips.[49]

KYC in the financial sector also demonstrates implementation risks, including the use of "structuring" to evade publicly set thresholds.[50] In response to obligations for banks to report transactions exceeding $10,000 in any one day, entities and individuals started to intentionally break up transactions across bank accounts and/or days to avoid scrutiny.[51] Amendments made through the _USA PATRIOT Act_ in 2001 have sought to address this by making structuring a criminal offense.

#### Developing a KYC scheme for advanced AI cloud compute

Informed by the model established by the financial sector, the development of a new KYC scheme will require:

1. defining a threshold of AI compute at which the scheme would apply
2. introducing requirements and guidance for compute providers above that threshold, including reporting to government entities that match specified 'high-risk' profiles, and adhering to rules
3. establishing government capacity for engagement, regulation, and enforcement
4. engaging and cooperating with international partners
5. establishing a process for evaluation and updates to the scheme.

The introduction of a KYC scheme for advanced AI cloud compute would lay the groundwork for understanding the threat, including any substantial access attempts by People's Liberation Army (PLA)-linked entities, and enable more targeted restrictions to prevent entities of concern from gaining access through the cloud. Importantly, it would also increase the government's ability to see trends and emerging risks and point AI policy makers towards companies operating at the cutting edge to allow for better engagement in risk management.

### Compute is indicative of AI capabilities and KYC thresholds should be set accordingly

KYC obligations should apply at and beyond a threshold of advanced AI compute that captures the most critical AI risks, while minimizing regulatory impost on industry.52 The computational performance of hardware can be quantified in floating point operations per second (FLOP/s), and the accumulated computing power going into developing or deploying an AI model can be measured as a total quantity of FLOP.53 This allows for a defined threshold. Government could work closely with AI experts and compute providers to ensure that the threshold is set at a level that captures frontier AI models. Because such models are most likely to emerge at the largest compute scales, the initial threshold could be set at the level of FLOP used to train current foundational models, like GPT-4[54] or Claude 2,[55] thereby also capturing models trained on even more compute. This would also ensure that the burden of compliance with the KYC scheme falls only on operators able to absorb it; GPT-4, for example, is estimated to have cost $50 million to train.56 This would capture only a handful of models at the time of writing. With training requirements continuing to double every six to 12 months, we can expect that an increasing number of cloud-trained AI model developers will be subject to KYC.57

Footnote 52: Here we focus on compute used for the development of advanced AI systems, and potential large-scale deployment. We do not answer the question of inference compute.
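To make the arithmetic concrete, the sketch below shows how a provider might estimate the total FLOP of a contracted training run from cluster size, chip performance, utilization, and duration, and compare it against a threshold. Every number here is an illustrative placeholder, not a proposed or estimated value.

```python
# A minimal sketch of a threshold test for contracted training compute.
# All numbers are illustrative placeholders, not proposed values.

KYC_THRESHOLD_FLOP = 1e25  # hypothetical threshold near current frontier training runs

def estimated_training_flop(num_chips: int,
                            peak_flop_per_s: float,
                            utilization: float,
                            hours: float) -> float:
    """Accumulated compute = chips x peak FLOP/s x utilization x seconds."""
    return num_chips * peak_flop_per_s * utilization * hours * 3600.0

# Example order: 10,000 accelerators at ~1e15 FLOP/s peak,
# 40% average utilization, reserved for 90 days.
order_flop = estimated_training_flop(10_000, 1e15, 0.40, 90 * 24)

if order_flop >= KYC_THRESHOLD_FLOP:
    print(f"{order_flop:.2e} FLOP crosses the threshold: full KYC applies")
else:
    print(f"{order_flop:.2e} FLOP is below the threshold")
```

In this hypothetical, the order accumulates roughly 3e25 FLOP and so crosses the placeholder threshold; in practice the threshold value would be set and regularly revised by government in consultation with industry.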
Improvements in algorithmic efficiency over time may result in increased compute efficiency - requiring less compute to achieve the same training results, and potentially necessitating a lower threshold to capture models of concern.58 Conversely, advances in regulation or society's ability to manage AI impacts may decrease the risks associated with advanced AI models, thereby making a higher threshold appropriate.59 The threshold should therefore be dynamic and subject to regular review by government and industry. It should be responsive to metrics broader than computing power that influence AI capability and society's resilience to AI risks. This will also allow for continued refinement as processes mature, and the ability to adapt to changing geostrategic conditions and risks.

Applying a specified FLOP threshold offers a feasible path to implementation and does not require cloud providers to access the data or confidential information of their customers. Cloud access to chips is billed by the hour, so the accumulated total FLOP is easily identifiable by the compute provider. The compute provider could then implement KYC checks and enhanced due diligence for any projects seeking to cross that threshold. In many cases, the total amount of compute procured will be specified at the time of entering into a contract, but there may also be cases where additional compute is purchased over time, to the point at which a specific vendor crosses the threshold. Compute providers should therefore continuously monitor compute use, and ensure that entities approaching the threshold are funneled into the KYC process before that point is reached.

#### Regulatory impost is likely to be low, with few stakeholders affected

Given the proposed threshold, only a small number of customers for a small number of US compute providers would be affected. Providers that offer, or use in-house, the most advanced computing power also tend to be the most resourced, such as Microsoft Azure, Google Cloud, NVIDIA, Amazon Web Services (AWS), and Oracle Cloud Infrastructure.60 These factors could help mitigate the risk of an overly costly, burdensome regime (a criticism often directed at KYC in the financial sector). In addition to having the bandwidth to implement KYC, these companies may already be working to control significant AI risks, given their public commitments to ethical and/or responsible AI.61 Microsoft has specifically called for the implementation of a KYC program in their report _Governing AI: A Blueprint for the Future_,62 and Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI and NVIDIA, among others, have committed to further safeguards against risky AI.63

Footnote 60: NVIDIA Corporation, _NVIDIA Hopper GPUs Expand Reach as Demand for AI Grows_.

Footnote 61: Croak, _Google Research, 2022 & beyond_.

Footnote 62: Smith, _How do we best govern AI?_.

Footnote 63: Shear, Kang, and Sanger, "Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools."

Footnote 64: Bureau of Industry and Security, _Commerce Implements New Export Controls on Advanced Computing and Semiconductor Manufacturing Items to the People's Republic of China_.

### Requirements on compute providers - due diligence that identifies risk and implements controls

For AI cloud compute services with sufficient computational power (i.e., the type and number of AI chips) to surpass the threshold, the scheme would require compute providers to identify and verify the entity and its beneficial owners, and maintain appropriate records.
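As a sketch of what such records might contain, the structure below loosely mirrors the identification fields listed in Box 4. Field names are illustrative, not a proposed standard; the 25 percent ownership test follows FinCEN's beneficial-ownership definition cited in Box 4.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BeneficialOwner:
    full_name: str                # including any aliases
    date_of_birth: date
    home_address: str
    government_id: str            # government-issued identification number
    ownership_share: float        # FinCEN's rule treats >= 25% as a beneficial owner

@dataclass
class ComputeCustomerRecord:
    company_name: str
    proof_of_incorporation: str   # reference to verified documentation
    registered_office: str
    tax_identifier: str
    directors: list[str]
    beneficial_owners: list[BeneficialOwner]
    stated_purpose: str           # high-level overview of intended compute use
    registry_checks_passed: bool = False   # set once verification succeeds
    flop_procured: float = 0.0             # running total of compute under contract
    high_risk_reports: list[str] = field(default_factory=list)  # reports filed with government
```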
The government could also define 'high-risk' profiles, and require compute providers to report such cases, in order to monitor for emerging risks and to inform future controls.

The KYC scheme should also be used as a key mechanism to ensure the implementation of rules. While the KYC scheme provides the mechanism for oversight of advanced AI compute, rules governing US companies in their provision of such compute can be implemented through complementary existing authorities. For example, the US government could update the rules affecting the Export Administration Regulations to prevent the provision of above-threshold compute to entities on the Entity List without a license.64,65

Footnote 65: This paper does not propose expanding export controls to prevent entities beyond the Entity List from accessing cloud compute. Instead, we recommend KYC is used to develop a more nuanced understanding of the risks of PRC use of US compute. Given the flexible, digital controls it enables, it can be quickly applied once risks are identified.

Beyond export controls, the KYC scheme will allow for the implementation of broader regulations on AI. For example, should the government choose to mandate the voluntary commitments made by leading AI companies - including, for example, having implemented safe-development practices and cybersecurity standards - the KYC checks could also ensure that those accessing advanced AI compute are in compliance before permitting access. In this way, the KYC scheme forms a flexible foundation to ensure oversight of an increasingly sensitive set of technologies.

While we can seek to adapt the model established by the financial sector to the context of compute providers as a starting point (Box 4), consultation and stress-testing with industry stakeholders will be key to developing a workable scheme.

**Box 4: Possible requirements for compute providers**

Building on requirements established in the financial sector, and requiring further consultation with compute providers, KYC requirements for advanced AI cloud compute might include:

* Identifying the entity, including:
  * company name
  * proof of incorporation
  * legal form and status
  * address of registered office
  * list of directors and senior management
  * list of board members
  * basic regulating powers (e.g. memorandum and articles of association)
  * unique identifier (e.g. tax identification number or equivalent, where applicable).
* Identifying key personnel and all beneficial owners,66 including:
  * full name, including any aliases
  * date and place of birth
  * home address
  * government-issued identification number.
* Verifying information provided, including through checking domestic and international government registries.
* Providing a high-level overview of the purpose for the use of compute power and, where relevant, investigating details of sub-projects.
* Conducting ongoing high-level usage monitoring to identify changes or emerging characteristics that could change the assessment, such as:
  * changes in contracts that bring entities into the KYC threshold, or increases in procured cloud compute that exceed the expected scope of the stated project
  * use of compute different from what would be expected from the stated purpose (e.g. high-level usage patterns that may indicate AI training rather than AI deployment).
* Sharing information with other compute providers to identify and mitigate evasion attempts while preserving privacy.
* Assessing whether an entity would match a defined 'high-risk' profile and reporting these cases to the government. Factors that could be considered include:
  * the US specially designated nationals and blocked persons list67
  * the US Entity List68
  * other entities restricted from accessing advanced AI chips under export controls69
  * strong links to a country of concern, which may be informed by factors including that:
    * the entity or beneficial owner/s are based in a country of concern
    * there is evidence that the entity or beneficial owner/s have significant ties to a country of concern
    * the entity's board has significant ties to a country of concern
    * director/s or senior management are currently affiliated with research institutions from a country of concern
    * IP addresses originate in a country of concern
    * there is evidence of large amounts of data going to/from a country of concern
  * the source of the entity's capital or funding is not clear (which could potentially lead to requiring the entity to clarify their source or lose compute access)
  * entities from countries on the FATF high-risk jurisdiction list70
  * requests for significantly more compute power than is typically used to develop current cutting-edge models.
* Using KYC to enforce established rules, which may include:
  * updated Export Administration Regulations that restrict companies from providing above-threshold compute to entities on the Entity List
  * seeking confirmation of the entity's implementation of safe-development practices, including cybersecurity standards (if the voluntary commitments agreed upon between industry and the White House become mandatory).71
* Maintaining records on the provision or denial of above-threshold compute to aid in investigations or demonstrate compliance, when needed.

Footnote 66: As defined in FinCEN's final rule implementing beneficial ownership information reporting requirements, a beneficial owner is 'any individual who, directly or indirectly, either (1) exercises substantial control over a reporting company, or (2) owns or controls at least 25 percent of the ownership interests of a reporting company.' Financial Crimes Enforcement Network, _Beneficial Ownership Information Reporting Rule Fact Sheet_.

Footnote 67: Office of Foreign Assets Control, _Specially Designated Nationals And Blocked Persons List (SDN) Human Readable Lists_.

Footnote 68: Bureau of Industry and Security, _Export Administration Regulations: Supplement No. 4 to Part 744 - Entity List_.

Footnote 69: Bureau of Industry and Security, _Commerce Implements New Export Controls on Advanced Computing and Semiconductor Manufacturing Items to the People's Republic of China_.

Footnote 70: As at August 2023, these are DPRK, Iran and Myanmar. Financial Action Task Force, _High-Risk Jurisdictions subject to a Call for Action_.

Footnote 71: The White House, _FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI_.

_Figure 1: Mechanisms of KYC Scheme for compute providers_
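As a concrete illustration of how the 'high-risk' factors listed in Box 4 might be screened in software, the sketch below flags an entity against placeholder versions of the relevant lists. The list contents, country codes, the `ScreeningInput` fields, and the compute-comparison constant are all hypothetical assumptions; a real implementation would consume official SDN and Entity List data feeds and government-defined country designations.

```python
from dataclasses import dataclass

# Illustrative list contents only; a real scheme would consume official
# SDN / Entity List data feeds. All names and values are placeholders.
SDN_LIST = {"EXAMPLE SANCTIONED CO"}
ENTITY_LIST = {"EXAMPLE LISTED LAB"}
FATF_HIGH_RISK = {"KP", "IR", "MM"}          # DPRK, Iran, Myanmar (as at August 2023)
COUNTRIES_OF_CONCERN: set[str] = set()       # to be defined by government
TYPICAL_FRONTIER_RUN_FLOP = 1e25             # rough placeholder, not an estimate

@dataclass
class ScreeningInput:
    company_name: str
    registered_country: str      # ISO 3166-1 alpha-2 code
    funding_source_clear: bool
    flop_requested: float

def high_risk_flags(entity: ScreeningInput) -> list[str]:
    """Return the Box 4 factors matched by an entity; under the proposed
    scheme, any match would be reported to the government."""
    flags = []
    name = entity.company_name.upper()
    if name in SDN_LIST:
        flags.append("on the SDN and blocked persons list")
    if name in ENTITY_LIST:
        flags.append("on the Entity List")
    if entity.registered_country in FATF_HIGH_RISK:
        flags.append("registered in a FATF high-risk jurisdiction")
    if entity.registered_country in COUNTRIES_OF_CONCERN:
        flags.append("strong links to a country of concern")
    if not entity.funding_source_clear:
        flags.append("unclear source of capital or funding")
    if entity.flop_requested > 10 * TYPICAL_FRONTIER_RUN_FLOP:
        flags.append("requests significantly exceed typical frontier training compute")
    return flags

# Hypothetical example: unclear funding plus an unusually large compute request.
acme = ScreeningInput("Acme Compute Labs", "XX",
                      funding_source_clear=False, flop_requested=5e26)
print(high_risk_flags(acme))
```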
#### 2.2.1 Technical feasibility

It is likely that information pertaining to an entity's identity and beneficial owners will be readily attainable, given that entities would have such information on hand for their financial institutions. Similarly, checking information against government registries and lists is also feasible, and can draw on existing compliance expertise from the financial sector.

However, while compute providers can collect statements from customers on their planned use of cloud compute, this can be difficult to verify in practice. Cloud service providers (CSPs) often take pride in their ability to offer privacy to their customers, with some providers designing 'confidential compute' offerings to make it technically impossible for the provider to inspect the customer's data.72 It is not always clear what compute is being used for, with both the training of foundational models and the use of AI models for inference requiring intensive compute power.73 Given the sensitive proprietary information and data involved in cutting-edge AI models, requirements that significantly affect privacy will likely generate significant industry backlash and diminish US industry power. The dispute between the FBI and Apple in 2016 is evidence of the tension between the US government's security priorities and the privacy principles held by the technology sector and general public.74

Further research and collaboration with industry is warranted to identify mechanisms that allow for more effective verification in a way that preserves privacy. A helpful starting point could be focusing on the types of clusters used and how the GPUs are networked, as well as chip hours, which tend to differ according to purpose. This information is known to the compute provider, as these requirements would generally be specified as part of a customer order. Thus, the implementation of the KYC scheme would not require the compute provider to access the underlying code, data, or any system-level insights, maintaining appropriate privacy standards.

Footnote 72: NVIDIA Corporation, _NVIDIA Confidential Computing_.

Footnote 73: L. Heim, _Compute - Transformative AI and Compute_.

Footnote 74: Kharpal, _Apple vs FBI_.

Footnote 75: US Department of Justice, _The Antitrust Laws_.

#### 2.2.2 Mitigating attempts to evade detection

As seen in the case of KYC in the financial sector, malicious actors may seek to evade detection by engaging in "structuring". In the case of AI, that structuring could involve finding ways to deconstruct compute-intensive projects into smaller, discrete sub-projects that fall below the reporting threshold. Compute providers would be responsible for undertaking their own fraud prevention to ensure that they identify a single entity acting as multiple customers, a measure that is likely already present in existing practices to prevent customers from bypassing terms and conditions; a sketch of such detection follows below. The government would need to work closely with compute providers and the AI industry to monitor how AI development changes in response to the introduction of such schemes and develop responses.

An information-sharing mechanism could be a key tool that compute providers could use to work together to identify actors purchasing disaggregated compute from different companies. However, information-sharing between competitors may be restricted by US Antitrust Laws.75 Close engagement with legal and regulatory experts will be required to carefully design an appropriate scheme, and/or statutory protection could be achieved through legislation.
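To illustrate the kind of in-house fraud prevention described above, the sketch below groups accounts by shared beneficial owners and flags owners whose combined compute crosses the threshold even though each individual account stays below it. The account fields, identifiers, and threshold value are hypothetical placeholders.

```python
from collections import defaultdict

KYC_THRESHOLD_FLOP = 1e25   # same illustrative threshold as in the earlier sketch

def structuring_suspects(accounts: list[dict]) -> list[str]:
    """Owners whose aggregate use crosses the threshold even though no
    single account does - the compute analogue of transaction structuring."""
    totals: dict[str, float] = defaultdict(float)
    for acct in accounts:
        for owner_id in acct["beneficial_owner_ids"]:
            totals[owner_id] += acct["flop_consumed"]

    suspects = []
    for owner_id, total in totals.items():
        owner_accounts = [a for a in accounts
                          if owner_id in a["beneficial_owner_ids"]]
        if (total >= KYC_THRESHOLD_FLOP
                and all(a["flop_consumed"] < KYC_THRESHOLD_FLOP
                        for a in owner_accounts)):
            suspects.append(owner_id)
    return suspects

# Hypothetical example: three accounts, each below threshold, one ultimate owner.
accounts = [
    {"account_id": "a1", "beneficial_owner_ids": {"owner-x"}, "flop_consumed": 4e24},
    {"account_id": "a2", "beneficial_owner_ids": {"owner-x"}, "flop_consumed": 4e24},
    {"account_id": "a3", "beneficial_owner_ids": {"owner-x"}, "flop_consumed": 4e24},
]
print(structuring_suspects(accounts))  # ['owner-x'] - combined 1.2e25 FLOP
```

A single provider can only run this check over its own accounts; catching disaggregated purchases across competing providers is what motivates the information-sharing mechanism discussed above.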
Previous case studies offer some hope: statements from the Department of Justice and Federal Trade Commission noted that sharing information regarding cybersecurity threats was appropriate, and that a well-designed cyber threat sharing scheme would be unlikely to raise antitrust concerns.76 Privacy-preserving techniques, such as Private Set Intersection computation,77 can also be employed effectively in support of information sharing.

Footnote 76: Federal Trade Commission, _FTC, DOJ Issue Antitrust Policy Statement on Sharing Cybersecurity Information_.

Footnote 77: NIST Computer Security Research Center, _CSRC Presentation_.

Another detection evasion risk could arise from the use of shell companies to obscure an entity's ultimate owners. The beneficial ownership information reporting requirement, which commences in the US in January 2024 and is currently being adopted more broadly by FATF members, could help decrease this risk by requiring companies to disclose information on the people who ultimately own them.78 However, given the strategic and economic importance of advanced AI, it is likely that malicious actors will continue to try to obfuscate their identities. Given the relatively small number of entities seeking to access significant amounts of advanced AI compute in the near term, a government enforcement team could consider undertaking its own investigations and spot checks on companies.

Footnote 78: Financial Crimes Enforcement Network, _FinCEN Issues Final Rule for Beneficial Ownership Reporting to Support Law Enforcement Efforts, Counter Illicit Finance, and Increase Transparency_.

#### 2.2.3 Enforcing the KYC Scheme

A key challenge for the scheme is how regulators can enforce compliance with KYC requirements amongst compute providers. For example, how would the regulator identify a compute provider's failure to effectively implement the scheme, and/or deliberate non-compliance? Information and expertise asymmetry between technologically informed compute providers and nascent government regulatory capability will add to this challenge. Microsoft's advocacy for the introduction of KYC means there is some industry goodwill to leverage, but the government will also have to incorporate external expertise to guard against regulatory capture. As a starting point, enlisting support from jurisdictions with experience in AI regulation, such as the EU, as well as engaging AI researchers, could help strengthen enforcement abilities. The administration of this scheme would build a deeper understanding of AI in government, and enable more effective AI policy more broadly.

Unlike the financial sector with money laundering, the AI sector does not have established entities and resources for identifying and policing cloud compute access. However, following the model of the financial sector, government could consider establishing a whistleblower scheme with a financial reward to incentivize employees to report suspected violations. The scheme should be established to allow for flexible enforcement tools, including the ability to direct a compute provider to pause or cease providing compute to any entity found to be in violation of the scheme. Enforcement should also be paired with the effective use of the Bureau of Industry and Security's (BIS) Export Administration Regulations.
Where cloud compute is identified as supporting military, intelligence, or security end uses that conflict with US interests, BIS could then introduce export controls to prevent US compute providers from providing this service.79 The KYC scheme should also be enforced through the application of penalties. To avoid a culture of non-compliance like that seen in the early days of the Bank Secrecy Act, regulators should demonstrate a willingness to issue non-trivial penalties for deliberate non-compliance, in addition to playing a regulatory coaching role. Depending on how the threat context evolves, there may also be value in dedicating government resources to actively seek out and investigate potential violations.80

Footnote 79: Dohmen et al., _Controlling Access to Compute via the Cloud_.

Footnote 80: This could involve scouring the darkweb for evidence of undisclosed/underhand resale of substantial AI compute power; and/or intelligence gathering and analysis, depending on the nature of the identified risks.

### Establishing government capacity for engagement, regulation, and enforcement

To co-design, implement, administer, update, and enforce a KYC scheme for advanced AI cloud compute, the US government should establish a new unit in the Department of Commerce that is responsible for AI regulatory policy. This unit should collaborate with and leverage expertise from other relevant entities, including the BIS, the National Institute of Standards and Technology, FinCEN, and AI industry and researchers. It should also work closely with counterparts at the Department of Defense, to ensure that national security needs are met, as well as the Department of State, to help promulgate the scheme internationally.

To keep the scheme up to date and geared towards key risks, the introduction of a KYC scheme should be accompanied by a process for regular evaluation and iteration. Updates should be informed by consultation with relevant agencies, stakeholder industries and international partners. The US National AI Advisory Committee (NAIAC), established in 2022, could play a key role in providing cross-sectoral advice in this process. While the NAIAC has a broad remit that includes workforce, ethical, and leadership issues, it has a key role in ensuring ongoing US AI leadership, coordination across civil and security agencies, and matters relating to oversight of AI systems.81 To ensure that technical expertise continues to inform updates, the NAIAC could establish a working group on AI compute controls that brings together private, public, and academic sector expertise.

Footnote 81: _National Artificial Intelligence Advisory Committee Charter_.

### Engaging and cooperating with international partners

While US dominance of cloud compute will render even domestic KYC globally significant, international support for the scheme will help maximize its effectiveness, particularly in the longer term. As the leader in cloud service provision, the US is uniquely positioned to shape global regulations and standards for advanced AI cloud compute. There are an estimated 335 to 1,325 data centers globally with a capacity above 10 MW - enough to enable a large AI training run - with the majority owned by US technology leaders.82 However, only a minority of them actually host AI-specific compute, and the exact numbers are unknown; KYC could assist in developing a clearer picture. Nevertheless, unilateral US action will be limited in shaping the market long term.
Operating alone could give rise to adverse outcomes, diminishing the attractiveness of US compute providers and pushing those who place a greater premium on privacy towards more lightly regulated environments. Over time, this could degrade US leadership in compute and AI.

Footnote 82: Pilz and Heim, _Compute at Scale_.

The US should therefore work with key international partners on an aligned KYC scheme for advanced AI cloud compute. While nations' and companies' adherence to the scheme could ultimately be enforced via the threat of withholding US chip hardware exports, such a scheme will be most effective if supported by goodwill and shared purpose. The US should therefore engage first diplomatically, with a particular focus on countries with significant data center architecture. These would include European countries, which are estimated to host approximately 25 percent of all data centers globally,83 and Japan, with its world-leading supercomputer capabilities.84 Engagement with a broader set of likeminded countries will also help to add momentum to the international initiative. For information on these countries' positions on AI regulation, refer to Box 5.

Footnote 83: Pilz and Heim.

Footnote 84: _June 2022 - Top 500: the List_.

In addition to direct outreach on the shared risks, the US should leverage partnerships with cross-border companies that can internally advocate for a KYC regime. For example, because Microsoft has already proposed implementing a KYC model for advanced AI,85 it could help engage partners like the UK that are seeking to prioritize innovation and light-touch regulation.

Footnote 85: Smith, _How do we best govern AI?_.

Following engagement with key partners, the US should work to establish longer-term international architecture. Like the FATF in the financial sector, an intergovernmental organization for AI compute controls could help share risk information and align standards and best practices. Given a focus on tactical and implementation matters, this organization would be separate from and complementary to existing international AI risk groupings, like the OECD AI Policy Observatory.86

Footnote 86: Wyckoff, _GlobalPolicy.AI unites the work of eight international organisations on artificial intelligence - OECD.AI_.

**Box 5: Current positions on AI regulation and implications for US KYC engagement**

**Europe**

The European Union's (EU) approach to AI regulation will be shaped by the EU AI Act. On 14 June 2023, the draft text of the law was agreed upon between the European Parliament and the EU's executive branch, with negotiations beginning with EU countries on the final form of the law.87 Amongst other measures, the draft law prohibits some specific AI uses (like real-time facial recognition), introduces safety requirements for high-risk use cases, and imposes risk assessments for foundation models.88 The draft law does not currently introduce specific risk requirements based on the level of compute used.89 Given the extensive negotiation and law-making process involved in the EU AI Act, the EU and its members may be hesitant to enter further discussions on additional restrictions. Yet the passage of the AI Act would signify a focus on safety and security that could be further enhanced through cloud KYC.
Given the significant regulatory burden of the EU AI Act, emphasizing the minimal compliance costs of KYC will be key. The EU member state of the Netherlands has also shown a willingness to align with US efforts to curb dangerous AI proliferation to China.[90]

**Japan**

Like the Netherlands, Japan has already engaged closely with the US on efforts to curb China's access to semiconductor technologies, which impact AI development.[91] However, Japan's AI policies to date have focused on maximizing AI's positive impact, rather than concentrating on risks. Japan considers further regulation not yet necessary.[92] Japan may therefore be more motivated by the national security benefits of KYC, rather than broader public safety concerns.

**UK**

The UK's 2023 policy paper _A pro-innovation approach to AI regulation_ cautions against 'placing unnecessary regulatory burdens on those deploying AI,' and instead focuses on the context in which AI is deployed.[93] Nevertheless, the UK is actively considering the role of measuring compute in the governance of foundation models,[94] and is a close strategic partner of the US. Given its focus on making innovation easier and strengthening its leadership in AI, the UK will be particularly sensitive to the regulatory impacts of a KYC scheme.

**Canada**

On 16 June 2022, Canada introduced the Artificial Intelligence and Data Act into Parliament.[95] The Act seeks to ensure that high-impact AI systems meet safety and human rights standards, to enable enforcement and policy to keep up with technology developments, and to prohibit reckless and malicious uses of AI.[96] One of the principles for managing high-risk systems is human oversight and monitoring, including in how systems are designed and trained.[97] KYC could be used to support this effort.

**Australia**

Given its relatively small market size, Australia is aware that its 'ability to take advantage of AI supplied globally and support the growth of AI in Australia will be impacted by the extent to which Australia's responses are consistent with responses overseas.'[98] From May to July 2023, the Australian government undertook a public consultation process on how the government could support safe and responsible AI.

## 3 Recommendations

The US Department of Commerce should work with compute providers, industry stakeholders and AI researchers to develop a domestic KYC scheme for advanced AI cloud compute. This should include:

* defining the threshold of AI compute at which the scheme would apply, capturing frontier AI models and the risk of bolstering an adversarial military's capabilities
* introducing requirements for compute providers above that threshold to verify customers' identities, keep records, report to government any entities that match government-specified 'high-risk' profiles, and implement controls
* establishing a government unit for implementation, monitoring, and enforcement
* establishing a clear process for evaluation and updates to the scheme.

The Department of Commerce should update rules affecting the Export Administration Regulations to explicitly prohibit the provision of above-threshold compute to entities on the Entity List without a license. This would help prevent US resources from being used by entities acting contrary to US national security interests, while still enabling US industry leadership. The KYC scheme would provide the mechanism for US companies to effectively implement these controls.
The Department of Commerce should implement this scheme under the authority of a Presidential Executive Order, potentially leveraging existing efforts. President Trump's 2021 Executive Order _Taking Additional Steps to Address the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities_ charged the Department of Commerce with introducing an obligation for providers of US Infrastructure-as-a-Service (IaaS) to verify and record the identity of persons applying for accounts in order to combat malicious cyber activity.100 More recently, the 2023 National Cybersecurity Strategy commits the Administration to implementing this Executive Order.101 While the scope of this measure is broader - applying to all IaaS - and its exact form unclear, there is an opportunity to combine these schemes to create a tiered approach: light-touch KYC could be implemented across the board, with comprehensive due diligence obligations applied at the frontier-model and strategic-risk threshold.

Should the US Administration consider that this existing authority is insufficient, the President should consider issuing a new Executive Order explicitly granting authority for the KYC scheme. This additional authority would likely be needed to broaden the scheme to capture non-CSP entities, such as compute hardware providers and in-house use of compute for AI development.

Footnote 100: The White House, _Executive Order on Taking Additional Steps to Address the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities_.

Footnote 101: The White House, _National Cybersecurity Strategy_.

The Department of State should work closely with the Department of Commerce to build international support for an advanced AI cloud compute KYC scheme. This should include partnering with entities with significant data center capability and developing new international architecture to support the scheme.

## 4 Conclusion

Amidst calls for more active management of AI, a KYC scheme will address a gap in export controls, and provide the US government with the groundwork for greater oversight and control over risky AI developments. While the scheme will not address all AI risks, it will allow for a more precise and flexible approach to controlling risks from frontier AI models and unwanted AI proliferation. It will also increase government visibility over AI and allow for more proactive management. When scaled internationally, this scheme has the potential to support global AI governance architecture. Essential to the scheme's success is genuine co-design with industry and experts. This partnership will be key to minimizing harm while maximizing the benefits of AI.

## Appendix

### Practical Considerations for Implementing KYC based on Compute Threshold

While compute serves as a crucial instrument in the proposed Know Your Customer (KYC) scheme, it is important to acknowledge its limitations and the associated open questions that might influence its effective implementation.

**First, the total amount of compute utilized across various models and tasks is inherently dynamic and not pre-defined.** The aggregate compute a system consumes accumulates over time into a total FLOP count that becomes evident only as the compute is used. This dynamic accumulation implies that identifying in advance which customer will surpass the threshold is challenging.102

Footnote 102: Furthermore, it is an oversimplification to assume that all the rented compute contributes solely to the system's training. The development process usually involves iterative refinements, so the total compute usage could be two to four times the actual training compute.
However, the practical scarcity of GPUs necessitates advance requests for access, especially considering the substantial number required to meet the discussed thresholds (thousands of GPUs). Developers also prefer a large, interconnected cluster of GPUs rather than disparate units distributed across different clusters or data centers, as this facilitates the high-bandwidth connections essential for AI development and deployment. Moreover, the level of compute under consideration, amounting to two-digit dollar millions, usually requires negotiations with sales teams, rather than on-demand purchases (where CSPs might already undertake something similar to a KYC to make sure that they are being paid). Therefore, we suggest that, practically speaking, CSPs are already aware of large compute orders and can then enact the required KYC.

In scenarios where exact compute needs are indeterminate, continuous monitoring of chip-hour accumulation is important. Warnings should be set at varying levels, such as 50% and 80% of the threshold, to maintain awareness as a customer approaches the limit. Upon meeting the threshold, KYC procedures should be initiated, and if not previously completed, access should be terminated.

Given the inherent characteristics of computational requirements in AI, we anticipate a bimodal distribution where most actors either utilize minimal amounts below the threshold or significantly larger quantities. In most instances, users will either surpass the threshold from the outset or won't come remotely close to reaching it. This expectation is attributed to the order-of-magnitude differences in compute usage across AI systems, the connected monetary costs, and the resulting small number of frontier AI developers. Consequently, the application of an indicator value to establish a lower boundary, or warning level, proves to be particularly effective in this context. It ensures that the systems predominantly captured are those that either exceed or are in close proximity to the set threshold.

**Second, the limitations in discerning specific insights about user (AI) activities**, such as the phase of the AI lifecycle or model architecture, are important considerations for the implementation of this scheme. It is critical to leverage instruments that preserve the privacy and integrity of user workloads. Despite these limitations, we maintain optimism that the available tools and metrics are sufficiently indicative and could be expanded by instigating further inquiries and assurances from the customer without compromising their privacy.

Compute providers can easily access data related to total compute usage, such as the number of chip hours and the type of chip, as these are required for billing. However, the intrinsic privacy and confidentiality clauses of the cloud business prohibit providers from gaining insights into the precise activities of the customer at the system level. This restriction necessitates reliance on high-level, abstract metrics, which are the only permissible indicators available for monitoring. Our approach leverages available high-level metrics, which serve as preliminary indicators prompting further checks, without delving into more detailed aspects of user workloads.
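A minimal sketch of the warning-level logic described above follows. The threshold, warning fractions, and utilization assumption are placeholders, and the chip-hour-to-FLOP conversion is the same rough approximation used in the earlier sketch.

```python
# Sketch of continuous chip-hour monitoring with warning levels at 50%
# and 80% of the threshold. Every number here is an illustrative placeholder.

KYC_THRESHOLD_FLOP = 1e25
WARNING_LEVELS = (0.5, 0.8)   # fractions of the threshold

def flop_from_chip_hours(chip_hours: float, peak_flop_s: float,
                         utilization: float = 0.4) -> float:
    """Approximate accumulated FLOP from billed chip-hours."""
    return chip_hours * 3600.0 * peak_flop_s * utilization

def monitoring_action(cumulative_chip_hours: float, peak_flop_s: float,
                      kyc_complete: bool) -> str:
    used = flop_from_chip_hours(cumulative_chip_hours, peak_flop_s)
    fraction = used / KYC_THRESHOLD_FLOP
    if fraction >= 1.0:
        # Threshold met: access continues only if KYC was completed earlier.
        return "continue service" if kyc_complete else "terminate access"
    if fraction >= WARNING_LEVELS[1]:
        return "80% warning: initiate KYC before the threshold is reached"
    if fraction >= WARNING_LEVELS[0]:
        return "50% warning: flag account for review"
    return "below warning levels"
```

Because the provider already records chip type and billed chip-hours, this check relies only on billing-grade data and requires no system-level visibility into customer workloads.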
Potential concepts like proof of training, or the yet-to-be-explored proof of inference, could allow customers to validate their compute usage without requiring providers to access more information than total compute usage. In future work, we plan to investigate how various cluster-level metrics, extending beyond chip-hours, can be leveraged by cloud providers to facilitate enhanced oversight while preserving user privacy. For instance, exploring metrics like network traffic patterns could be instrumental in differentiating between AI training and inference processes. We also urge the wider research community to investigate these issues.

In this paper, we are primarily addressing the governance of AI systems development. Questions about potential large-scale deployment, i.e. inference compute, are outside the scope of this particular discussion. However, the oversight of deployment compute might also be a future policy tool for regulating AI deployment (Appendix A of O'Brien, Ee, and Williams, _Deployment Corrections: An Incident Response Framework for Frontier AI Models_; Brundage et al., forthcoming).

The limitations in acquiring detailed insights necessitate a balanced approach, focusing on high-level indications without compromising user privacy. The available metrics, which extend beyond type of chip and chip-hours, are considered adequate for initial insights and prompt actions. Further exploration of these topics is necessary, and we advocate for continued research to refine and advance the methodologies for achieving effective oversight while maintaining user trust and privacy.

## Acknowledgements

We are grateful for valuable feedback and suggestions from Markus Anderljung, Tim Fist, Konstantin Pilz, Alan Chan, and the rest of the GovAI team. All errors are our own.